ArchWiki - User contributions [en] (user Cmsigler), retrieved 2024-03-28T14:08:58Z, MediaWiki 1.41.0

[https://wiki.archlinux.org/index.php?title=Talk:Arch_User_Repository&diff=758490 Talk:Arch User Repository] 2022-11-30T13:37:04Z
<p>Cmsigler: Reply</p>
<hr />
<div>== contribute to existing package ==<br />
What is the best way to contribute to an existing AUR package? I cloned one and tried to push, but it gave me a permission error. --[[User:Soloturn|Soloturn]] ([[User talk:Soloturn|talk]]) 16:04, 28 January 2019 (UTC)<br />
<br />
:Users are not allowed to modify something owned by another user. It's no different from cloning a GitHub repository and trying to push to it. The equivalent of submitting an issue would be leaving a comment with a patch file. The AUR platform does offer collaboration features -- you may request that a maintainer grant you push access by adding your name as a co-maintainer. If the package is broken or out of date, see [[Arch User Repository#Foo in the AUR is outdated; what should I do?]]<br />
<br />
:This is possibly something that we should make clear in a FAQ entry. -- [[User:Eschwartz|Eschwartz]] ([[User talk:Eschwartz|talk]]) 19:49, 28 January 2019 (UTC)<br />
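For illustration, a minimal sketch of the comment-with-a-patch route ({{ic|somepkg}} is a hypothetical package name; the clone URL scheme is the real AUR one):

```shell
# Clone the package's AUR repository, make your change locally,
# and turn it into a patch you can post in the package's comments.
git clone https://aur.archlinux.org/somepkg.git
cd somepkg
# ...edit the PKGBUILD...
git diff > somepkg.patch   # post this patch (or a link to it) in an AUR comment
```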
<br />
::I was thinking about this while writing a [[Talk:Arch User Repository#Proposal: Other requests|proposal regarding "Other requests"]]. It is possible to request a package be disowned with "Orphan"; why not add "Co-maintain" to send a request to ask for permission to assist with a package's maintenance? Of course, it would not be necessary to send that request to the mailing list, and there's always the AUR comments or the forums for users to contact a maintainer otherwise; but having the feature built into the AUR would allow us to add a fourth subsection here to recommend ground rules and possibly expedite the process of adding co-maintainers when packagers are interested in doing so. [[User:Quequotion|quequotion]] ([[User talk:Quequotion|talk]]) 14:45, 6 February 2019 (UTC)<br />
<br />
:::Rather than an FAQ, maybe add a bullet point under "Maintaining packages". Question: Who has the right to use "Manage Co-Maintainers"? [[User:Quequotion|quequotion]] ([[User talk:Quequotion|talk]]) 15:07, 27 May 2019 (UTC)<br />
<br />
::::<s>Closing proposal below, now implemented</s>. Leaving discussion open: in the future, we may want to break long bulleted lists like "Rules of Submission" and "Maintaining Packages" into subsections. This would make it more convenient to link to specific points in the list, which in turn would be convenient if we still want an FAQ such as "How can I contribute to an existing package?" (which should link to adopting orphaned packages, commenting on a package, and adding co-maintainers). [[User:Quequotion|quequotion]] ([[User talk:Quequotion|talk]]) 09:31, 4 June 2019 (UTC)<br />
<br />
:::::Since the reversion, documenting how to add co-maintainers has been absorbed into the proposal for [[User talk:Quequotion/AUR submission guidelines#Maintaining packages|AUR submission guidelines]]. [[User:Quequotion|quequotion]] ([[User talk:Quequotion|talk]]) 12:07, 21 July 2019 (UTC)<br />
<br />
=== Proposal: How can I contribute to an existing package? ===<br />
{{Comment|No longer clear where this question would fit--splitting the content of the page between a "maintainer-oriented" page and a "user-oriented" page overlooks the fact that AUR package maintainers and AUR users ''may be the same people''.}}<br />
If the package is [[User:Quequotion/AUR submission guidelines#Orphan|orphaned]] you may [[User:Quequotion/AUR submission guidelines#Maintaining packages|adopt it]], otherwise you may post your idea [[User:Quequotion/Arch User Repository#Commenting on packages|in its comments]] or ask to be [[User:Quequotion/AUR submission guidelines#Maintaining packages|appointed as a co-maintainer]].<br />
<br />
== Integrate FAQ content ==<br />
<br />
Truncate FAQs' answers as much as possible, linking to an appropriate page or (proposed) section of the AUR page. Note that some content must be transferred to the [[AUR submission guidelines]].<br />
<br />
If you'd like to discuss the proposal as a whole, do so in this header; use [[Template:Comment|comments]] within individual subsections to discuss them. [[User:Quequotion|quequotion]] ([[User talk:Quequotion|talk]]) 04:42, 11 June 2019 (UTC)<br />
<br />
If you'd like to see how this page should look, and get a history without other changes, I've restored its [[User:Quequotion/Arch User Repository|full page draft]]. [[User:Quequotion|quequotion]] ([[User talk:Quequotion|talk]]) 10:14, 11 June 2019 (UTC)<br />
<br />
:There are a lot of changes to review; so I've compiled a rundown of them [[User talk:Quequotion/Arch User Repository#Breakdown_of_changes|on the talk page of the draft]]. [[User:Quequotion|quequotion]] ([[User talk:Quequotion|talk]]) 13:45, 14 June 2019 (UTC)<br />
<br />
:Please keep drafts on a dedicated page. ([[Special:Diff/575147]]) Closing the sections below. -- [[User:Alad|Alad]] ([[User talk:Alad|talk]]) 13:18, 11 June 2019 (UTC)<br />
<br />
:Update on these proposals: The FAQ integration for [[Arch User Repository]] is more or less completed, however some of the FAQ on this page contain maintainer-oriented content that is better suited to subsections of [[AUR submission guidelines]]. I am aware that proposals for a page should be discussed only on that page's talk page. These proposals date back to when these two pages existed as one, and therefore concern both: several of the FAQ from ''this'' page need to be integrated into subsections of ''that'' page. To ease the review process, I compiled a [[User talk:Quequotion/AUR submission guidelines#Breakdown of changes|breakdown of the proposed changes]]. [[User:Quequotion|quequotion]] ([[User talk:Quequotion|talk]]) 17:13, 5 January 2021 (UTC)<br />
<br />
== Split FAQ content to Arch User Repository/FAQ page. ==<br />
<br />
Have a look at the ratio of [https://i.postimg.cc/MHHcW5b3/aur-slashfaq.png FAQ to page content].<br />
<br />
I like the idea of using ''Article''/FAQ for these. [[User:Quequotion|quequotion]] ([[User talk:Quequotion|talk]]) 01:42, 18 June 2019 (UTC)<br />
<br />
: This makes sense to me. [[User:Jasonwryan|Jasonwryan]] ([[User talk:Jasonwryan|talk]]) 02:16, 18 June 2019 (UTC)<br />
<br />
:: An alternative which doesn't require a new page is merging this to [[FAQ]]. An issue with this approach (presented on IRC) however is that adding AUR content to the "official" FAQ may add some notion of supported-ness for the AUR (and its content in specific). A way around this would be to include the "AUR packages are user produced content. Any use of the provided files is at your own risk." warning as well. -- [[User:Alad|Alad]] ([[User talk:Alad|talk]]) 07:54, 24 June 2019 (UTC)<br />
<br />
::: This would also add a significant amount of content to the official FAQ page, which might be seen as clutter. However, to be honest I'm more interested in ''having this FAQ relocated'' than ''where it ultimately goes''. Also, not sure if I need to clarify, but this is not exclusive of the [[#Integrate FAQ content]] proposals; it would be in the best interest of wherever the FAQ ends up that it is as small as possible when it gets there. [[User:Quequotion|quequotion]] ([[User talk:Quequotion|talk]]) 13:10, 21 July 2019 (UTC)<br />
<br />
== Meaning of the Popularity score? ==<br />
<br />
Can someone explain what the meaning of the Popularity score is, and how it's calculated? And maybe add that to the wiki?<br />
It doesn't seem to be derived from the number of votes, as some packages with more votes have a lower popularity than others with fewer votes.<br />
Maybe it's number of installs? Maybe it's time dependent, so recent votes only temporarily increase popularity?<br />
<br />
I got curious about this because helpers like yay prominently display this value, but I haven't seen it explained in yay's documentation, or here. Or maybe I skimmed them too fast.<br />
<br />
[[User:Biowaste|Biowaste]] ([[User talk:Biowaste|talk]]) 00:02, 2 November 2019 (UTC)<br />
<br />
:<s>This is one of many issues my proposal (Integrate FAQ content) for this page and the AUR submission guidelines page handles. If you dig around on the current page, you may find what you are looking for--or if someone could approve the changes we could have the information appropriately documented under [[User:Quequotion/Arch User Repository#Feedback|an improved feedback section]] here, and referenced in [[User:Quequotion/AUR submission guidelines#Promoting packages to the community repository|a section about promoting packages to community]] on the AUR submission guidelines page. [[User:Quequotion|quequotion]] ([[User talk:Quequotion|talk]]) 04:23, 2 November 2019 (UTC)</s><br />
<br />
::Your proposal does not answer the question "What is the meaning of the Popularity score?" at all, so please stop pretending that it is a universal solution for every issue related to AUR documentation.<br />
::On the [https://aur.archlinux.org/packages/ AUR package list] page, the Popularity column is suffixed with a "?" symbol which has an HTML tooltip explaining how the values are calculated.<br />
::-- [[User:Lahwaacz|Lahwaacz]] ([[User talk:Lahwaacz|talk]]) 10:18, 2 November 2019 (UTC)<br />
::: I see. I mistook that this was about votes. Popularity score is a different thing. My mistake. [[User:Quequotion|quequotion]] ([[User talk:Quequotion|talk]]) 15:22, 2 November 2019 (UTC)<br />
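For reference, the "?" tooltip describes popularity as an age-weighted sum of votes. A minimal sketch of that idea, assuming a decay factor of 0.98 per day since each vote was cast (the vote ages below are made-up illustrative data, not real AUR numbers):

```shell
# Each vote contributes 0.98^(days since the vote), so old votes fade away.
awk 'BEGIN {
  split("0 10 100", age)              # ages in days of three hypothetical votes
  for (i in age) pop += 0.98 ^ age[i]
  printf "%.2f\n", pop                # prints 1.95: the recent vote dominates
}'
```

This would also explain the observation above: a package with many old votes can score lower than one with fewer, more recent votes.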
<br />
== Improve Comment syntax section ==<br />
<br />
It would be best to give examples of comment syntax right there. Currently there are six links in that small section, which costs users a lot of time and may also be misleading.<br />
<br />
The note that "this implementation has some occasional differences" will be needed much less often than instructions on how to simply mark up some code.<br />
I suggest the main information go first, and examples would be good.<br />
<br />
It would be good if the comment syntax were given directly on the AUR site, but at least here one should be able to navigate very easily to the basic comment syntax.<br />
[[User:Ynikitenko|Ynikitenko]] ([[User talk:Ynikitenko|talk]]) 15:52, 17 November 2019 (UTC)<br />
:In particular, those differences should be documented, such as AUR-specific features. For example, it is noted that references to git commit hashes will be linkified, but not that this means specifically ''12-digit'' hash references ([https://aur.archlinux.org/packages/ido-ubuntu/#comment-710922 example]). No idea what specific format is expected for Flyspray tickets. [[User:Quequotion|quequotion]] ([[User talk:Quequotion|talk]]) 06:52, 19 November 2019 (UTC)<br />
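As an illustration of the observed behavior (the exact pattern aurweb applies is an assumption here, not something documented):

```shell
# Only the bare 12-hex-digit reference matches; shorter or longer ones do not.
printf '%s\n' 1234567890ab 1234567 1234567890abcdef | grep -E '^[0-9a-f]{12}$'
```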
<br />
== Add context about package ecosystem ==<br />
I recently had the content below [https://wiki.archlinux.org/index.php?title=Arch_User_Repository&type=revision&diff=694215&oldid=694213 reverted].<br />
<br />
The justification was: "this is irrelevant to the technical description of the AUR"<br />
<br />
I see nothing in [[Help:Style]] limiting the content of articles to being strictly technical, nor do I believe adding such a limit would be advisable. People go to pages in the Arch wiki wanting to learn about that topic as it relates to Arch. They want technical content and context. What I've included is an extremely important detail for someone evaluating the merits of the Arch package system! Note that I just spent the last couple of hours evaluating other Linux distros for the wiki, and I thoroughly appreciate when projects are smart about including these sorts of details. That's specifically why I thought to include this now.<br />
[[User:Einr|Einr]] ([[User talk:Einr|talk]]) 07:01, 5 September 2021 (UTC)<br />
<br />
:This is not an ad board. This page describes what the AUR is and how to use it, your sentence does not help with any of these. Furthermore, your note is misleading because it indicates a combination with official repositories, which are managed independently and contain incomparably higher quality packages. The repository statistics don't consider package quality at all, in fact AUR is one of the few (if not the only one) community-driven and ''unsupported'' repositories in the top 10. — [[User:Lahwaacz|Lahwaacz]] ([[User talk:Lahwaacz|talk]]) 07:39, 5 September 2021 (UTC)<br />
<br />
::Regarding "may mislead readers into believing it is combined with the official repositories": happy to rephrase the statement to make it explicit that this is not the case, if that suits you. [[User:Einr|Einr]] ([[User talk:Einr|talk]]) 08:07, 5 September 2021 (UTC)<br />
::Regarding, "Not an ad board." The open source world is built on open sharing. What I posted advertises nothing and highlights only that Arch, collectively, has a broader package coverage net than nearly anything else out there almost no matter how it's measured. People looking for '''''a''''' package know what they want and only want to know that it's available. [[User:Einr|Einr]] ([[User talk:Einr|talk]]) 08:07, 5 September 2021 (UTC)<br />
::Regarding "contrast to incomparably higher quality packages": the contrast in quality, and the implied difference in security, between the official repositories and the AUR seems pretty obvious and well stated already. Also happy to make that more explicit in the statement if you believe it needs further reiterating. I think it's covered well enough in these pages, but I'm OK with that as a compromise. [[User:Einr|Einr]] ([[User talk:Einr|talk]]) 08:07, 5 September 2021 (UTC)<br />
<br />
=== Reverted content ===<br />
Combined with the [https://repology.org/repository/arch official repositories], Arch is proud to be home to one of the [https://repology.org/repository/aur largest and most up-to-date] package repository ecosystems [https://entvibes.com/archwiki/Arch%20and%20AUR%20on%20Repology%20-%20highlighted%20-%202021-09-05%20162239.png in the Linux world].<br />
<br />
== Tips and Tricks ==<br />
<br />
Does this page really need a "Tips and Tricks" section? It seems a bit out-of-scope to me, especially the particular tip it was created for ([[Special:Diff/706567]]).<br />
<br />
This might fit more appropriately as a [[Template:Tip]] appended to [[Arch User Repository#Build the package]]. It also needs some proofreading. [[User:Quequotion|quequotion]] ([[User talk:Quequotion|talk]]) 13:28, 14 January 2022 (UTC)<br />
<br />
==== Proposal ====<br />
{{Tip|Use {{ic|git clean -dfX}} in the build directory to delete all untracked files, such as previously built package files.}}<br />
<br />
<br />
:I like the idea of moving it. Having it as a tip directly after the description of {{ic|--clean}} makes sense. [[User:Progandy|Progandy]] ([[User talk:Progandy|talk]]) 14:43, 16 January 2022 (UTC)<br />
<br />
:: As the writer of this, I am not against moving it.<br />
:: Still, if more tips and tricks turn up, I think a section for them might be useful; until then it could be removed.<br />
:: [[User:G3ro|G3ro]] ([[User talk:G3ro|talk]]) 16:56, 16 January 2022 (UTC)<br />
<br />
::: If more general tips and tricks related to the AUR come up that don't fit in any existing section, there may be a use for such a section in the future. This one fits well after the explanation of building a package. [[User:Quequotion|quequotion]] ([[User talk:Quequotion|talk]]) 18:56, 16 January 2022 (UTC)<br />
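Since the tip's flags are easy to misread, here is a self-contained demonstration of what {{ic|git clean -dfX}} actually deletes: with {{ic|-X}}, only files that git ignores (all file names below are made up):

```shell
# Scratch repo simulating a PKGBUILD checkout with build leftovers.
dir=$(mktemp -d) && cd "$dir"
git init -q
printf '*.pkg.tar.zst\nsrc/\n' > .gitignore    # mark leftovers as ignored
touch PKGBUILD somepkg-1.0-1-x86_64.pkg.tar.zst
mkdir src && touch src/build.log
git clean -dfX    # -d dirs, -f force, -X only files ignored by git
ls                # prints "PKGBUILD": untracked-but-not-ignored files survive
```

Note that {{ic|-X}} removes nothing unless a .gitignore covers the leftovers; {{ic|git clean -dfx}} (lowercase x) is the variant that removes all untracked files, ignored or not.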
<br />
== Using extra-x86_64-build from devtools ==<br />
<br />
@Scimmia,<br />
<br />
I'm not able to confirm, or understand, your summary on reverting my addition of a note, namely, "Covered in previous paragraph." There is no mention of devtools or extra-x86_64-build anywhere in this wiki page(?). I guess I'm being thick as usual.... TIA :)<br />
<br />
Backstory: For several years I have used my own custom bash script to test build my AUR pkgs in a clean chroot. Recently in the forums Slithery and graysky pointed me to two existing tools, the first of which is from devtools. I found that this wasn't mentioned anywhere in the AUR wiki page or DeveloperWiki Building in a clean chroot page.<br />
<br />
[[User:Cmsigler|Cmsigler]] ([[User talk:Cmsigler|talk]]) 15:34, 29 November 2022 (UTC)<br />
<br />
:As I said, it's covered by the previous paragraph. Click the link. [[User:Scimmia|Scimmia]] ([[User talk:Scimmia|talk]]) 15:37, 29 November 2022 (UTC)<br />
<br />
::I don't think I'm wrong in saying that, for AUR maintainers, the DeveloperWiki Building in a clean chroot page does not reveal the usefulness of extra-x86_64-build for their use case. That's what led me astray for about 3 years. There is a technically correct (the best kind) way to build in a clean chroot, but there is no mention of its usefulness for AUR pkgs.<br />
<br />
::I have posted on the corresponding Talk page asking for a short note so this is rather obvious for AUR maints. I hope that makes some sense. Thanks :) [[User:Cmsigler|Cmsigler]] ([[User talk:Cmsigler|talk]]) 16:36, 29 November 2022 (UTC)<br />
<br />
:::I don't see the issue here - [[DeveloperWiki:Building in a clean chroot]] mentions {{ic|extra-x86_64-build}} in the second paragraph. -- [[User:Alad|Alad]] ([[User talk:Alad|talk]]) 13:11, 30 November 2022 (UTC)<br />
<br />
::::@Alad -- Perhaps it's just me being dense, then?... The table shows {{ic|extra-x86_64-build}} as having target repositories extra/community. I had no idea it worked for AUR packages; reading this, I misunderstood it to work only for packages that live in the repositories. It would have helped me if the AUR wiki page mentioned that it also works for any pkg, including those in the AUR. I hope that explains it a little better. Thanks again for these replies :) [[User:Cmsigler|Cmsigler]] ([[User talk:Cmsigler|talk]]) 13:37, 30 November 2022 (UTC)</div>

[https://wiki.archlinux.org/index.php?title=DeveloperWiki_talk:Building_in_a_clean_chroot&diff=758489 DeveloperWiki talk:Building in a clean chroot] 2022-11-30T13:31:02Z
<p>Cmsigler: Reply</p>
<hr />
<div>== Deleting a chroot ==<br />
<br />
It's not written on the page, so I'll write it here: just delete the $CHROOT folder (unless it's on btrfs). [[User:Tharbad|Tharbad]] ([[User talk:Tharbad|talk]]) 03:05, 12 May 2019 (UTC)<br />
<br />
== More info needed RE: archbuild ==<br />
<br />
With the semi-recent changes to chroot building and the addition of the '''archbuild''' convenience script, using a custom repo within your build chroot is no longer supported directly. You must instead create (or symlink) a pacman.conf at <code>/usr/share/devtools/pacman-<some_name>.conf</code> and then run <code><some_name>-x86_64-build</code> to build packages in a chroot that has access to your custom repo.<br />
<br />
For more background, see [https://old.reddit.com/r/archlinux/comments/e70fjf/makechrootpkg_no_longer_seems_to_support_local/ this reddit post] and [https://old.reddit.com/r/archlinux/comments/e70fjf/makechrootpkg_no_longer_seems_to_support_local/fa10s82/ this response thread].<br />
<br />
[[User:Terminalmage|Terminalmage]] ([[User talk:Terminalmage|talk]]) 01:39, 10 December 2019 (UTC)<br />
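A sketch of the steps described above, assuming the devtools layout of that era still applies ({{ic|custom}} is a placeholder profile name, and the repository section contents are up to you):

```shell
# Create a devtools profile whose pacman.conf knows about your local repo.
sudo cp /usr/share/devtools/pacman-extra.conf /usr/share/devtools/pacman-custom.conf
# ...edit pacman-custom.conf and add your [custom] repository section...
# archbuild selects the profile from the name it is invoked under:
sudo ln -s /usr/bin/archbuild /usr/local/bin/custom-x86_64-build
custom-x86_64-build    # builds in a chroot that can install from [custom]
```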
<br />
== <s>Convenience way for AUR packages</s> ==<br />
<br />
The section “Convenience way” does not mention building AUR packages. I guess extra-x86_64-build would work for these, too, or am I wrong? [[User:Buzo|Buzo]] ([[User talk:Buzo|talk]]) 17:39, 12 May 2020 (UTC)<br />
<br />
:It works on anything with a {{ic|PKGBUILD}}. -- [[User:Alad|Alad]] ([[User talk:Alad|talk]]) 14:42, 22 February 2022 (UTC)<br />
<br />
== <s>"Setting up a chroot" in "Classic way" is missing the base package/group</s> ==<br />
<br />
$ mkarchroot $CHROOT/root base-devel<br />
should be<br />
$ mkarchroot $CHROOT/root base base-devel<br />
<br />
:No, it actually should not. Containers have no requirements to install {{ic|base}}. -- [[User:Alad|Alad]] ([[User talk:Alad|talk]]) 14:42, 22 February 2022 (UTC)<br />
<br />
:: Wouldn't that mean PKGBUILDs need packages in base as dependencies? Because otherwise they fail to build in the chroot, right? [[User:Rac27|Rac27]] ([[User talk:Rac27|talk]]) 14:54, 22 February 2022 (UTC)<br />
<br />
::: Right, unless those dependencies are pulled in by other packages. -- [[User:Alad|Alad]] ([[User talk:Alad|talk]]) 09:44, 23 February 2022 (UTC)<br />
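To make the exchange concrete, the classic flow being discussed looks roughly like this (matching the commands quoted above; {{ic|$CHROOT}} is wherever you keep the chroot):

```shell
# Create the chroot once, then build each package in a fresh copy of it.
CHROOT=$HOME/chroot
mkdir -p "$CHROOT"
mkarchroot "$CHROOT/root" base-devel   # no "base": containers do not require it
makechrootpkg -c -r "$CHROOT"          # run from a directory with a PKGBUILD
```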
<br />
== Note about BTRFS subvolumes ==<br />
<br />
There is a note attached underneath the first `mkarchroot` command that says:<br />
<br />
On btrfs, the chroot is created as a subvolume, so you have to remove it by removing the subvolume by running btrfs subvolume delete $CHROOT/root as root.<br />
<br />
According to this: https://wiki.archlinux.org/title/Btrfs#Deleting_a_subvolume , BTRFS subvolumes can just be removed normally with `rmdir` or `rm`. Should this note be removed?<br />
<br />
{{Unsigned|2022-11-21T13:29:06|Saltedcoffii}}<br />
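For comparison, the two removal methods in question, assuming {{ic|$CHROOT}} as set up earlier (run as root; how plain {{ic|rm}} handles subvolumes depends on the kernel version):

```shell
# Explicit subvolume deletion:
btrfs subvolume delete "$CHROOT/root"
# On recent kernels, a plain recursive delete also handles subvolumes:
rm -rf "$CHROOT/root"
```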
<br />
== <s>AUR pkg clean chroot test building</s> ==<br />
<br />
As I am not able to edit this DeveloperWiki page, I would like to recommend adding a short Note under section "Convenience way", something like:<br />
<br />
{{Note|AUR maintainers can test building of an AUR pkg in a clean chroot using the {{ic|extra-x86_64-build}} build script.}}<br />
<br />
[[User:Cmsigler|Cmsigler]] ([[User talk:Cmsigler|talk]]) 14:06, 29 November 2022 (UTC)<br />
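For what it's worth, the recommended usage amounts to this ({{ic|somepkg}} is a placeholder for any checkout containing a {{ic|PKGBUILD}}):

```shell
git clone https://aur.archlinux.org/somepkg.git   # hypothetical AUR package
cd somepkg
extra-x86_64-build    # devtools builds the PKGBUILD in a clean chroot
```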
<br />
:AUR specific notes don't belong to this page. — [[User:Lahwaacz|Lahwaacz]] ([[User talk:Lahwaacz|talk]]) 20:16, 29 November 2022 (UTC)<br />
<br />
::@Lahwaacz -- Hi, and thank you for your reply. I added a note about using extra-x86_64-build for AUR pkgs on the Arch User Repository wiki page. However, it was deleted by maintainer Scimmia because there was already a link to this DeveloperWiki page.<br />
<br />
::Is there some way AUR maintainers can learn that extra-x86_64-build is not just for pkgs in extra and community, but is also good to go for test building AUR pkgs in a clean chroot? Is there a proper place for such a note to go on <b>any</b> AUR-related wiki page? Thank you again for your help :) [[User:Cmsigler|Cmsigler]] ([[User talk:Cmsigler|talk]]) 13:31, 30 November 2022 (UTC)</div>Cmsiglerhttps://wiki.archlinux.org/index.php?title=Talk:Arch_User_Repository&diff=758465Talk:Arch User Repository2022-11-29T16:36:20Z<p>Cmsigler: Reply</p>
<hr />
<div>== contribute to existing package ==<br />
what is the best way to contribute to an existing AUR package? i cloned one and tried to push but it gave me a permission error --[[User:Soloturn|Soloturn]] ([[User talk:Soloturn|talk]]) 16:04, 28 January 2019 (UTC)<br />
<br />
:Users are not allowed to modify something owned by another user. It's no different from cloning a Github repository and trying to push to that. The equivalent of submitting an issue would be leaving a comment with a patch file. The AUR platform in particular allows collaboration features -- you may request that a maintainer grant you push access by adding your name as a co-maintainer. If the package is broken or out of date, see [[Arch User Repository#Foo in the AUR is outdated; what should I do?]]<br />
<br />
:This is possibly something that we should make clear in a FAQ entry. -- [[User:Eschwartz|Eschwartz]] ([[User talk:Eschwartz|talk]]) 19:49, 28 January 2019 (UTC)<br />
<br />
::I was thinking about this while writing a [[Talk:Arch User Repository#Proposal: Other requests|proposal regarding "Other requests"]]. It is possible to request a package be disowned with "Orphan"; why not add "Co-maintain" to send a request to ask for permission to assist with a package's maintenance? Of course, it would not be unnecessary to send that request to the mailing list, and there's always the AUR comments or the forums for users to contact a maintainer otherwise; but having the feature built in to the AUR would allow us to add a fourth subsection here to recommend ground rules and possibly expedite the process of adding co-maintainers when packagers are interested in doing so. [[User:Quequotion|quequotion]] ([[User talk:Quequotion|talk]]) 14:45, 6 February 2019 (UTC)<br />
<br />
:::Rather than an FAQ, maybe add a bullet point under "Maintaining packages". Question: Who has the right to use "Manage Co-Maintainers"? [[User:Quequotion|quequotion]] ([[User talk:Quequotion|talk]]) 15:07, 27 May 2019 (UTC)<br />
<br />
::::<s>Closing proposal below, now implemented</s>. Leaving discussion open: in the future, we may want to break long bulleted lists like "Rules of Submission" and "Maintaining Packages" into subsections. This would make it more convenient to link to specific points in the list, which in turn would be convenient if we still want an FAQ such as "How can I contribute to an existing package?" (which should link to adopting orphaned packages, commenting on a package, and adding co-maintainters) [[User:Quequotion|quequotion]] ([[User talk:Quequotion|talk]]) 09:31, 4 June 2019 (UTC)<br />
<br />
:::::Since the reversion, documenting how to add co-maintainers has been absorbed into the proposal for [[User talk:Quequotion/AUR submission guidelines#Maintaining packages|AUR submission guidelines]]. [[User:Quequotion|quequotion]] ([[User talk:Quequotion|talk]]) 12:07, 21 July 2019 (UTC)<br />
<br />
=== Proposal: How can I contribute to an existing package? ===<br />
{{Comment|No longer clear where this question would fit--splitting the content of the page between a "maintainter-oriented" page and a "user-oriented" page overlooks the fact that AUR package maintainers and AUR users ''may be the same people''.}}<br />
If the package is [[User:Quequotion/AUR submission guidelines#Orphan|orphaned]] you may [[User:Quequotion/AUR submission guidelines#Maintaining packages|adopt it]], otherwise you may post your idea [[User:Quequotion/Arch User Repository#Commenting on packages|in its comments]] or ask to be [[User:Quequotion/AUR submission guidelines#Maintaining packages|appointed as a co-maintainer]].<br />
<br />
== Integrate FAQ content ==<br />
<br />
Truncate FAQs' answers as much as possible, linking to an appropriate page or (proposed) section of the AUR page. Note that some content must be transferred to the [[AUR submission guidelines]].<br />
<br />
If you'd like to discuss the proposal as a whole, do so in this header; use [[Template:Comment|comments]] within individual subsections to discuss them. [[User:Quequotion|quequotion]] ([[User talk:Quequotion|talk]]) 04:42, 11 June 2019 (UTC)<br />
<br />
If you'd like to see how this page should look, and get a history without other changes, I've restored its [[User:Quequotion/Arch User Repository|full page draft]]. [[User:Quequotion|quequotion]] ([[User talk:Quequotion|talk]]) 10:14, 11 June 2019 (UTC)<br />
<br />
:There are a lot of changes to review; so I've compiled a rundown of them [[User talk:Quequotion/Arch User Repository#Breakdown_of_changes|on the talk page of the draft]]. [[User:Quequotion|quequotion]] ([[User talk:Quequotion|talk]]) 13:45, 14 June 2019 (UTC)<br />
<br />
:Please keep drafts on a dedicated page. ([[Special:Diff/575147]]) Closing the sections below. -- [[User:Alad|Alad]] ([[User talk:Alad|talk]]) 13:18, 11 June 2019 (UTC)<br />
<br />
:Update on these proposals: The FAQ integration for [[Arch User Repository]] is more or less completed, however some of the FAQ on this page contain maintainer-oriented content that is better suited to subsections of [[AUR submission guidelines]]. I am aware that proposals for a page should be discussed only on that page's talk page. These proposals date back to when these two pages existed as one, and therefore concern both: several of the FAQ from ''this'' page need to be integrated into subsections of ''that'' page. To ease the review process, I compiled a [[User talk:Quequotion/AUR submission guidelines#Breakdown of changes|breakdown of the proposed changes]]. [[User:Quequotion|quequotion]] ([[User talk:Quequotion|talk]]) 17:13, 5 January 2021 (UTC)<br />
<br />
== Split FAQ content to Arch User Repository/FAQ page. ==<br />
<br />
Have a look at the ratio of [https://i.postimg.cc/MHHcW5b3/aur-slashfaq.png FAQ to page content].<br />
<br />
I like the the idea of using ''Article''/FAQ for these. [[User:Quequotion|quequotion]] ([[User talk:Quequotion|talk]]) 01:42, 18 June 2019 (UTC)<br />
<br />
: This makes sense to me. [[User:Jasonwryan|Jasonwryan]] ([[User talk:Jasonwryan|talk]]) 02:16, 18 June 2019 (UTC)<br />
<br />
:: An alternative which doesn't require a new page is merging this to [[FAQ]]. An issue with this approach (presented on IRC) however is that adding AUR content to the "official" FAQ may add some notion of supported-ness for the AUR (and its content in specific). A way around this would be to include the "AUR packages are user produced content. Any use of the provided files is at your own risk." warning as well. -- [[User:Alad|Alad]] ([[User talk:Alad|talk]]) 07:54, 24 June 2019 (UTC)<br />
<br />
::: This would also add a significant amount of content to the official FAQ page, which might be seen as clutter. However, to be honest I'm more interested in ''having this FAQ relocated'' than ''where it ultimately goes''. Also, not sure if I need to clarify, but this is not exclusive of the [[#Integrate FAQ content]] proposals; it would be in the best interest of wherever the FAQ ends up that it is as small as possible when it gets there. [[User:Quequotion|quequotion]] ([[User talk:Quequotion|talk]]) 13:10, 21 July 2019 (UTC)<br />
<br />
== Meaning of the Popularity score? ==<br />
<br />
Can someone explain what the meaning of the Popularity score is, and how it's calculated? And maybe add that to the wiki?<br />
It doesn't seem to be derived from the number of votes, as some packages with more votes has a lower popularity than others with a lower vote count.<br />
Maybe it's number of installs? Maybe it's time dependent, so recent votes only temporarily increase popularity?<br />
<br />
I got curious about this as a helper like yay prominently displays this value, but I haven't seen it presented in yay's documentation, or here. Or maybe I skimmed them too fast.<br />
<br />
[[User:Biowaste|Biowaste]] ([[User talk:Biowaste|talk]]) 00:02, 2 November 2019 (UTC)<br />
<br />
:<s>This is one of many issues my proposal (Integrate FAQ content) for this page and the AUR submission guidelines page handles. If you dig around on the current page, you may find what you are looking for--or if someone could approve the changes we could have the information appropriately documented under [[User:Quequotion/Arch User Repository#Feedback|an improved feedback section]] here, and referenced in [[User:Quequotion/AUR submission guidelines#Promoting packages to the community repository|a section about promoting packages to community]] on the AUR submission guidelines page. [[User:Quequotion|quequotion]] ([[User talk:Quequotion|talk]]) 04:23, 2 November 2019 (UTC)</s><br />
<br />
::Your proposal does not answer the question "What is the meaning of the Popularity score?" at all, so please stop pretending that it is a universal solution for every issue related to AUR documentation.<br />
::On the [https://aur.archlinux.org/packages/ AUR package list] page, the Popularity column is suffixed with a "?" symbol which has an HTML tooltip explaining how the values are calculated.<br />
::-- [[User:Lahwaacz|Lahwaacz]] ([[User talk:Lahwaacz|talk]]) 10:18, 2 November 2019 (UTC)<br />
::: I see. I mistook that this was about votes. Popularity score is a different thing. My mistake. [[User:Quequotion|quequotion]] ([[User talk:Quequotion|talk]]) 15:22, 2 November 2019 (UTC)<br />
<br />
== Improve Comment syntax section ==<br />
<br />
It would be best to give examples of comment syntax right there. Currently there are six links in that small section, which costs users a lot of time and may also be misleading.<br />
<br />
"Note this implementation has some occasional differences" - this would be needed much less often than how to simply mark up some code.<br />
I suggest the main information should go first, and examples would be good.<br />
<br />
It would be good if the comment syntax were given directly on the AUR site, but at least here one should be able to very easily navigate to the basic comment syntax.<br />
[[User:Ynikitenko|Ynikitenko]] ([[User talk:Ynikitenko|talk]]) 15:52, 17 November 2019 (UTC)<br />
:In particular those differences should be documented, such as aur-specific features. For example, it is noted that references to git commit hashes will be linkified, but not that this means specifically ''12-digit'' hash references ([https://aur.archlinux.org/packages/ido-ubuntu/#comment-710922 example]). No idea what the specific format expected for Flyspray tickets would be. [[User:Quequotion|quequotion]] ([[User talk:Quequotion|talk]]) 06:52, 19 November 2019 (UTC)<br />
<br />
== Add context about package ecosystem ==<br />
I recently had the content below [https://wiki.archlinux.org/index.php?title=Arch_User_Repository&type=revision&diff=694215&oldid=694213 reverted].<br />
<br />
The justification was: "this is irrelevant to the technical description of the AUR"<br />
<br />
I see nothing in [[Help:Style]], nor do I believe it would be advisable to add, anything limiting the content of articles to being strictly technical. People go to pages in the Arch wiki wanting to learn about that topic, as it relates to Arch. They want technical content and context. What I've included is an extremely important detail for someone evaluating the merits of the Arch package system! Note that I just went around the last couple of hours evaluating other Linux distros for the wiki and thoroughly appreciate when projects are smart about including these sorts of details. That's specifically why I thought to include this now.<br />
[[User:Einr|Einr]] ([[User talk:Einr|talk]]) 07:01, 5 September 2021 (UTC)<br />
<br />
:This is not an ad board. This page describes what the AUR is and how to use it, your sentence does not help with any of these. Furthermore, your note is misleading because it indicates a combination with official repositories, which are managed independently and contain incomparably higher quality packages. The repository statistics don't consider package quality at all, in fact AUR is one of the few (if not the only one) community-driven and ''unsupported'' repositories in the top 10. — [[User:Lahwaacz|Lahwaacz]] ([[User talk:Lahwaacz|talk]]) 07:39, 5 September 2021 (UTC)<br />
<br />
::Regarding, "May mislead to believe combination with official." Happy to rephrase it to make it explicit in the statement that's not the case if that suits you. [[User:Einr|Einr]] ([[User talk:Einr|talk]]) 08:07, 5 September 2021 (UTC)<br />
::Regarding, "Not an ad board." The open source world is built on open sharing. What I posted advertises nothing and highlights only that Arch, collectively, has a broader package coverage net than nearly anything else out there almost no matter how it's measured. People looking for '''''a''''' package know what they want and only want to know that it's available. [[User:Einr|Einr]] ([[User talk:Einr|talk]]) 08:07, 5 September 2021 (UTC)<br />
::Regarding, "Contrast to incomparably higher quality packages" The contrast in quality and implied security between official and AUR seems pretty obvious and well stated already. Also happy to make that more explicit in the statement if you believe it needs further re-iterating. I think it's covered well enough in these pages, but I'm ok with that as a compromise. [[User:Einr|Einr]] ([[User talk:Einr|talk]]) 08:07, 5 September 2021 (UTC)<br />
<br />
=== Reverted content ===<br />
Combined with the [https://repology.org/repository/arch official repositories], Arch is proud to be home to one of the [https://repology.org/repository/aur largest and most up-to-date] package repository ecosystems [https://entvibes.com/archwiki/Arch%20and%20AUR%20on%20Repology%20-%20highlighted%20-%202021-09-05%20162239.png in the Linux world].<br />
<br />
== Tips and Tricks ==<br />
<br />
Does this page really need a "Tips and Tricks" section? It seems a bit out-of-scope to me, especially the particular tip it was created for ([[Special:Diff/706567]]).<br />
<br />
This might fit more appropriately as a [[Template:Tip]] appended to [[Arch User Repository#Build the package]]. It also needs some proofreading. [[User:Quequotion|quequotion]] ([[User talk:Quequotion|talk]]) 13:28, 14 January 2022 (UTC)<br />
<br />
==== Proposal ====<br />
{{Tip|Use {{ic| git clean -dfX}} in the build directory to delete all untracked files, such as previously built package files.}}<br />
<br />
:I like the idea of moving it. Having it as a tip directly after the description of {{ic|--clean}} makes sense. [[User:Progandy|Progandy]] ([[User talk:Progandy|talk]]) 14:43, 16 January 2022 (UTC)<br />
<br />
:: As the writer of this, I am not against moving it.<br />
:: Still, if more tips and tricks come up, I think a section for them might be useful. But until then it could be removed.<br />
:: [[User:G3ro|G3ro]] ([[User talk:G3ro|talk]]) 16:56, 16 January 2022 (UTC)<br />
<br />
::: If more general tips and tricks related to the AUR come up, that don't fit in any existing section, there may be use for it in the future. This one fits well after the explanation of building a package. [[User:Quequotion|quequotion]] ([[User talk:Quequotion|talk]]) 18:56, 16 January 2022 (UTC)<br />
<br />
== Using extra-x86_64-build from devtools ==<br />
<br />
@Scimmia,<br />
<br />
I'm not able to confirm, or understand, your summary on reverting my addition of a note, namely, "Covered in previous paragraph." There is no mention of devtools or extra-x86_64-build anywhere in this wiki page(?). I guess I'm being thick as usual.... TIA :)<br />
<br />
Backstory: For several years I have used my own custom bash script to test build my AUR pkgs in a clean chroot. Recently in the forums Slithery and graysky pointed me to two existing tools, the first of which is from devtools. I found that this wasn't mentioned anywhere in the AUR wiki page or DeveloperWiki Building in a clean chroot page.<br />
<br />
[[User:Cmsigler|Cmsigler]] ([[User talk:Cmsigler|talk]]) 15:34, 29 November 2022 (UTC)<br />
<br />
:As I said, it's covered by the previous paragraph. Click the link. [[User:Scimmia|Scimmia]] ([[User talk:Scimmia|talk]]) 15:37, 29 November 2022 (UTC)<br />
<br />
::I don't think I'm wrong in saying that for AUR maintainers the DeveloperWiki Building in a clean chroot page is unrevealing as to the usefulness of extra-x86_64-build for their use case. That's what led me astray for about 3 years. There is a technically correct (the best kind) way to build in a clean chroot, but there is no mention of its usefulness for AUR pkgs.<br />
<br />
::I have posted on the corresponding Talk page asking for a short note so this is rather obvious for AUR maints. I hope that makes some sense. Thanks :) [[User:Cmsigler|Cmsigler]] ([[User talk:Cmsigler|talk]]) 16:36, 29 November 2022 (UTC)</div>Cmsiglerhttps://wiki.archlinux.org/index.php?title=Arch_User_Repository&diff=758454Arch User Repository2022-11-29T14:13:43Z<p>Cmsigler: Add note about building in clean chroot using extra-x86_64-build</p>
<hr />
<div>[[Category:About Arch]]<br />
[[Category:Package development]]<br />
[[Category:Package management]]<br />
[[de:Arch User Repository]]<br />
[[es:Arch User Repository]]<br />
[[fr:Arch User Repository]]<br />
[[ja:Arch User Repository]]<br />
[[pt:Arch User Repository]]<br />
[[ru:Arch User Repository]]<br />
[[zh-hans:Arch User Repository]]<br />
{{Related articles start}}<br />
{{Related|makepkg}}<br />
{{Related|pacman}}<br />
{{Related|PKGBUILD}}<br />
{{Related|.SRCINFO}}<br />
{{Related|Aurweb RPC interface}}<br />
{{Related|AUR submission guidelines}}<br />
{{Related|AUR Trusted User Guidelines}}<br />
{{Related|Official repositories}}<br />
{{Related|Arch Build System}}<br />
{{Related|Creating packages}}<br />
{{Related|AUR helpers}}<br />
{{Related|AUR Cleanup Day}}<br />
{{Related articles end}}<br />
<br />
The Arch User Repository (AUR) is a community-driven repository for Arch users. It contains package descriptions ([[PKGBUILD]]s) that allow you to compile a package from source with [[makepkg]] and then install it via [[pacman#Additional commands|pacman]]. The AUR was created to organize and share new packages from the community and to help expedite popular packages' inclusion into the [[community repository]]. This document explains how users can access and utilize the AUR.<br />
<br />
A good number of new packages that enter the official repositories start in the AUR. In the AUR, users are able to contribute their own package builds ({{ic|PKGBUILD}} and related files). The AUR community has the ability to vote for packages in the AUR. If a package becomes popular enough — provided it has a compatible license and good packaging technique — it may be entered into the ''community'' repository (directly accessible by ''pacman'' or [[abs]]).<br />
<br />
{{Warning|AUR packages are user-produced content. These {{ic|PKGBUILD}}s are completely unofficial and have not been thoroughly vetted. Any use of the provided files is at your own risk.}}<br />
<br />
== Getting started ==<br />
<br />
Users can search and download [[PKGBUILD]]s from the [https://aur.archlinux.org AUR Web Interface]. These {{ic|PKGBUILD}}s can be built into installable packages using ''makepkg'', then installed using ''pacman''.<br />
<br />
* Ensure the {{Grp|base-devel}} package group is installed in full ({{ic|pacman -S --needed base-devel}}).<br />
* Glance over the [[#FAQ]] for answers to the most common questions.<br />
* You may wish to adjust {{ic|/etc/makepkg.conf}} to optimize the build process for your system prior to building packages from the AUR. A significant improvement in package build times can be realized on systems with multi-core processors by adjusting the {{ic|MAKEFLAGS}} variable, by using multiple cores for compression, or by using a different compression algorithm. Users can also enable hardware-specific compiler optimizations via the {{ic|CFLAGS}} variable. See [[makepkg#Tips and tricks]] for more information.<br />
<br />
It is also possible to interact with the AUR through SSH: type {{ic|ssh aur@aur.archlinux.org help}} for a list of available commands.<br />
<br />
== History ==<br />
<br />
In the beginning, there was {{ic|<nowiki>ftp://ftp.archlinux.org/incoming</nowiki>}}, and people contributed by simply uploading the [[PKGBUILD]], the needed supplementary files, and the built package itself to the server. The package and associated files remained there until a [[Package Maintainer]] saw the program and adopted it.<br />
<br />
Then the Trusted User Repositories were born. Certain individuals in the community were allowed to host their own repositories for anyone to use. The AUR expanded on this basis, with the aim of making it both more flexible and more usable. In fact, the AUR maintainers are still referred to as TUs (Trusted Users).<br />
<br />
Between 2015-06-08 and 2015-08-08, the AUR transitioned from version 3.5.1 to 4.0.0, introducing the use of Git repositories for publishing the {{ic|PKGBUILD}}s.<br />
Existing packages were dropped unless manually migrated to the new infrastructure by their maintainers.<br />
<br />
=== Git repositories for AUR3 packages ===<br />
<br />
The [https://github.com/aur-archive AUR Archive] on GitHub has a repository for every package that was in AUR 3 at the time of the migration.<br />
Alternatively, there is the [https://github.com/felixonmars/aur3-mirror/ aur3-mirror] repository which provides the same.<br />
<br />
== Installing and upgrading packages ==<br />
<br />
Installing packages from the AUR is a relatively simple process. Essentially:<br />
<br />
# Acquire the build files, including the [[PKGBUILD]] and possibly other required files, like [[systemd]] units and patches (often not the actual code).<br />
# Verify that the {{ic|PKGBUILD}} and accompanying files are not malicious or untrustworthy.<br />
# Run {{ic|makepkg}} in the directory where the files are saved. This will download the code, compile it, and package it.<br />
# Run {{ic|pacman -U ''package_file''}} to install the package onto your system.<br />
<br />
{{Note|The AUR is unsupported, so any packages you install are ''your responsibility'' to update, not pacman's. If packages in the official repositories are updated, you will need to rebuild any AUR packages that depend on those libraries.}}<br />
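Concretely, a typical session looks like the following sketch, where ''package_name'' stands in for the actual package:<br />
<br />
: {{bc|$ git clone <nowiki>https://aur.archlinux.org/</nowiki>''package_name''.git<br>$ cd ''package_name''<br>$ less PKGBUILD<br>$ makepkg -si}}<br />
<br />
Each step is covered in detail in the sections below.<br />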
<br />
=== Prerequisites ===<br />
<br />
First ensure that the necessary tools are installed by [[install]]ing the {{grp|base-devel}} group in full, which includes {{pkg|make}} and other tools needed for compiling from source.<br />
<br />
{{Tip|Use the {{ic|--needed}} flag when installing the {{grp|base-devel}} group to skip packages you already have instead of reinstalling them.}}<br />
<br />
{{Note|Packages in the AUR assume that the {{grp|base-devel}} group is installed, i.e. they do not list the group's members as build dependencies explicitly.}}<br />
<br />
Next choose an appropriate build directory. A build directory is simply a directory where the package will be made or "built" and can be any directory. The examples in the following sections will use {{ic|~/builds}} as the build directory.<br />
<br />
=== Acquire build files ===<br />
<br />
Locate the package in the AUR. This is done using the search field at the top of the [https://aur.archlinux.org/ AUR home page]. Clicking the application's name in the search list brings up an information page on the package. Read through the description to confirm that this is the desired package, note when the package was last updated, and read any comments.<br />
<br />
There are several methods for acquiring the build files for a package:<br />
<br />
* Clone its [[git]] repository, labeled "Git Clone URL" in the "Package Details" on its AUR page. This is the preferred method, an advantage of which is that you can easily get updates to the package via {{ic|git pull}}. <br />
: {{bc|$ git clone <nowiki>https://aur.archlinux.org/</nowiki>''package_name''.git}}<br />
* Download a snapshot, either by clicking the "Download snapshot" link under "Package Actions" on the right hand side of its AUR page, or in a terminal: <br />
: {{bc|$ curl -L -O <nowiki>https://aur.archlinux.org/cgit/aur.git/snapshot/</nowiki>''package_name''.tar.gz}} <br />
: {{Note|The snapshot file is compressed, and must be extracted (preferably in a directory set aside for AUR builds): {{ic|tar -xvf ''package_name''.tar.gz}}}}<br />
* Use the read-only mirror [https://github.com/archlinux/aur archlinux/aur on GitHub], where every package is located in a branch. It is recommended to clone only a single branch (the whole repository is very large, so cloning it in full would be slow). You can do this with one of the following two methods:<br />
** Use {{ic|1=git clone --single-branch}}: {{bc | $ git clone --branch ''branch_name''/''package_name'' --single-branch https://github.com/archlinux/aur}}<br />
** Do a [[Git#Partially fetching the repository|partial clone]] of this repository ({{ic|1=git clone --depth=1}}) and [[Git#Getting other branches|add branches]] selectively: <br />
:: {{bc|<nowiki>$ git clone --depth=1 https://github.com/archlinux/aur;</nowiki> cd aur<br>$ git remote set-branches --add origin ''package_name''<br>$ git fetch<br>$ git checkout ''package_name''}}<br />
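If the build files were acquired by cloning, later updates can be pulled and then reviewed before rebuilding, for example:<br />
<br />
: {{bc|$ cd ''package_name''<br>$ git pull<br>$ git log -p ORIG_HEAD..}}<br />
<br />
Here {{ic|git log -p ORIG_HEAD..}} shows the changes that were just pulled; review them as carefully as the initial {{ic|PKGBUILD}}.<br />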
<br />
=== Acquire a PGP public key if needed ===<br />
<br />
Check if a signature file in the form of ''.sig'' or ''.asc'' is part of the [[PKGBUILD]] source array. If that is the case, then acquire one of the public keys listed in the PKGBUILD [[PKGBUILD#validpgpkeys|validpgpkeys]] array. Refer to [[makepkg#Signature checking]] for more information.<br />
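For example, assuming ''keyid'' is one of the fingerprints listed in the ''validpgpkeys'' array, the key can be imported into your keyring with:<br />
<br />
: {{bc|$ gpg --recv-keys ''keyid''}}<br />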
<br />
=== Build the package ===<br />
<br />
Change directories to the directory containing the package's [[PKGBUILD]].<br />
<br />
$ cd ''package_name''<br />
<br />
{{Warning|Carefully check the {{ic|PKGBUILD}}, any ''.install'' files, and any other files in the package's git repository for malicious or dangerous commands. If in doubt, do not build the package, and [[General troubleshooting#Additional support|seek advice]] on the forums or mailing list. Malicious code has been found in packages before. [https://lists.archlinux.org/archives/list/aur-general@lists.archlinux.org/thread/FFCMZGL4UQODYKZGUY7KTN3UBF3XN66P/]}}<br />
<br />
View the contents of all provided files. For example, to use the pager ''less'' to view {{ic|PKGBUILD}}, do:<br />
<br />
$ less PKGBUILD<br />
<br />
{{Tip|If you are updating a package, you may want to look at the changes since the last commit.<br />
* To view changes since the last git commit, you can use {{ic|git show}}.<br />
* To view changes since the last commit using ''vimdiff'', do {{ic|git difftool @~..@ vimdiff}}. The advantage of ''vimdiff'' is that you view the entire contents of each file along with indicators on what has changed.}}<br />
<br />
Make the package. After manually confirming the contents of the files, run [[makepkg]] as a normal user. Some helpful flags:<br />
<br />
* {{ic|-s}}/{{ic|--syncdeps}} automatically resolves and installs any dependencies with [[pacman]] before building. If the package depends on other AUR packages, you will need to manually install them first.<br />
* {{ic|-i}}/{{ic|--install}} installs the package if it is built successfully. This lets you skip the next step that is usually done manually.<br />
* {{ic|-r}}/{{ic|--rmdeps}} removes build-time dependencies after the build, as they are no longer needed. However, these dependencies may need to be reinstalled the next time the package is updated.<br />
* {{ic|-c}}/{{ic|--clean}} cleans up temporary build files after the build, as they are no longer needed. These files are usually needed only when debugging the build process.<br />
<br />
{{Tip|Use {{ic|git clean -dfX}} to delete all files that are ignored by git, thus deleting all previously built package files.}}<br />
<br />
=== Install the package ===<br />
<br />
The package can now be installed with pacman:<br />
<br />
# pacman -U ''package_name''-''version''-''architecture''.pkg.tar.zst<br />
<br />
{{Note|<br />
* If you have changed your {{ic|PKGEXT}} in {{ic|makepkg.conf}}, the name of the package file may be slightly different.<br />
* The above example is only a brief summary of the build process. It is '''highly''' recommended to read the [[makepkg]] and [[ABS]] articles for more details.<br />
}}<br />
<br />
=== Upgrading packages ===<br />
<br />
In the directory containing the package's [[PKGBUILD]], first pull in the latest changes:<br />
<br />
$ git pull<br />
<br />
then follow the previous build and install instructions.<br />
<br />
== Account status ==<br />
<br />
=== Suspension ===<br />
<br />
When editing a user as a Trusted User, the Suspended field can be set, which suspends the target user. '''When a user is suspended, they cannot:'''<br />
<br />
* Log in to https://aur.archlinux.org<br />
* Receive notifications<br />
* Interact with the git interface<br />
<br />
=== Inactivity ===<br />
<br />
When editing your own account, or another user's account as a Trusted User, the Inactive field can be set. Inactive accounts are used for two reasons:<br />
<br />
* Displaying the date a user was marked inactive on their account page<br />
* Excluding inactive Trusted Users from the count of active Trusted Users used for new proposals<br />
<br />
== Feedback ==<br />
<br />
=== Commenting on packages ===<br />
<br />
The [https://aur.archlinux.org AUR Web Interface] has a comments facility that allows users to provide the [[PKGBUILD]] contributor with suggestions and feedback for improvements.<br />
<br />
{{Tip|Avoid pasting patches or {{ic|PKGBUILD}}s into the comments section: they quickly become obsolete and just end up needlessly taking up lots of space. Instead, email those files to the maintainer, or even use a [[pastebin]].}}<br />
<br />
[https://python-markdown.github.io/ Python-Markdown] provides basic [[Wikipedia:Markdown|Markdown]] syntax to format comments.<br />
<br />
{{Note|<br />
* This implementation has some occasional [https://python-markdown.github.io/#differences differences] with the official [https://daringfireball.net/projects/markdown/syntax syntax rules].<br />
* Commit hashes to the [[Git]] repository of the package and references to [[Flyspray]] tickets are converted to links automatically.<br />
* Long comments are collapsed and can be expanded on demand.<br />
}}<br />
<br />
=== Voting for packages ===<br />
<br />
One of the easiest activities for '''all''' Arch users is to browse the AUR and '''vote''' for their favourite packages using the online interface. All packages are eligible for adoption by a TU for inclusion in the [[community repository]], and the vote count is one of the considerations in that process; it is in everyone's interest to vote!<br />
<br />
Sign up on the [https://aur.archlinux.org/ AUR website] to get a "Vote for this package" option while browsing packages. After signing up, it is also possible to vote from the command line with {{AUR|aurvote}}, {{AUR|aurvote-git}} or {{AUR|aur-auto-vote-git}}.<br />
<br />
Alternatively, if you have set up [[AUR submission guidelines#Authentication|ssh authentication]], you can directly vote from the command line using your ssh key. This means that you will not need to save or type in your AUR password.<br />
<br />
$ ssh aur@aur.archlinux.org vote ''package_name''<br />
<br />
=== Flagging packages out-of-date ===<br />
<br />
First, flag the package ''out-of-date'', providing details on why it is outdated, preferably including links to the release announcement or the new release [[Archiving and compression#Archiving only|tarball]].<br />
<br />
You should also try to reach out to the maintainer directly by email. If there is no response from the maintainer after ''two weeks'', you can file an ''orphan'' request. See [[AUR submission guidelines#Requests]] for details.<br />
<br />
{{Note|[[VCS package guidelines|VCS packages]] are not considered out of date when the {{ic|pkgver}} changes; do not flag them as the maintainer will merely unflag the package and ignore you. AUR maintainers should not commit mere {{ic|pkgver}} bumps.}}<br />
<br />
== Debugging the package build process ==<br />
<br />
# Ensure your build environment is up-to-date by [[Pacman#Upgrading packages|upgrading]] before building anything.<br />
# Ensure you have the {{Grp|base-devel}} group installed.<br />
# Use the {{ic|-s}} option with {{ic|makepkg}} to check and install all dependencies needed before starting the build process.<br />
# Try the default [https://github.com/archlinux/svntogit-packages/blob/packages/pacman/trunk/makepkg.conf makepkg configuration].<br />
# See [[Makepkg#Troubleshooting]] for common issues.<br />
<br />
If you are having trouble building a package, first read its [[PKGBUILD]] and the comments on its AUR page.<br />
<br />
It is possible that a {{ic|PKGBUILD}} is broken for everyone. If you cannot figure it out on your own, report it to the maintainer (e.g. by [[#Commenting on packages|posting the errors you are getting in the comments on the AUR page]]). You may also seek help in the [https://bbs.archlinux.org/viewforum.php?id=38 AUR Issues, Discussion & PKGBUILD Requests forum].<br />
<br />
However, the cause might not be trivial. Custom {{ic|CFLAGS}}, {{ic|LDFLAGS}} and {{ic|MAKEFLAGS}} can cause failures. To avoid problems caused by your particular system configuration, build packages in a [[DeveloperWiki:Building in a clean chroot|clean chroot]]. If the build process still fails in a clean chroot, the issue is probably with the {{ic|PKGBUILD}}.<br />
<br />
{{Note|AUR maintainers can test building in a clean chroot using the {{ic|extra-x86_64-build}} build script from the {{Pkg|devtools}} package.}}<br />
<br />
See [[Creating packages#Checking package sanity]] about using {{ic|namcap}}. If you would like to have a {{ic|PKGBUILD}} reviewed, post it on the [https://lists.archlinux.org/mailman3/lists/aur-general.lists.archlinux.org/ aur-general mailing list] to get feedback from the TUs and fellow AUR members, or the [https://bbs.archlinux.org/viewforum.php?id=4 Creating & Modifying Packages forum]. You could also seek help in the [[IRC channel]] [ircs://irc.libera.chat/archlinux-aur #archlinux-aur] on the [https://libera.chat Libera Chat] network.<br />
<br />
== Submitting packages ==<br />
<br />
Users can share [[PKGBUILD]]s using the Arch User Repository. See [[AUR submission guidelines]] for details.<br />
<br />
== Web interface translation ==<br />
<br />
See [https://gitlab.archlinux.org/archlinux/aurweb/-/blob/master/doc/i18n.txt i18n.txt] in the AUR source tree for information about creating and maintaining translation of the [https://aur.archlinux.org AUR Web Interface].<br />
<br />
== FAQ ==<br />
<br />
=== What kind of packages are permitted on the AUR? ===<br />
<br />
The packages on the AUR are merely "build scripts", i.e. recipes to build binaries for [[pacman]]. In most cases anything is permitted, subject to [[AUR submission guidelines#Rules of submission|usefulness and scope guidelines]], as long as you comply with the licensing terms of the content. For content whose license states that "you may not link" to downloads, i.e. content that is not redistributable, you may only use the file name itself as the source. This requires users to already have the restricted source in the build directory prior to building the package. When in doubt, ask.<br />
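As an illustration (the file name is hypothetical), a {{ic|source}} array entry for non-redistributable content contains only the bare file name, which ''makepkg'' then expects to find alongside the PKGBUILD:<br />

```shell
# Hypothetical PKGBUILD fragment for a restricted, non-redistributable download:
source=('restricted-app-1.0.tar.gz')  # no URL: the user must supply this file
sha256sums=('SKIP')                   # or the real checksum, if known
```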
<br />
=== How can I vote for packages in the AUR? ===<br />
<br />
See [[#Voting for packages]].<br />
<br />
=== What is a Trusted User / TU? ===<br />
<br />
A [[AUR Trusted User guidelines|Trusted User]], in short TU, is a person chosen to oversee the AUR and the [[community repository]]. They are the ones who maintain popular [[PKGBUILD]]s in ''community'', and overall keep the AUR running.<br />
<br />
=== What is the difference between the Arch User Repository and the community repository? ===<br />
<br />
The Arch User Repository is where all [[PKGBUILD]]s that users submit are stored, and must be built manually with [[makepkg]]. When {{ic|PKGBUILD}}s receive enough community interest and the support of a TU, they are moved into the [[community repository]] (maintained by the TUs), where the binary packages can be installed with [[pacman]].<br />
<br />
=== Foo in the AUR is outdated; what should I do? ===<br />
<br />
See [[#Flagging packages out-of-date]].<br />
<br />
In the meantime, you can try updating the package yourself by editing the [[PKGBUILD]] locally. Sometimes, updates do not require changes to the build or package process, in which case simply updating the {{ic|pkgver}} or {{ic|source}} array is sufficient.<br />
<br />
=== Foo in the AUR does not compile when I run makepkg; what should I do? ===<br />
<br />
You are probably missing something trivial; see [[#Debugging the package build process]].<br />
<br />
=== ERROR: One or more PGP signatures could not be verified!; what should I do? ===<br />
<br />
Most likely, you do not have the required public key(s) in your personal keyring to verify downloaded files. See [[Makepkg#Signature checking]] for details.<br />
<br />
=== How do I create a PKGBUILD? ===<br />
<br />
Consult the [[AUR submission guidelines#Rules of submission]], then see [[creating packages]].<br />
<br />
=== I have a PKGBUILD I would like to submit; can someone check it to see if there are any errors? ===<br />
<br />
There are several channels available to submit your package for review; see [[#Debugging the package build process]].<br />
<br />
=== How to get a PKGBUILD into the community repository? ===<br />
<br />
Usually, at least 10 votes are required for something to move into [[community repository|community]]. However, if a [[TU]] wants to support a package, it will often be found in the repository.<br />
<br />
Reaching the required minimum of votes is not the only requirement; there has to be a TU willing to maintain the package. TUs are not required to move a package into the ''community'' repository even if it has thousands of votes.<br />
<br />
Usually, when a very popular package stays in the AUR, it is because:<br />
<br />
* Arch Linux already has another version of a package in the repositories<br />
* Its license prohibits redistribution<br />
* It is an [[AUR helpers|AUR helper]], i.e. it helps retrieve user-submitted [[PKGBUILD]]s; AUR helpers are [https://bbs.archlinux.org/viewtopic.php?pid=828310#p828310 unsupported] by definition.<br />
<br />
See also [[AUR Trusted User guidelines#Rules for packages entering the %5Bcommunity%5D repository|Rules for Packages Entering the community Repo]].<br />
<br />
=== How can I speed up repeated build processes? ===<br />
<br />
See [[Makepkg#Improving compile times]].<br />
<br />
=== What is the difference between foo and foo-git packages? ===<br />
<br />
Many AUR packages come in "stable" release and "unstable" development versions. Development packages usually have a [[VCS package guidelines#Guidelines|suffix]] denoting their [[Version Control System]] and are not intended for regular use, but may offer new features or bugfixes. Because these packages only download the latest available source when you execute {{ic|makepkg}}, their {{ic|pkgver()}} in the AUR does not reflect upstream changes. Likewise, these packages cannot perform an authenticity checksum on any [[VCS package guidelines#VCS sources|VCS source]].<br />
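To see why the AUR-listed version lags, note that VCS packages regenerate {{ic|pkgver}} at build time, typically by transforming {{ic|git describe}} output. A minimal sketch, with the describe string hard-coded here so the transform is visible:<br />

```shell
# Typical pkgver() transform used by -git packages:
# tag-commits-hash (v1.2-34-gabcdef0) becomes v1.2.r34.gabcdef0
describe='v1.2-34-gabcdef0'   # stand-in for: git describe --long --tags
echo "$describe" | sed 's/\([^-]*-g\)/r\1/;s/-/./g'
```

Since this runs against whatever commit was just fetched, the version shown on the AUR page only changes when the maintainer pushes a commit, not when upstream does.<br />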
<br />
See also [[System maintenance#Use proven software packages]].<br />
<br />
=== Why has foo disappeared from the AUR? ===<br />
<br />
It is possible the package has been adopted by a [[TU]] and is now in the [[community repository]].<br />
<br />
Packages may be deleted if they did not fulfill the [[AUR submission guidelines#Rules of submission|rules of submission]]. See the [https://lists.archlinux.org/archives/list/aur-requests@lists.archlinux.org/ aur-requests archives] for the reason for deletion.<br />
<br />
{{Note|The git repository for a deleted package typically remains available. See [[AUR submission guidelines#Requests]] for details.}}<br />
<br />
If the package used to exist in AUR3, it might not have been [https://lists.archlinux.org/archives/list/aur-general@lists.archlinux.org/message/NJN6TPVF6MWGF7BCHBMBYFZ5JDAPOHP5/ migrated to AUR4]. See the [[#Git repositories for AUR3 packages]] where these are preserved.<br />
<br />
=== How do I find out if any of my installed packages disappeared from AUR? ===<br />
<br />
The simplest way is to compare the list of installed foreign packages ({{ic|pacman -Qqm}}) against the current list of all AUR packages:<br />
<br />
$ comm -23 <(pacman -Qqm | sort) <(curl https://aur.archlinux.org/packages.gz | gzip -cd | sort)<br />
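The {{ic|comm -23}} step prints lines that appear only in the first input (both inputs must be sorted). A minimal local illustration with hypothetical package names:<br />

```shell
# Two sorted lists: installed foreign packages vs. known AUR packages
printf 'aurpkg-gone\nfoo\n' > installed.txt
printf 'bar\nfoo\n'         > aurlist.txt

# Lines unique to installed.txt, i.e. packages no longer in the AUR:
comm -23 installed.txt aurlist.txt
```

Here only {{ic|aurpkg-gone}} is printed, since {{ic|foo}} appears in both lists.<br />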
<br />
=== How can I obtain a list of all AUR packages? ===<br />
<br />
* https://aur.archlinux.org/packages.gz<br />
* Use {{ic|aurpkglist}} from {{aur|python3-aur}}<br />
<br />
== See also ==<br />
<br />
* [https://aur.archlinux.org AUR Web Interface]<br />
* [https://lists.archlinux.org/mailman3/lists/aur-general.lists.archlinux.org/ AUR Mailing List]</div>Cmsiglerhttps://wiki.archlinux.org/index.php?title=DeveloperWiki_talk:Building_in_a_clean_chroot&diff=758452DeveloperWiki talk:Building in a clean chroot2022-11-29T14:06:40Z<p>Cmsigler: Add request for note about AUR pkg clean chroot building</p>
<hr />
<div>== Deleting a chroot ==<br />
<br />
It's not written in the page so I'll write it here: Just delete the $CHROOT folder (Unless it's btrfs). [[User:Tharbad|Tharbad]] ([[User talk:Tharbad|talk]]) 03:05, 12 May 2019 (UTC)<br />
<br />
== More info needed RE: archbuild ==<br />
<br />
With the semi-recent changes to chroot building and the addition of the '''archbuild''' convenience script, using a custom repo within your build chroot is no longer supported. It is therefore required to create (or symlink) a pacman.conf to <code>/usr/share/devtools/pacman-<some_name>.conf</code>, and then run <code><some_name>-x86_64-build</code> to build packages in a chroot that will have access to your custom repo.<br />
<br />
For more background, see [https://old.reddit.com/r/archlinux/comments/e70fjf/makechrootpkg_no_longer_seems_to_support_local/ this reddit post] and [https://old.reddit.com/r/archlinux/comments/e70fjf/makechrootpkg_no_longer_seems_to_support_local/fa10s82/ this response thread].<br />
<br />
[[User:Terminalmage|Terminalmage]] ([[User talk:Terminalmage|talk]]) 01:39, 10 December 2019 (UTC)<br />
<br />
== <s>Convenience way for AUR packages</s> ==<br />
<br />
The section “Convenience way” does not mention building AUR packages. I guess extra-x86_64-build would work for these, too, or am I wrong? [[User:Buzo|Buzo]] ([[User talk:Buzo|talk]]) 17:39, 12 May 2020 (UTC)<br />
<br />
:It works on anything with a {{ic|PKGBUILD}}. -- [[User:Alad|Alad]] ([[User talk:Alad|talk]]) 14:42, 22 February 2022 (UTC)<br />
<br />
== <s>"Setting up a chroot" in "Classic way" is missing the base package/group</s> ==<br />
<br />
$ mkarchroot $CHROOT/root base-devel<br />
should be<br />
$ mkarchroot $CHROOT/root base base-devel<br />
<br />
:No, it actually should not. Containers have no requirements to install {{ic|base}}. -- [[User:Alad|Alad]] ([[User talk:Alad|talk]]) 14:42, 22 February 2022 (UTC)<br />
<br />
:: Wouldn't that mean PKGBUILDs need packages in base as dependencies? Because otherwise they fail to build in the chroot, right? [[User:Rac27|Rac27]] ([[User talk:Rac27|talk]]) 14:54, 22 February 2022 (UTC)<br />
<br />
::: Right, unless those dependencies are pulled in by other packages. -- [[User:Alad|Alad]] ([[User talk:Alad|talk]]) 09:44, 23 February 2022 (UTC)<br />
<br />
== Note about BTRFS subvolumes ==<br />
<br />
There is a note attached underneath the first `mkarchroot` command that says:<br />
<br />
On btrfs, the chroot is created as a subvolume, so you have to remove it by removing the subvolume by running btrfs subvolume delete $CHROOT/root as root.<br />
<br />
According to this: https://wiki.archlinux.org/title/Btrfs#Deleting_a_subvolume , BTRFS subvolumes can just be removed normally with `rmdir` or `rm`. Should this note be removed?<br />
<br />
{{Unsigned|2022-11-21T13:29:06|Saltedcoffii}}<br />
<br />
== AUR pkg clean chroot test building ==<br />
<br />
As I am not able to edit this DeveloperWiki page, I would like to recommend adding a short Note under section "Convenience way", something like:<br />
<br />
{{Note|AUR maintainers can test building of an AUR pkg in a clean chroot using the {{ic|extra-x86_64-build}} build script.}}<br />
<br />
[[User:Cmsigler|Cmsigler]] ([[User talk:Cmsigler|talk]]) 14:06, 29 November 2022 (UTC)</div>Cmsiglerhttps://wiki.archlinux.org/index.php?title=Arch_Linux_on_a_VPS&diff=757967Arch Linux on a VPS2022-11-25T11:31:06Z<p>Cmsigler: Update information on servercheap.net to cloudfanatic.net</p>
<hr />
<div>[[Category:Installation process]]<br />
[[Category:Virtualization]]<br />
[[ja:Arch Linux VPS]]<br />
{{Related articles start}}<br />
{{Related|Server}}<br />
{{Related articles end}}<br />
From [[Wikipedia:Virtual private server]]:<br />
<br />
:Virtual private server (VPS) is a term used by Internet hosting services to refer to a virtual machine. The term is used for emphasizing that the virtual machine, although running in software on the same physical computer as other customers' virtual machines, is in many respects functionally equivalent to a separate physical computer, is dedicated to the individual customer's needs, has the privacy of a separate physical computer, and can be configured to run server software.<br />
<br />
This article discusses the use of Arch Linux on Virtual Private Servers, and includes some fixes and installation instructions specific to VPSes.<br />
<br />
== Official Arch Linux cloud image ==<br />
<br />
Arch Linux provides an official cloud image as part of the [https://gitlab.archlinux.org/archlinux/arch-boxes arch-boxes project]. The image comes with [[Cloud-init]] preinstalled and should work with most cloud providers.<br />
<br />
The image can be downloaded from the mirrors under the {{ic|images}} directory. Instructions for tested providers are listed below:<br />
<br />
{| class="wikitable"<br />
! Provider !! Locations !! Note<br />
|-<br />
| [https://digitalocean.com Digital Ocean] || Global ||<br />
# Find the cloud image on a mirror, ex: https://geo.mirror.pkgbuild.com/images/latest/Arch-Linux-x86_64-cloudimg.qcow2<br />
# Add the image as a custom image by [https://www.digitalocean.com/docs/images/custom-images/quickstart/#upload-images importing it]<br />
# [https://www.digitalocean.com/docs/images/custom-images/quickstart/#create-droplets-from-custom-images Create a new VM from the custom image]<br />
# SSH to the VM: {{ic|ssh root@''ip''}}<br />
|-<br />
| [https://www.hetzner.com/cloud Hetzner Cloud] || Nuremberg, Falkenstein (Germany), Helsinki (Finland) ||<br />
# Create a new VM with this user data:{{bc|#cloud-config<br><nowiki>vendor_data: {'enabled': false}</nowiki>}}<sup>The {{ic|vendor_data}} from Hetzner overrides the {{ic|distro}} and sets the default user to {{ic|root}} without setting {{ic|disable_root: false}}, meaning you cannot log in</sup><br />
# Boot the VM in rescue mode<br />
# SSH to the VM and download the cloud image from a mirror, ex: {{ic|curl -O <nowiki>https://geo.mirror.pkgbuild.com/images/latest/Arch-Linux-x86_64-cloudimg.qcow2</nowiki>}}<br />
# Write the image to the disk: {{ic|qemu-img convert -f qcow2 -O raw Arch-Linux-x86_64-cloudimg.qcow2 /dev/sda}}<br />
# Reboot the VM<br />
# SSH to the VM: {{ic|ssh arch@''ip''}}<br />
|-<br />
| [https://www.linode.com Linode] || [https://www.linode.com/global-infrastructure/ Multiple international locations] ||<br />
# Create a new VM and select Arch as the distribution (to use the Linode-provided image, stop here; otherwise proceed with the rest of the steps)<br />
# Boot the VM in rescue mode<br />
# Connect to the VM via the Lish console and download the basic image from a mirror, ex: {{ic|curl -O <nowiki>https://geo.mirror.pkgbuild.com/images/latest/Arch-Linux-x86_64-basic.qcow2</nowiki>}}<br />
# Install the qemu-utils package: {{ic|apt update && apt install qemu-utils}}<br />
# Write the image to the disk: {{ic|qemu-img convert -f qcow2 -O raw Arch-Linux-x86_64-basic.qcow2 /dev/sda}}<br />
# In the Linode manager, go to the VM's configurations menu and edit the configuration to change the kernel option to "Direct Disk"<br />
# Reboot the VM<br />
# SSH to the VM: {{ic|ssh arch@''ip''}}<br />
|-<br />
| [https://www.proxmox.com/ Proxmox] || N/A ||<br />
# Create a new VM<br />
# Select "Do not use any media" in OS section.<br />
# Remove created hard disk from your VM after VM creation completes.<br />
# Add the downloaded image to your VM using {{ic|qm importdisk}}, ex:<br> {{ic|qm importdisk 100 Arch-Linux-x86_64-cloudimg.qcow2 local}}<br />
# Add a cloudinit drive and make your configurations in Cloud-Init section.<br />
# Start the VM!<br />
|-<br />
| [https://www.kimsufi.com/en/dedicated-servers Kimsufi] [https://eco.ovhcloud.com/en-gb OVH Eco] || Canada, France || Paraphrasing the [https://docs.ovh.com/gb/en/dedicated/bringyourownimage official documentation]: <br />
# Navigate to the [https://www.ovh.com/manager/#/dedicated/server Dedicated Servers] section in your OVH management panel, then select the server you want to deploy Arch Linux to.<br />
# Click the ... button next to "Last operating system (OS) installed by OVHcloud" and choose "Install"<br />
# Select "Install from custom image"<br />
# For "Image URL" put {{ic|<nowiki>https://geo.mirror.pkgbuild.com/images/latest/Arch-Linux-x86_64-cloudimg.qcow2</nowiki>}}<br />
# For "Image type" select {{ic|qcow2}}<br />
# For "Checksum type" select {{ic|sha256}}<br />
# For "Image checksum" put the fingerprint value from https://geo.mirror.pkgbuild.com/images/latest/Arch-Linux-x86_64-cloudimg.qcow2.SHA256<br />
# Enable "ConfigDrive" to enter "Server host name" and your public "SSH key" (both are mandatory for Arch Cloud Init install)<br />
# Click "Install the system"<br />
# Wait (it takes a while) for an email from OVH titled "Installation of your image", it will say "Congratulations! Your dedicated server has just been installed! Connect to your server with ssh key provided during your installation."<br />
# Use {{ic|ssh arch@''ip''}} to log in.<br />
|-<br />
|}<br />
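Whichever provider you use, it is prudent to verify a downloaded image against the {{ic|.SHA256}} file published alongside it on the mirror. A self-contained sketch (the image here is an empty stand-in file, so the checksum used is the well-known empty-file hash):<br />

```shell
# Stand-in for the downloaded image (empty file, for illustration only):
touch Arch-Linux-x86_64-cloudimg.qcow2

# Checksum file as published on the mirror (this value matches the empty file):
echo 'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855  Arch-Linux-x86_64-cloudimg.qcow2' > Arch-Linux-x86_64-cloudimg.qcow2.SHA256

# Verify; prints "Arch-Linux-x86_64-cloudimg.qcow2: OK" on success:
sha256sum -c Arch-Linux-x86_64-cloudimg.qcow2.SHA256
```

With a real download, fetch both the image and its {{ic|.SHA256}} file from the same mirror directory before running the check.<br />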
<br />
== Providers that offer Arch Linux ==<br />
<br />
{{Style|Inconsistency, some language issues}}<br />
{{Warning|We cannot vouch for the honesty or quality of any provider. Please conduct due diligence before ordering.}}<br />
{{Note|This list is for providers where Arch Linux can be installed in a supported way. This excludes any container-based hosting such as LXC or Docker as well as OpenVZ.}}<br />
<br />
{| class="wikitable"<br />
! Provider !! Archiso release !! Virtualization !! Locations !! Notes<br />
|-<br />
| [https://www.cloudfanatic.net Cloudfanatic] || Latest || KVM || Chicago, IL, USA; Los Angeles, CA, USA; Raleigh, NC, USA; Phoenix, AZ, USA || Arch Linux supported. [https://cloudfanatic.net/operating-systems.php] Other Linux distributions, as well as BSDs, are also available.<br />
|-<br />
| [https://hetzner.com/cloud Hetzner] || 2020.06.01 || KVM || Nuremberg, DE; Falkenstein, DE; Helsinki, FI || You cannot choose Arch Linux directly on the order form. Order any other distribution (e.g. Ubuntu) first, then go to ISO Images, mount the Arch Linux ISO, reboot the server, and log in to the web console to complete the installation.<br />
|-<br />
| [https://www.linode.com Linode] || [https://www.linode.com/distributions Latest] || KVM || [https://www.linode.com/global-infrastructure/ Multiple international locations] || Linode instances are configured to run Arch's kernel by default. Linode provides custom kernels which can be selected in the manager settings. There are also community-supported kernels in the AUR, such as {{AUR|linux-linode}}.<br />
|-<br />
| [https://www.netcup.eu/ Netcup] || 2020.09.01 || KVM || Germany (DE) || German language: [https://www.netcup.de/ Netcup]<br />
|-<br />
| [https://monovm.com MonoVM] || Latest || VMware || USA - Canada - Netherlands - Germany - UK - France - Denmark || VMware-based VPS provider.<br />
|-<br />
| [https://www.ramnode.com/ RamNode] || [https://clientarea.ramnode.com/knowledgebase.php?action=displayarticle&id=48 2016.01.01] || [https://clientarea.ramnode.com/knowledgebase.php?action=displayarticle&id=39 SSD and SSD Cached:] [https://clientarea.ramnode.com/knowledgebase.php?action=displayarticle&id=52 KVM] || [https://clientarea.ramnode.com/knowledgebase.php?action=displayarticle&id=50 Alblasserdam, NL; Atlanta, GA-US; Los Angeles, CA-US; New York, NY-US; Seattle, WA-US] || You can request Host/CPU passthrough with KVM service.[https://clientarea.ramnode.com/knowledgebase.php?action=displayarticle&id=66] Frequent use of discount promotions.[https://twitter.com/search?q=ramnode%20code&src=typd] Arch must be installed manually from an ISO using a VNC viewer.<br />
|-<br />
| ServerCheap || &mdash; || &mdash; || &mdash; || See Cloudfanatic<br />
|-<br />
| [https://www.transip.eu/ TransIP] || latest || [https://www.transip.eu/vps/infrastructure/ KVM] || Amsterdam, NL || For latest image, submit ticket. Also registrar.<br />
|-<br />
| [https://www.vultr.com/ Vultr] || Latest || KVM || [https://www.vultr.com/locations/ Multiple International locations] || When deploying a new server just select the Arch install ISO from Vultr ISO Library. Then just manually run through the standard [[Installation guide|Arch installation guide]].<br />
|-<br />
| [https://www.misaka.io/ Misaka.io / zeptoVM] || Latest || KVM || [https://www.misaka.io/services/mc2 Multiple International locations] || Images are built every 24 hrs<br />
|-<br />
| [https://v.ps/ V.PS] || Latest || KVM || [https://v.ps/speedtest/ Multiple International locations] || Images are built monthly<br />
|-<br />
|}<br />
<br />
== Providers with Community provided Arch Linux support ==<br />
<br />
{{Warning|We cannot vouch for the honesty or quality of any provider. Please conduct due diligence before ordering.}}<br />
{{Note|Arch Linux is not officially supported by these providers. The images and scripts listed here are created by the community.}}<br />
<br />
{| class="wikitable"<br />
! Provider !! Installation Type !! Locations !! Notes<br />
|- <br />
| [https://aws.amazon.com/ Amazon Web Services] || [[Arch Linux AMIs for Amazon Web Services|Custom Images]] || Global ||<br />
|-<br />
| [https://digitalocean.com Digital Ocean] || [https://gitlab.archlinux.org/archlinux/arch-boxes#cloud-image Official Arch cloud image], [https://github.com/gh2o/digitalocean-debian-to-arch Conversion Script] or [https://github.com/robsonde/digitalocean_builder Custom Image] || Global || IPv6 does not work with custom images, but works with conversion script<br />
|-<br />
| [https://cloud.google.com/ Google Cloud Platform] || [https://github.com/GoogleCloudPlatform/compute-archlinux-image-builder Custom Image] || Global || <br />
|-<br />
|}</div>Cmsiglerhttps://wiki.archlinux.org/index.php?title=Arch_Linux_on_a_VPS&diff=752818Arch Linux on a VPS2022-10-14T12:50:21Z<p>Cmsigler: Add interim note that Servercheap.net is renaming to Cloudfanatic.net</p>
<hr />
<div>[[Category:Installation process]]<br />
[[Category:Virtualization]]<br />
[[ja:Arch Linux VPS]]<br />
{{Related articles start}}<br />
{{Related|Server}}<br />
{{Related articles end}}<br />
From [[Wikipedia:Virtual private server]]:<br />
<br />
:Virtual private server (VPS) is a term used by Internet hosting services to refer to a virtual machine. The term is used for emphasizing that the virtual machine, although running in software on the same physical computer as other customers' virtual machines, is in many respects functionally equivalent to a separate physical computer, is dedicated to the individual customer's needs, has the privacy of a separate physical computer, and can be configured to run server software.<br />
<br />
This article discusses the use of Arch Linux on Virtual Private Servers, and includes some fixes and installation instructions specific to VPSes.<br />
<br />
== Official Arch Linux cloud image ==<br />
<br />
Arch Linux provides an official cloud image as part of the [https://gitlab.archlinux.org/archlinux/arch-boxes arch-boxes project]. The image comes with [[Cloud-init]] preinstalled and should work with most cloud providers.<br />
<br />
The image can be downloaded from the mirrors under the {{ic|images}} directory. Instructions for tested providers is listed below:<br />
<br />
{| class="wikitable"<br />
! Provider !! Locations !! Note<br />
|-<br />
| [https://digitalocean.com Digital Ocean] || Global ||<br />
# Find the cloud image on a mirror, ex: https://geo.mirror.pkgbuild.com/images/latest/Arch-Linux-x86_64-cloudimg.qcow2<br />
# Add the image as a custom image by [https://www.digitalocean.com/docs/images/custom-images/quickstart/#upload-images importing it]<br />
# [https://www.digitalocean.com/docs/images/custom-images/quickstart/#create-droplets-from-custom-images Create a new VM from the custom image]<br />
# SSH to the VM: {{ic|ssh root@''ip''}}<br />
|-<br />
| [https://www.hetzner.com/cloud Hetzner Cloud] || Nuremberg, Falkenstein (Germany), Helsinki (Finland) ||<br />
# Create a new VM with this user data:{{bc|#cloud-config<br><nowiki>vendor_data: {'enabled': false}</nowiki>}}<sup>The {{ic|vendor_data}} from Hetzner overrides the {{ic|distro}} and sets the default user to {{ic|root}} without setting {{ic|disable_root: false}}, meaning you can not login</sup><br />
# Boot the VM in rescue mode<br />
# SSH to the VM and download the cloud image from a mirror, ex: {{ic|curl -O <nowiki>https://geo.mirror.pkgbuild.com/images/latest/Arch-Linux-x86_64-cloudimg.qcow2</nowiki>}}<br />
# Write the image to the disk: {{ic|qemu-img convert -f qcow2 -O raw Arch-Linux-x86_64-cloudimg.qcow2 /dev/sda}}<br />
# Reboot the VM<br />
# SSH to the VM: {{ic|ssh arch@''ip''}}<br />
|-<br />
| [https://www.linode.com Linode] || [https://www.linode.com/global-infrastructure/ Multiple international locations] ||<br />
# Create a new VM and select Arch as the distribution (to use the Linode-provided image, stop here; otherwise proceed with the rest of the steps)<br />
# Boot the VM in rescue mode<br />
# Connect to the VM via the Lish console and download the basic image from a mirror, ex: {{ic|curl -O <nowiki>https://geo.mirror.pkgbuild.com/images/latest/Arch-Linux-x86_64-basic.qcow2</nowiki>}}<br />
# Install the qemu-utils package: {{ic|apt update && apt install qemu-utils}}<br />
# Write the image to the disk: {{ic|qemu-img convert -f qcow2 -O raw Arch-Linux-x86_64-basic.qcow2 /dev/sda}}<br />
# In the Linode manager, go to the VM's configurations menu and edit the configuration to change the kernel option to "Direct Disk"<br />
# Reboot the VM<br />
# SSH to the VM: {{ic|ssh arch@''ip''}}<br />
|-<br />
| [https://www.proxmox.com/ Proxmox] || N/A ||<br />
# Create a new VM<br />
# Select "Do not use any media" in the OS section.<br />
# Remove the created hard disk from the VM after creation completes.<br />
# Add the downloaded image to your VM using {{ic|qm importdisk}}, ex:<br> {{ic|qm importdisk 100 Arch-Linux-x86_64-cloudimg.qcow2 local}}<br />
# Add a Cloud-Init drive and make your configuration in the Cloud-Init section.<br />
# Start the VM.<br />
|-<br />
|}<br />
<br />
== Providers that offer Arch Linux ==<br />
<br />
{{Style|Inconsistency, some language issues}}<br />
{{Warning|We cannot vouch for the honesty or quality of any provider. Please conduct due diligence before ordering.}}<br />
{{Note|This list is for providers where Arch Linux can be installed in a supported way. This excludes any container-based hosting such as LXC or Docker as well as OpenVZ.}}<br />
<br />
{| class="wikitable"<br />
! Provider !! Archiso release !! Virtualization !! Locations !! Notes<br />
|-<br />
| [https://hetzner.com/cloud Hetzner] || 2020.06.01 || KVM || Nuremberg, DE; Falkenstein, DE; Helsinki, FI || You cannot choose Arch Linux directly on the order form. Order the server with any other distribution first, then go to ISO Images, mount the Arch Linux ISO, reboot the server, and log in to the web console to complete the installation.<br />
|-<br />
| [https://www.linode.com Linode] || [https://www.linode.com/distributions Latest] || KVM || [https://www.linode.com/global-infrastructure/ Multiple international locations] || Linode instances are configured to run Arch's kernel by default. Linode provides custom kernels which can be selected in the manager settings. There are also community-supported kernels in the AUR, such as {{AUR|linux-linode}}.<br />
|-<br />
| [https://www.netcup.eu/ Netcup] || 2020.09.01 || KVM || Germany (DE) || German language: [https://www.netcup.de/ Netcup]<br />
|-<br />
| [https://monovm.com MonoVM] || Latest || VMware || USA; Canada; Netherlands; Germany; UK; France; Denmark || VMware-based VPS provider.<br />
|-<br />
| [https://www.ramnode.com/ RamNode] || [https://clientarea.ramnode.com/knowledgebase.php?action=displayarticle&id=48 2016.01.01] || [https://clientarea.ramnode.com/knowledgebase.php?action=displayarticle&id=39 SSD and SSD Cached:] [https://clientarea.ramnode.com/knowledgebase.php?action=displayarticle&id=52 KVM] || [https://clientarea.ramnode.com/knowledgebase.php?action=displayarticle&id=50 Alblasserdam, NL; Atlanta, GA-US; Los Angeles, CA-US; New York, NY-US; Seattle, WA-US] || You can request host/CPU passthrough with the KVM service.[https://clientarea.ramnode.com/knowledgebase.php?action=displayarticle&id=66] Frequent use of discount promotions.[https://twitter.com/search?q=ramnode%20code&src=typd] Arch must be installed manually from an ISO using a VNC viewer.<br />
|-<br />
| [https://www.servercheap.net Server Cheap] || Latest || KVM || Chicago, Illinois, USA || Arch Linux available on request.[https://servercheap.net/operating-systems.php] Windows, BSD, and many other Linux distributions are also offered. Note: an announcement dated 13 October 2022 states "Servercheap is now Cloudfanatic.net and more..".<br />
|-<br />
| [https://www.transip.eu/ TransIP] || Latest || [https://www.transip.eu/vps/infrastructure/ KVM] || Amsterdam, NL || For the latest image, submit a ticket. Also a domain registrar.<br />
|-<br />
| [https://www.vultr.com/ Vultr] || Latest || KVM || [https://www.vultr.com/locations/ Multiple International locations] || When deploying a new server, select the Arch install ISO from the Vultr ISO Library, then manually run through the standard [[Installation guide|Arch installation guide]].<br />
|-<br />
| [https://www.misaka.io/ Misaka.io / zeptoVM] || Latest || KVM || [https://www.misaka.io/services/mc2 Multiple International locations] || Images are built every 24 hours<br />
|-<br />
| [https://v.ps/ V.PS] || Latest || KVM || [https://v.ps/speedtest/ Multiple International locations] || Images are built monthly<br />
|-<br />
|}<br />
<br />
== Providers with Community provided Arch Linux support ==<br />
<br />
{{Warning|We cannot vouch for the honesty or quality of any provider. Please conduct due diligence before ordering.}}<br />
{{Note|Arch Linux is not officially supported by these providers. The images and scripts listed here are created by the community.}}<br />
<br />
{| class="wikitable"<br />
! Provider !! Installation Type !! Locations !! Notes<br />
|- <br />
| [https://aws.amazon.com/ Amazon Web Services] || [[Arch Linux AMIs for Amazon Web Services|Custom Images]] || Global ||<br />
|-<br />
| [https://digitalocean.com Digital Ocean] || [https://gitlab.archlinux.org/archlinux/arch-boxes#cloud-image Official Arch cloud image], [https://github.com/gh2o/digitalocean-debian-to-arch Conversion Script] or [https://github.com/robsonde/digitalocean_builder Custom Image] || Global || IPv6 does not work with custom images, but works with conversion script<br />
|-<br />
| [https://cloud.google.com/ Google Cloud Platform] || [https://github.com/GoogleCloudPlatform/compute-archlinux-image-builder Custom Image] || Global || <br />
|-<br />
|}</div>Cmsiglerhttps://wiki.archlinux.org/index.php?title=User:Cmsigler/personal_editing_sandbox&diff=752817User:Cmsigler/personal editing sandbox2022-10-14T12:45:27Z<p>Cmsigler: Nope, didn't work :(</p>
<hr />
<div># Item 1<br />
# Item 2<br />
# Item 3<br><br><br />
# Item 4<br />
<br />
# Item 5<br />
<br />
{{ic|1=,smb=}}<br />
<br />
{{ic|<nowiki>,smb=</nowiki>}}<br />
<br />
Trying out MediaWiki to see if the Cite extension works<ref>Sez ME!!!</ref><br />
WAAAAHHH... Waaaaaaaah... No soap :(</div>Cmsiglerhttps://wiki.archlinux.org/index.php?title=Pacman/Tips_and_tricks&diff=727765Pacman/Tips and tricks2022-04-27T16:45:50Z<p>Cmsigler: Minor -- dropped a period</p>
<hr />
<div>{{Lowercase title}}<br />
[[Category:Package manager]]<br />
[[de:Pacman-Tipps]]<br />
[[es:Pacman (Español)/Tips and tricks]]<br />
[[fa:Pacman tips]]<br />
[[fr:Pacman (Français)/Tips and tricks]]<br />
[[ja:Pacman ヒント]]<br />
[[pt:Pacman (Português)/Tips and tricks]]<br />
[[ru:Pacman (Русский)/Tips and tricks]]<br />
[[zh-hans:Pacman (简体中文)/Tips and tricks]]<br />
{{Related articles start}}<br />
{{Related|Mirrors}}<br />
{{Related|Creating packages}}<br />
{{Related articles end}}<br />
For general methods to improve the flexibility of the provided tips or ''pacman'' itself, see [[Core utilities]] and [[Bash]].<br />
<br />
== Maintenance ==<br />
<br />
{{Expansion|{{ic|1=Usage=}} introduced with ''pacman'' 4.2, see [http://allanmcrae.com/2014/12/pacman-4-2-released/]}}<br />
<br />
{{Note|Instead of using ''comm'' (which requires sorted input with ''sort'') in the sections below, you may also use {{ic|grep -Fxf}} or {{ic|grep -Fxvf}}.}}<br />
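As a quick illustration on throwaway input (the file names are arbitrary), the following sketch shows the ''grep'' equivalents of the two ''comm'' invocations used throughout this page:

```shell
# Two sorted sample lists (comm requires sorted input; grep does not).
printf 'a\nb\nc\n' > /tmp/installed.txt
printf 'b\nc\nd\n' > /tmp/wanted.txt

# Lines present in both files:
comm -12 /tmp/installed.txt /tmp/wanted.txt   # b, c
grep -Fxf /tmp/wanted.txt /tmp/installed.txt  # same result

# Lines only in the first file:
comm -23 /tmp/installed.txt /tmp/wanted.txt   # a
grep -Fxvf /tmp/wanted.txt /tmp/installed.txt # same result
```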
<br />
See also [[System maintenance]].<br />
<br />
=== Listing packages ===<br />
<br />
==== With version ====<br />
<br />
You may want to get the list of installed packages with their version, which is useful when reporting bugs or discussing installed packages.<br />
<br />
* List all explicitly installed packages: {{ic|pacman -Qe}}.<br />
* List all packages in the [[package group]] named {{ic|''group''}}: {{ic|pacman -Sg ''group''}}<br />
* List all foreign packages (typically manually downloaded and installed or packages removed from the repositories): {{ic|pacman -Qm}}.<br />
* List all native packages (installed from the sync database): {{ic|pacman -Qn}}.<br />
* List all explicitly installed native packages (i.e. present in the sync database) that are not direct or optional dependencies: {{ic|pacman -Qent}}.<br />
* List packages by regex: {{ic|pacman -Qs ''regex''}}.<br />
* List packages by regex with custom output format (needs {{Pkg|expac}}): {{ic|expac -s "%-30n %v" ''regex''}}.<br />
<br />
==== With size ====<br />
<br />
Figuring out which packages are largest can be useful when trying to free space on your hard drive. There are two options here: get the size of individual packages, or get the size of packages and their dependencies.<br />
<br />
===== Individual packages =====<br />
<br />
The following command will list all installed packages and their individual sizes:<br />
<br />
$ LC_ALL=C pacman -Qi | awk '/^Name/{name=$3} /^Installed Size/{print $4$5, name}' | sort -h<br />
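The ''awk'' filter pairs each {{ic|Name}} field with the {{ic|Installed Size}} field that follows it; {{ic|1=LC_ALL=C}} ensures the English field labels that the patterns match on. A minimal sketch on fabricated input (the package names and sizes are made up):

```shell
# Simulate two entries of `pacman -Qi` output and extract "size name" pairs,
# sorted by human-readable size (sort -h understands K/M/G suffixes).
printf 'Name            : foo\nInstalled Size  : 10.0 MiB\nName            : bar\nInstalled Size  : 2.0 KiB\n' |
  awk '/^Name/{name=$3} /^Installed Size/{print $4$5, name}' | sort -h
# → 2.0KiB bar
#   10.0MiB foo
```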
<br />
===== Packages and dependencies =====<br />
<br />
To list package sizes with their dependencies,<br />
<br />
* Install {{Pkg|expac}} and run {{ic|expac -H M '%m\t%n' {{!}} sort -h}}.<br />
* Run {{Pkg|pacgraph}} with the {{ic|-c}} option.<br />
<br />
To list the download size of several packages (leave {{ic|''packages''}} blank to list all packages):<br />
<br />
$ expac -S -H M '%k\t%n' ''packages''<br />
<br />
To list explicitly installed packages not in the [[meta package]] {{Pkg|base}} nor [[package group]] {{Grp|base-devel}} with size and description:<br />
<br />
$ expac -H M "%011m\t%-20n\t%10d" $(comm -23 <(pacman -Qqen | sort) <({ pacman -Qqg base-devel; expac -l '\n' '%E' base; } | sort -u)) | sort -n<br />
<br />
To list the packages marked for upgrade with their download size:<br />
<br />
$ expac -S -H M '%k\t%n' $(pacman -Qqu) | sort -sh<br />
<br />
==== By date ====<br />
<br />
To list the 20 last installed packages with {{Pkg|expac}}, run:<br />
<br />
$ expac --timefmt='%Y-%m-%d %T' '%l\t%n' | sort | tail -n 20<br />
<br />
or, with seconds since the epoch (1970-01-01 UTC):<br />
<br />
$ expac --timefmt=%s '%l\t%n' | sort -n | tail -n 20<br />
<br />
==== Not in a specified group, repository or meta package ====<br />
<br />
{{Note|To get a list of packages installed as dependencies but no longer required by any installed package, see [[#Removing unused packages (orphans)]].<br />
}}<br />
<br />
List explicitly installed packages not in the {{Pkg|base}} [[meta package]]:<br />
<br />
$ comm -23 <(pacman -Qqe | sort) <(expac -l '\n' '%E' base | sort)<br />
<br />
List explicitly installed packages not in the {{Pkg|base}} meta package or {{Grp|base-devel}} [[package group]]:<br />
<br />
$ comm -23 <(pacman -Qqe | sort) <({ pacman -Qqg base-devel; expac -l '\n' '%E' base; } | sort -u)<br />
<br />
List all installed packages unrequired by other packages, and which are not in the {{Pkg|base}} meta package or {{Grp|base-devel}} package group:<br />
<br />
$ comm -23 <(pacman -Qqt | sort) <({ pacman -Qqg base-devel; echo base; } | sort -u)<br />
<br />
As above, but with descriptions:<br />
<br />
$ expac -H M '%-20n\t%10d' $(comm -23 <(pacman -Qqt | sort) <({ pacman -Qqg base-devel; echo base; } | sort -u))<br />
<br />
List all installed packages that are ''not'' in the specified repository ''repo_name'':<br />
<br />
$ comm -23 <(pacman -Qq | sort) <(pacman -Sql ''repo_name'' | sort)<br />
<br />
List all installed packages that are in the ''repo_name'' repository:<br />
<br />
$ comm -12 <(pacman -Qq | sort) <(pacman -Sql ''repo_name'' | sort)<br />
<br />
List all packages on the Arch Linux ISO that are not in the {{Pkg|base}} meta package:<br />
<br />
<nowiki>$ comm -23 <(curl https://gitlab.archlinux.org/archlinux/archiso/-/raw/master/configs/releng/packages.x86_64) <(expac -l '\n' '%E' base | sort)</nowiki><br />
<br />
{{Tip|Alternatively, use {{ic|combine}} (instead of {{ic|comm}}) from the {{Pkg|moreutils}} package which has a syntax that is easier to remember. See {{man|1|combine}}.}}<br />
<br />
==== Development packages ====<br />
<br />
To list all development/unstable packages, run:<br />
<br />
$ pacman -Qq | grep -Ee '-(bzr|cvs|darcs|git|hg|svn)$'<br />
<br />
=== Browsing packages ===<br />
<br />
To browse all installed packages with an instant preview of each package:<br />
<br />
$ pacman -Qq | fzf --preview 'pacman -Qil {}' --layout=reverse --bind 'enter:execute(pacman -Qil {} | less)'<br />
<br />
This uses [[fzf]] to present a two-pane view listing all packages with package info shown on the right.<br />
<br />
Enter letters to filter the list of packages; use arrow keys (or {{ic|Ctrl-j}}/{{ic|Ctrl-k}}) to navigate; press {{ic|Enter}} to see package info under ''less''.<br />
<br />
To browse all packages currently known to ''pacman'' (both installed and not yet installed) in a similar way, using fzf, use:<br />
<br />
$ pacman -Slq | fzf --preview 'pacman -Si {}' --layout=reverse<br />
<br />
The navigational keybindings are the same, although {{ic|Enter}} will not work in the same way.<br />
<br />
=== Listing files owned by a package with size ===<br />
<br />
This one might come in handy if you have found that a specific package uses a huge amount of space and you want to find out which of its files account for most of it.<br />
<br />
$ pacman -Qlq ''package'' | grep -v '/$' | xargs -r du -h | sort -h<br />
<br />
=== Identify files not owned by any package ===<br />
<br />
If your system has stray files not owned by any package (a common case if you do not [[Enhance system stability#Use the package manager to install software|use the package manager to install software]]), you may want to find such files in order to clean them up.<br />
<br />
One method is to use {{ic|pacreport --unowned-files}} as the root user from {{Pkg|pacutils}} which will list unowned files among other details.<br />
<br />
Another is to list all files of interest and check them against ''pacman'':<br />
<br />
# find /etc /usr /opt | LC_ALL=C pacman -Qqo - 2>&1 >&- >/dev/null | cut -d ' ' -f 5-<br />
<br />
{{Tip|The {{Pkg|lostfiles}} script performs similar steps, but also includes an extensive blacklist to remove common false positives from the output.}}<br />
<br />
=== Tracking unowned files created by packages ===<br />
<br />
Most systems will slowly collect several [http://ftp.rpm.org/max-rpm/s1-rpm-inside-files-list-directives.html#S3-RPM-INSIDE-FLIST-GHOST-DIRECTIVE ghost] files such as state files, logs, indexes, etc. through the course of usual operation.<br />
<br />
{{ic|pacreport}} from {{Pkg|pacutils}} can be used to track these files and their associations via {{ic|/etc/pacreport.conf}} (see {{man|1|pacreport|FILES}}).<br />
<br />
An example may look something like this (abridged):<br />
<br />
{{hc|/etc/pacreport.conf|2=<br />
[Options]<br />
IgnoreUnowned = usr/share/applications/mimeinfo.cache<br />
<br />
[PkgIgnoreUnowned]<br />
alsa-utils = var/lib/alsa/asound.state<br />
bluez = var/lib/bluetooth<br />
ca-certificates = etc/ca-certificates/trust-source/*<br />
dbus = var/lib/dbus/machine-id<br />
glibc = etc/ld.so.cache<br />
grub = boot/grub/*<br />
linux = boot/initramfs-linux.img<br />
pacman = var/lib/pacman/local<br />
update-mime-database = usr/share/mime/magic<br />
}}<br />
<br />
Then, when using {{ic|pacreport --unowned-files}} as the root user, any unowned files will be listed if the associated package is no longer installed (or if any new files have been created).<br />
<br />
Additionally, [https://github.com/CyberShadow/aconfmgr aconfmgr] ({{AUR|aconfmgr-git}}) allows tracking modified and orphaned files using a configuration script.<br />
<br />
=== Removing unused packages (orphans) ===<br />
<br />
For recursively removing orphans and their configuration files:<br />
<br />
# pacman -Qtdq | pacman -Rns -<br />
<br />
If no orphans were found, the output is {{ic|error: argument '-' specified with empty stdin}}. This is expected as no arguments were passed to {{ic|pacman -Rns}}.<br />
<br />
{{Note|The arguments {{ic|-Qt}} list only true orphans. To include packages which are ''optionally'' required by another package, pass the {{ic|-t}} flag twice (''i.e.'', {{ic|-Qtt}}).}}<br />
<br />
=== Removing everything but essential packages ===<br />
<br />
If it is ever necessary to remove all packages except the essential ones, one method is to set the installation reason of the non-essential packages to "installed as a dependency" and then remove all unnecessary dependencies.<br />
<br />
First, for all the packages "explicitly installed", change their installation reason to "installed as a dependency":<br />
<br />
# pacman -D --asdeps $(pacman -Qqe)<br />
<br />
Then, change the installation reason to "explicitly installed" of only the essential packages, those you '''do not''' want to remove, in order to avoid targeting them:<br />
<br />
# pacman -D --asexplicit base linux linux-firmware<br />
<br />
{{Note|<br />
* Additional packages can be added to the above command in order to avoid being removed. See [[Installation guide#Install essential packages]] for more info on other packages that may be necessary for a fully functional base system.<br />
* This will also select the bootloader's package for removal. The system should still be bootable, but the boot parameters might not be changeable without it.<br />
}}<br />
<br />
Finally, follow the instructions in [[#Removing unused packages (orphans)]] to remove all packages that are "installed as a dependency".<br />
<br />
=== Getting the dependencies list of several packages ===<br />
<br />
Dependencies are alphabetically sorted and duplicates are removed.<br />
<br />
{{Note|To only show the tree of locally installed packages, use {{ic|pacman -Qi}}.}}<br />
<br />
$ LC_ALL=C pacman -Si ''packages'' | awk -F'[:<=>]' '/^Depends/ {print $2}' | xargs -n1 | sort -u<br />
<br />
Alternatively, with {{Pkg|expac}}: <br />
<br />
$ expac -l '\n' %E -S ''packages'' | sort -u<br />
<br />
=== Listing changed backup files ===<br />
<br />
{{Accuracy|What is the connection of this section to [[System backup]]? Listing modified "backup files" does not show files which are not tracked by ''pacman''.|section=Warning about listing changed backup files}}<br />
<br />
If you want to back up your system configuration files, you could copy all files in {{ic|/etc/}} but usually you are only interested in the files that you have changed. Modified [[Pacnew and Pacsave files#Package backup files|backup files]] can be viewed with the following command:<br />
<br />
# pacman -Qii | awk '/^MODIFIED/ {print $2}'<br />
<br />
Running this command with root permissions will ensure that files readable only by root (such as {{ic|/etc/sudoers}}) are included in the output.<br />
<br />
{{Tip|See [[#Listing all changed files from packages]] to list all changed files ''pacman'' knows about, not only backup files.}}<br />
<br />
=== Back up the pacman database ===<br />
<br />
The following command can be used to back up the local ''pacman'' database:<br />
<br />
$ tar -cjf pacman_database.tar.bz2 /var/lib/pacman/local<br />
<br />
Store the backup ''pacman'' database file on one or more offline media, such as a USB stick, external hard drive, or CD-R.<br />
<br />
The database can be restored by moving the {{ic|pacman_database.tar.bz2}} file into the {{ic|/}} directory and executing the following command:<br />
<br />
# tar -xjvf pacman_database.tar.bz2<br />
<br />
{{Note|If the ''pacman'' database files are corrupted, and there is no backup file available, there exists some hope of rebuilding the ''pacman'' database. Consult [[#Restore pacman's local database]].}}<br />
<br />
{{Tip|The {{AUR|pakbak-git}} package provides a script and a [[systemd]] service to automate the task. Configuration is possible in {{ic|/etc/pakbak.conf}}.}}<br />
<br />
=== Check changelogs easily ===<br />
<br />
When maintainers update packages, commits are often commented in a useful fashion. Users can quickly check these from the command line by installing {{AUR|pacolog}}. This utility lists recent commit messages for packages from the official repositories or the AUR, by using {{ic|pacolog ''package''}}.<br />
<br />
== Installation and recovery ==<br />
<br />
Alternative ways of getting and restoring packages.<br />
<br />
=== Installing packages from a CD/DVD or USB stick ===<br />
<br />
{{Merge|#Custom local repository|Use as an example and avoid duplication}}<br />
<br />
To download packages, or groups of packages:<br />
<br />
# cd ~/Packages<br />
# pacman -Syw --cachedir . base base-devel grub-bios xorg gimp<br />
# repo-add ./custom.db.tar.gz ./*<br />
<br />
By default, ''pacman'' references the host installation's local database, so it will not resolve and download dependencies that are already installed on the host. In cases where all packages and dependencies are wanted, it is recommended to create a temporary blank database and reference it with {{ic|--dbpath}}:<br />
<br />
# mkdir /tmp/blankdb<br />
# pacman -Syw --cachedir . --dbpath /tmp/blankdb base base-devel grub-bios xorg gimp<br />
# repo-add ./custom.db.tar.gz ./*<br />
<br />
Then you can burn the "Packages" folder to a CD/DVD or transfer it to a USB stick, external HDD, etc.<br />
<br />
To install:<br />
<br />
'''1.''' Mount the media:<br />
<br />
# mkdir /mnt/repo<br />
# mount /dev/sr0 /mnt/repo #For a CD/DVD.<br />
# mount /dev/sdxY /mnt/repo #For a USB stick.<br />
<br />
'''2.''' Edit {{ic|pacman.conf}} and add this repository ''before'' the other ones (e.g. core, extra). This is important: do not just uncomment the example at the bottom of the file. Listing the repository first ensures that the files from the CD/DVD/USB take precedence over those in the standard repositories:<br />
<br />
{{hc|/etc/pacman.conf|2=<br />
[custom]<br />
SigLevel = PackageRequired<br />
Server = file:///mnt/repo/Packages}}<br />
<br />
'''3.''' Finally, synchronize the ''pacman'' database to be able to use the new repository:<br />
<br />
# pacman -Syu<br />
<br />
=== Custom local repository ===<br />
<br />
Use the ''repo-add'' script included with ''pacman'' to generate a database for a personal repository. Use {{ic|repo-add --help}} for more details on its usage. <br />
A package database is a tar file, optionally compressed. Valid extensions are ''.db'' or ''.files'' followed by an archive extension of ''.tar'', ''.tar.gz'', ''.tar.bz2'', ''.tar.xz'', ''.tar.zst'', or ''.tar.Z''. The file does not need to exist, but all parent directories must exist.<br />
<br />
To add a new package to the database, or to replace the old version of an existing package in the database, run:<br />
<br />
$ repo-add ''/path/to/repo.db.tar.gz /path/to/package-1.0-1-x86_64.pkg.tar.xz''<br />
<br />
The database and the packages do not need to be in the same directory when using ''repo-add'', but keep in mind that when using ''pacman'' with that database, they should be together. Storing all the built packages to be included in the repository in one directory also allows using shell glob expansion to add or update multiple packages at once:<br />
<br />
$ repo-add ''/path/to/repo.db.tar.gz /path/to/*.pkg.tar.xz''<br />
<br />
{{Warning|''repo-add'' adds the entries into the database in the same order as passed on the command line. If multiple versions of the same package are involved, care must be taken to ensure that the correct version is added last. In particular, note that lexical order used by the shell depends on the locale and differs from the {{man|8|vercmp}} ordering used by ''pacman''.}}<br />
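The pitfall can be illustrated with plain ''sort'' (GNU {{ic|sort -V}} only approximates ''vercmp'' ordering, but the contrast with lexical order is the same):

```shell
# Lexical order puts 1.10 before 1.9 (the character '1' sorts before '9'),
# so a shell glob would pass the *older* 1.9 package to repo-add last.
printf 'pkg-1.10-1-x86_64.pkg.tar.zst\npkg-1.9-1-x86_64.pkg.tar.zst\n' | LC_ALL=C sort
# → pkg-1.10-1-x86_64.pkg.tar.zst
#   pkg-1.9-1-x86_64.pkg.tar.zst

# Version ordering puts 1.9 before 1.10, as vercmp would:
printf 'pkg-1.10-1-x86_64.pkg.tar.zst\npkg-1.9-1-x86_64.pkg.tar.zst\n' | sort -V
# → pkg-1.9-1-x86_64.pkg.tar.zst
#   pkg-1.10-1-x86_64.pkg.tar.zst
```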
<br />
If you are looking to support multiple architectures, take precautions to prevent errors from occurring. Each architecture should have its own directory tree:<br />
<br />
{{hc|$ tree ~/customrepo/ {{!}} sed "s/$(uname -m)/''arch''/g"|<br />
/home/archie/customrepo/<br />
└── ''arch''<br />
├── customrepo.db -> customrepo.db.tar.xz<br />
├── customrepo.db.tar.xz<br />
├── customrepo.files -> customrepo.files.tar.xz<br />
├── customrepo.files.tar.xz<br />
└── personal-website-git-b99cce0-1-''arch''.pkg.tar.xz<br />
<br />
1 directory, 5 files<br />
}}<br />
<br />
The ''repo-add'' executable checks that its arguments are appropriate. If they are not, you will run into error messages similar to this:<br />
<br />
==> ERROR: '/home/archie/customrepo/''arch''/foo-''arch''.pkg.tar.xz' does not have a valid database archive extension.<br />
<br />
''repo-remove'' is used in the same manner to remove packages from the package database, except that only package names are specified on the command line:<br />
<br />
$ repo-remove ''/path/to/repo.db.tar.gz pkgname''<br />
<br />
Once the local repository database has been created, add the repository to {{ic|pacman.conf}} for each system that is to use it. An example of a custom repository entry is included in the default {{ic|pacman.conf}}. The repository's name is the database filename with the file extensions omitted; in the case of the example above, it would simply be ''repo''. Reference the repository's location using a {{ic|file://}} URL, or via FTP using ftp://localhost/path/to/directory.<br />
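For the example database {{ic|/path/to/repo.db.tar.gz}} above, the entry might look like the following sketch ({{ic|1=SigLevel = Optional TrustAll}} is one common choice for an unsigned personal repository; adjust it to your signing setup):<br />
<br />
{{hc|/etc/pacman.conf|2=<br />
[repo]<br />
SigLevel = Optional TrustAll<br />
Server = file:///path/to}}<br />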
<br />
If willing, add the custom repository to the [[Unofficial user repositories|list of unofficial user repositories]], so that the community can benefit from it.<br />
<br />
=== Network shared pacman cache ===<br />
<br />
{{Merge|Package_Proxy_Cache|Same topic}}<br />
If you happen to run several Arch boxes on your LAN, you can share packages between them to greatly decrease download times. Keep in mind that you should not share between different architectures (e.g. i686 and x86_64) or you will run into problems.<br />
<br />
==== Read-only cache ====<br />
<br />
{{Note|1=If ''pacman'' fails to download three packages from the server, it will switch to another mirror instead. See https://bbs.archlinux.org/viewtopic.php?id=268066.}}<br />
<br />
If you are looking for a quick solution, you can simply run a [https://gist.github.com/willurd/5720255 basic temporary webserver] which other computers can use as their first mirror.<br />
<br />
First of all, make pacman databases available into the folder you will serve:<br />
<br />
# ln -s /var/lib/pacman/sync/*.db /var/cache/pacman/pkg/<br />
<br />
Then start serving this folder. For example, with [[Python]] [https://docs.python.org/3/library/http.server.html#http-server-cli http.server] module:<br />
$ python -m http.server -d /var/cache/pacman/pkg/<br />
<br />
{{Tip|By default, Python {{ic|http.server}} listens on port {{ic|8000}}. To use another port, simply add it as an argument:<br />
<br />
$ python -m http.server -d /var/cache/pacman/pkg/ 8080<br />
}}<br />
<br />
Then [[textedit|edit]] {{ic|/etc/pacman.d/mirrorlist}} on each client machine to add this server as the top entry:<br />
<br />
{{hc|/etc/pacman.d/mirrorlist|2=<br />
Server = http://''server-ip'':''port''<br />
...<br />
}}<br />
<br />
{{Warning|Do '''not''' append {{ic|/repos/$repo/os/$arch}} to this custom server like for other entries, as this hierarchy does not exist and therefore queries will fail.}}<br />
<br />
If looking for a more standalone solution, {{Pkg|darkhttpd}} offers a very minimal webserver. Replace the previous {{ic|python}} command with e.g.:<br />
<br />
$ sudo -u http darkhttpd /var/cache/pacman/pkg --no-server-id<br />
<br />
You could also run darkhttpd as a ''systemd'' service for convenience: see [[Systemd#Writing unit files]].<br />
<br />
{{Pkg|miniserve}}, a small web server written in Rust, can also be used:<br />
<br />
$ miniserve /var/cache/pacman/pkg<br />
<br />
Then edit {{ic|/etc/pacman.d/mirrorlist}} as above, using the first URL that ''miniserve'' reports being available at.<br />
<br />
If you are already running a web server for some other purpose, you might wish to reuse that as your local repository server instead. For example, if you already serve a site with [[nginx]], you can add an ''nginx'' server block listening on port 8080:<br />
<br />
{{hc|/etc/nginx/nginx.conf|<br />
server {<br />
listen 8080;<br />
root /var/cache/pacman/pkg;<br />
server_name myarchrepo.localdomain;<br />
try_files $uri $uri/;<br />
}<br />
}}<br />
<br />
Remember to [[restart]] {{ic|nginx.service}} after making this change.<br />
<br />
{{Tip|Whichever web server you use, make sure the firewall configuration (if any) allows the configured port to be reached by the desired traffic, and disallows any undesired traffic. See [[Security#Network and firewalls]].}}<br />
<br />
==== Overlay mount of read-only cache ====<br />
<br />
It is possible to use one machine on a local network as a read-only package cache by [[Overlay_filesystem|overlay mounting]] its {{ic|/var/cache/pacman/pkg}} directory. Such a configuration is advantageous if this server has a reasonably comprehensive selection of up-to-date packages installed which are also used by other machines. This is useful for maintaining a number of machines at the end of a low-bandwidth upstream connection.<br />
<br />
As an example, to use this method:<br />
<br />
# mkdir /tmp/remote_pkg /mnt/workdir_pkg /tmp/pacman_pkg<br />
# sshfs ''remote_username''@''remote_pkgcache_addr'':/var/cache/pacman/pkg /tmp/remote_pkg -C<br />
# mount -t overlay overlay -o lowerdir=/tmp/remote_pkg,upperdir=/var/cache/pacman/pkg,workdir=/mnt/workdir_pkg /tmp/pacman_pkg<br />
<br />
{{Note|The working directory must be an empty directory on the same mounted device as the upper directory. See [[Overlay filesystem#Usage]].}}<br />
<br />
{{Tip|1=If listing the {{ic|/tmp/pacman_pkg}} overlay directory gives errors, e.g., "Stale file handle", try overlay mounting with options {{ic|1=-o redirect_dir=off -o index=off}}. }}<br />
<br />
After this, run ''pacman'' using the option {{ic|--cachedir /tmp/pacman_pkg}}, e.g.:<br />
<br />
# pacman -Syu --cachedir /tmp/pacman_pkg<br />
<br />
==== Distributed read-only cache ====<br />
<br />
There are Arch-specific tools for automatically discovering other computers on your network offering a package cache. Try {{Pkg|pacredir}}, [[pacserve]], {{AUR|pkgdistcache}}, or {{AUR|paclan}}. pkgdistcache uses Avahi instead of plain UDP, which may work better in certain home networks that route instead of bridge between WiFi and Ethernet.<br />
<br />
Historically, there was [https://bbs.archlinux.org/viewtopic.php?id=64391 PkgD] and [https://github.com/toofishes/multipkg multipkg], but they are no longer maintained.<br />
<br />
==== Read-write cache ====<br />
<br />
In order to share packages between multiple computers, simply share {{ic|/var/cache/pacman/}} using any network-based mount protocol. This section shows how to use [[SSHFS]] to share a package cache plus the related library-directories between multiple computers on the same local network. Keep in mind that a network shared cache can be slow depending on the file-system choice, among other factors.<br />
<br />
First, install any network-supporting filesystem packages: {{Pkg|sshfs}}, {{Pkg|curlftpfs}}, {{Pkg|samba}} or {{Pkg|nfs-utils}}.<br />
<br />
{{Tip|<br />
* To use ''sshfs'', consider reading [[Using SSH Keys]].<br />
* By default, ''smbfs'' does not serve filenames that contain colons, which results in the client downloading the offending package afresh. To prevent this, use the {{ic|mapchars}} mount option on the client.<br />
}}<br />
<br />
Then, to share the actual packages, mount {{ic|/var/cache/pacman/pkg}} from the server to {{ic|/var/cache/pacman/pkg}} on every client machine.<br />
<br />
{{Warning|Do not make {{ic|/var/cache/pacman/pkg}} or any of its ancestors (e.g., {{ic|/var}}) a symlink. Pacman expects these to be directories. When ''pacman'' re-installs or upgrades itself, it will remove the symlinks and create empty directories instead. However during the transaction ''pacman'' relies on some files residing there, hence breaking the update process. Refer to {{Bug|50298}} for further details.}}<br />
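<br />
For example, with [[SSHFS]], each client could mount the server's cache directly over its own. This is only one possible invocation; ''pkgserver'' is a placeholder hostname:<br />
<br />
 # sshfs ''remote_username''@pkgserver:/var/cache/pacman/pkg /var/cache/pacman/pkg -o allow_other<br />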
<br />
==== Two-way with rsync ====<br />
<br />
Another approach in a local environment is [[rsync]]. Choose a server for caching and enable the [[Rsync#As a daemon|rsync daemon]]. On clients synchronize two-way with this share via the rsync protocol. Filenames that contain colons are no problem for the rsync protocol.<br />
<br />
Draft example for a client, using {{ic|uname -m}} within the share name ensures an architecture-dependent sync:<br />
# rsync rsync://server/share_$(uname -m)/ /var/cache/pacman/pkg/ ...<br />
# pacman ...<br />
# paccache ...<br />
# rsync /var/cache/pacman/pkg/ rsync://server/share_$(uname -m)/ ...<br />
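<br />
A more concrete sketch of the client-side sequence; the specific rsync and paccache options here ({{ic|-rtv}}, {{ic|-rk2}}) are only one possible choice for the elided parts of the draft:<br />
<br />
 # rsync -rtv rsync://server/share_$(uname -m)/ /var/cache/pacman/pkg/<br />
 # pacman -Syu<br />
 # paccache -rk2<br />
 # rsync -rtv /var/cache/pacman/pkg/ rsync://server/share_$(uname -m)/<br />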
<br />
==== Dynamic reverse proxy cache using nginx ====<br />
<br />
[[nginx]] can be used to proxy package requests to official upstream mirrors and cache the results to the local disk. All subsequent requests for that package will be served directly from the local cache, minimizing the amount of internet traffic needed to update a large number of computers. <br />
<br />
In this example, the cache server will run at {{ic|<nowiki>http://cache.domain.example:8080/</nowiki>}} and store the packages in {{ic|/srv/http/pacman-cache/}}. <br />
<br />
Install [[nginx]] on the computer that is going to host the cache. Create the directory for the cache and adjust the permissions so nginx can write files to it:<br />
<br />
# mkdir /srv/http/pacman-cache<br />
# chown http:http /srv/http/pacman-cache<br />
<br />
Use the [https://github.com/nastasie-octavian/nginx_pacman_cache_config/blob/c54eca4776ff162ab492117b80be4df95880d0e2/nginx.conf nginx pacman cache config] as a starting point for {{ic|/etc/nginx/nginx.conf}}. Check that the {{ic|resolver}} directive works for your needs. In the upstream server blocks, configure the {{ic|proxy_pass}} directives with addresses of official mirrors, see examples in the configuration file about the expected format. Once you are satisfied with the configuration file [[Nginx#Running|start and enable nginx]].<br />
<br />
In order to use the cache each Arch Linux computer (including the one hosting the cache) must have the following line at the top of the {{ic|mirrorlist}} file:<br />
<br />
{{hc|/etc/pacman.d/mirrorlist|<nowiki><br />
Server = http://cache.domain.example:8080/$repo/os/$arch<br />
...<br />
</nowiki>}}<br />
<br />
{{Note| You will need to create a method to clear old packages, as the cache directory will continue to grow over time. {{ic|paccache}} (which is provided by {{Pkg|pacman-contrib}}) can be used to automate this using retention criteria of your choosing. For example, {{ic|find /srv/http/pacman-cache/ -type d -exec paccache -v -r -k 2 -c {} \;}} will keep the last 2 versions of packages in your cache directory.}}<br />
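<br />
The cleanup can be automated with a [[Systemd/Timers|systemd timer]]; a sketch, where the unit names and the weekly schedule are arbitrary choices:<br />
<br />
{{hc|/etc/systemd/system/pacman-cache-clean.service|<nowiki><br />
[Unit]<br />
Description=Clean the nginx pacman proxy cache<br />
<br />
[Service]<br />
Type=oneshot<br />
ExecStart=/bin/sh -c 'find /srv/http/pacman-cache/ -type d -exec paccache -v -r -k 2 -c {} \;'<br />
</nowiki>}}<br />
<br />
{{hc|/etc/systemd/system/pacman-cache-clean.timer|<nowiki><br />
[Unit]<br />
Description=Weekly pacman proxy cache cleanup<br />
<br />
[Timer]<br />
OnCalendar=weekly<br />
Persistent=true<br />
<br />
[Install]<br />
WantedBy=timers.target<br />
</nowiki>}}<br />
<br />
Then [[enable]] {{ic|pacman-cache-clean.timer}}.<br />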
<br />
==== Pacoloco proxy cache server ====<br />
<br />
[https://github.com/anatol/pacoloco Pacoloco] is an easy-to-use proxy cache server for ''pacman'' repositories. It also allows [https://github.com/anatol/pacoloco/commit/048b09956b0d8ef71c0ed1f804fd332d9ab5e3c8 automatic prefetching] of the cached packages.<br />
<br />
It can be installed as {{Pkg|pacoloco}}. Open the configuration file and add ''pacman'' mirrors:<br />
<br />
{{hc|/etc/pacoloco.yaml|<nowiki><br />
port: 9129<br />
repos:<br />
mycopy:<br />
urls:<br />
- http://mirror.lty.me/archlinux<br />
- http://mirrors.kernel.org/archlinux<br />
</nowiki>}}<br />
<br />
[[Restart]] {{ic|pacoloco.service}} and the proxy repository will be available at {{ic|http://''myserver'':9129/repo/mycopy}}.<br />
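<br />
Each client (including the host itself) can then point its mirrorlist at the proxy. The repository name {{ic|mycopy}} matches the configuration above; the path format here follows pacoloco's README:<br />
<br />
{{hc|/etc/pacman.d/mirrorlist|<nowiki><br />
Server = http://myserver:9129/repo/mycopy/$repo/os/$arch<br />
</nowiki>}}<br />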
<br />
==== Flexo proxy cache server ====<br />
<br />
[https://github.com/nroi/flexo Flexo] is yet another proxy cache server for ''pacman'' repositories. Flexo is available as {{AUR|flexo-git}}. Once installed, [[start]] the {{ic|flexo.service}} unit.<br />
<br />
Flexo runs on port {{ic|7878}} by default. Add {{ic|1=Server = http://''myserver'':7878/$repo/os/$arch}} to the top of your {{ic|/etc/pacman.d/mirrorlist}} so that ''pacman'' downloads packages via Flexo.<br />
<br />
==== Synchronize pacman package cache using synchronization programs ====<br />
<br />
Use [[Syncthing]] or [[Resilio Sync]] to synchronize the ''pacman'' cache folders (i.e. {{ic|/var/cache/pacman/pkg}}).<br />
<br />
==== Preventing unwanted cache purges ====<br />
<br />
By default, {{ic|pacman -Sc}} removes package tarballs from the cache that correspond to packages not installed on the machine the command was issued on. Because ''pacman'' cannot predict what packages are installed on all machines that share the cache, it will end up deleting files that should be kept.<br />
<br />
To clean up the cache so that only ''outdated'' tarballs are deleted, add this entry in the {{ic|[options]}} section of {{ic|/etc/pacman.conf}}:<br />
<br />
CleanMethod = KeepCurrent<br />
<br />
=== Recreate a package from the file system ===<br />
<br />
To recreate a package from the file system, use {{AUR|fakepkg}}. Files from the system are taken as they are, hence any modifications will be present in the assembled package. Distributing the recreated package is therefore discouraged; see [[ABS]] and [[Arch Linux Archive]] for alternatives.<br />
<br />
=== List of installed packages ===<br />
<br />
Keeping a list of all explicitly installed packages can be useful to back up a system or to quicken the installation of a new one:<br />
<br />
$ pacman -Qqe > pkglist.txt<br />
<br />
{{Note|<br />
* With option {{ic|-t}}, packages that are already required by other explicitly installed packages are not listed. When reinstalling from such a list, they will still be installed, but only as dependencies.<br />
* With option {{ic|-n}}, foreign packages (e.g. from the [[AUR]]) are omitted from the list.<br />
* Use {{ic|comm -13 <(pacman -Qqdt {{!}} sort) <(pacman -Qqdtt {{!}} sort) > optdeplist.txt}} to also create a list of the installed optional dependencies which can be reinstalled with {{ic|--asdeps}}.<br />
* Use {{ic|pacman -Qqem > foreignpkglist.txt}} to create the list of AUR and other foreign packages that have been explicitly installed.}}<br />
<br />
To keep an up-to-date list of explicitly installed packages (e.g. in combination with a versioned {{ic|/etc/}}), you can set up a [[Pacman#Hooks|hook]]. Example:<br />
<br />
[Trigger]<br />
Operation = Install<br />
Operation = Remove<br />
Type = Package<br />
Target = *<br />
<br />
[Action]<br />
When = PostTransaction<br />
Exec = /bin/sh -c '/usr/bin/pacman -Qqe > /etc/pkglist.txt'<br />
<br />
=== Install packages from a list ===<br />
<br />
To install packages from a previously saved list of packages, while not reinstalling previously installed packages that are already up-to-date, run:<br />
<br />
# pacman -S --needed - < pkglist.txt<br />
<br />
However, the list will likely also contain foreign packages, such as those from the AUR or installed locally. To filter the foreign packages out of the list, the previous command line can be extended as follows:<br />
<br />
# pacman -S --needed $(comm -12 <(pacman -Slq | sort) <(sort pkglist.txt))<br />
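<br />
To see what the {{ic|comm -12}} stage does, here is a toy illustration with made-up package names ({{ic|comm}} requires both inputs to be sorted): it keeps only the lines common to both files, i.e. packages that are both saved in the list and available from the sync repositories.<br />
<br />
```shell
# Simulated "pacman -Slq | sort" output: packages available in the repos
printf 'bash\ncoreutils\npacman\n' > available.txt
# Simulated saved list, containing one foreign (AUR-only) package
printf 'coreutils\nmy-aur-pkg\npacman\n' > pkglist.txt
# Keep only the packages present in both sorted lists
comm -12 available.txt pkglist.txt
```
<br />
This prints {{ic|coreutils}} and {{ic|pacman}}; {{ic|my-aur-pkg}} is dropped and must be handled separately, e.g. from the AUR.<br />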
<br />
Finally, to make sure the installed packages of your system match the list, remove all the packages that are not mentioned in it:<br />
<br />
# pacman -Rsu $(comm -23 <(pacman -Qq | sort) <(sort pkglist.txt))<br />
<br />
{{Tip|These tasks can be automated. See {{AUR|bacpac}}, {{AUR|packup}}, {{AUR|pacmanity}}, and {{AUR|pug}} for examples.}}<br />
<br />
=== Listing all changed files from packages ===<br />
<br />
If you suspect file corruption (e.g. caused by software or hardware failure) but are unsure which files were affected, you can compare the files on disk against the hash sums recorded in the packages. This can be done with {{Pkg|pacutils}}:<br />
<br />
# paccheck --md5sum --quiet<br />
<br />
For recovery of the database see [[#Restore pacman's local database]]. The {{ic|mtree}} files can also be [[#Viewing a single file inside a .pkg file|extracted as {{ic|.MTREE}} from the respective package files]].<br />
<br />
{{Note|This should '''not''' be used as is when suspecting malicious changes! In this case security precautions such as using a live medium and an independent source for the hash sums are advised.}}<br />
<br />
=== Reinstalling all packages ===<br />
<br />
To reinstall all native packages, use:<br />
<br />
# pacman -Qqn | pacman -S -<br />
<br />
Foreign (AUR) packages must be reinstalled separately; you can list them with {{ic|pacman -Qqm}}.<br />
<br />
Pacman preserves the [[installation reason]] by default.<br />
<br />
{{Warning|To force all packages to be overwritten, use {{ic|1=--overwrite=*}}, though this should be an absolute last resort. See [[System maintenance#Avoid certain pacman commands]].}}<br />
<br />
=== Restore pacman's local database ===<br />
<br />
See [[pacman/Restore local database]].<br />
<br />
=== Recovering a USB key from existing install ===<br />
<br />
If you have Arch installed on a USB key and manage to mess it up (e.g. by removing it while it is still being written to), it is possible to re-install all the packages and hopefully get it back up and working again (assuming the USB key is mounted at {{ic|/newarch}}):<br />
<br />
# pacman -S $(pacman -Qq --dbpath /newarch/var/lib/pacman) --root /newarch --dbpath /newarch/var/lib/pacman<br />
<br />
=== Viewing a single file inside a .pkg file ===<br />
<br />
For example, if you want to see the contents of {{ic|/etc/systemd/logind.conf}} supplied within the {{Pkg|systemd}} package:<br />
<br />
$ bsdtar -xOf /var/cache/pacman/pkg/systemd-204-3-x86_64.pkg.tar.xz etc/systemd/logind.conf<br />
<br />
Or you can use {{Pkg|vim}} to browse the archive:<br />
<br />
$ vim /var/cache/pacman/pkg/systemd-204-3-x86_64.pkg.tar.xz<br />
<br />
=== Find applications that use libraries from older packages ===<br />
<br />
Already running processes do not automatically notice changes caused by updates. Instead, they continue using the old library versions. That may be undesirable due to potential issues such as security vulnerabilities, other bugs, and version incompatibilities.<br />
<br />
Processes depending on updated libraries may be found using either {{Pkg|htop}}, which highlights the names of the affected programs, or with a snippet based on {{Pkg|lsof}}, which also prints the names of the libraries:<br />
<br />
# lsof +c 0 | grep -w DEL | awk '1 { print $1 ": " $NF }' | sort -u<br />
<br />
This solution will only detect files that are normally kept open by running processes, which essentially limits it to shared libraries ({{ic|.so}} files). It may miss other dependencies, such as those of Java or Python applications.<br />
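<br />
Once a stale process is identified, it can be mapped to its systemd unit and restarted. A sketch, where {{ic|1234}} stands for a PID reported by the snippet above:<br />
<br />
 $ ps -o unit= -p 1234<br />
 # systemctl restart ''unit_name''<br />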
<br />
=== Installing only content in required languages ===<br />
<br />
Many packages attempt to install documentation and translations in several languages. Some programs, such as {{AUR|localepurge}}, are designed to remove such unnecessary files; it runs after a package is installed to delete the unneeded locale files. A more direct approach is provided by the {{ic|NoExtract}} directive in {{ic|pacman.conf}}, which prevents these files from ever being installed.<br />
<br />
{{Warning|1=Some users noted that removing locales has resulted in [[Special:Permalink/460285#Dangerous NoExtract example|unintended consequences]], even under [https://bbs.archlinux.org/viewtopic.php?id=250846 Xorg].}}<br />
<br />
The example below extracts only English (US) files, or none at all:<br />
<br />
{{hc|/etc/pacman.conf|2=<br />
NoExtract = usr/share/help/* !usr/share/help/C/*<br />
NoExtract = usr/share/gtk-doc/html/*<br />
NoExtract = usr/share/locale/* usr/share/X11/locale/*/* usr/share/i18n/locales/* opt/google/chrome/locales/* !usr/share/X11/locale/C/*<br />
NoExtract = !*locale*/en*/* !usr/share/*locale*/locale.*<br />
NoExtract = !usr/share/*locales/en_?? !usr/share/*locales/i18n* !usr/share/*locales/iso*<br />
NoExtract = usr/share/i18n/charmaps/* !usr/share/i18n/charmaps/UTF-8.gz<br />
NoExtract = !usr/share/*locales/trans*<br />
NoExtract = usr/share/man/* !usr/share/man/man*<br />
NoExtract = usr/share/vim/vim*/lang/*<br />
NoExtract = usr/lib/libreoffice/help/en-US/*<br />
NoExtract = usr/share/kbd/locale/*<br />
NoExtract = usr/share/*/translations/*.qm usr/share/*/nls/*.qm usr/share/qt/translations/*.pak !*/en-US.pak # Qt apps<br />
NoExtract = usr/share/*/locales/*.pak opt/*/locales/*.pak usr/lib/*/locales/*.pak !*/en-US.pak # Electron apps<br />
NoExtract = opt/onlyoffice/desktopeditors/dictionaries/* !opt/onlyoffice/desktopeditors/dictionaries/en_US/*<br />
NoExtract = opt/onlyoffice/desktopeditors/editors/web-apps/apps/*/main/locale/* !*/en.json<br />
NoExtract = opt/onlyoffice/desktopeditors/editors/web-apps/apps/*/main/resources/help/* !*/help/en/*<br />
NoExtract = opt/onlyoffice/desktopeditors/converter/empty/*/*<br />
NoExtract = usr/share/ibus/dicts/emoji-*.dict !usr/share/ibus/dicts/emoji-en.dict<br />
}}<br />
<br />
=== Installing packages on bad connection ===<br />
<br />
When trying to install a package over a bad connection (e.g. on a train, using a cell phone), use the {{ic|--disable-download-timeout}} option to lessen the chance of receiving errors such as:<br />
<br />
error: failed retrieving file […] Operation too slow. Less than 1 bytes/sec transferred the last 10 seconds<br />
<br />
or<br />
<br />
error: failed retrieving file […] Operation timed out after 10014 milliseconds with 0 out of 0 bytes received<br />
<br />
== Performance ==<br />
<br />
=== Download speeds ===<br />
<br />
When downloading packages, ''pacman'' uses the mirrors in the order they appear in {{ic|/etc/pacman.d/mirrorlist}}. However, the mirror at the top of the list by default may not be the fastest for you. To select a faster mirror, see [[Mirrors]].<br />
<br />
Pacman's speed in downloading packages can also be improved by using a different application to download packages, instead of ''pacman''<nowiki/>'s built-in file downloader, or by [[pacman#Enabling parallel downloads|enabling parallel downloads]].<br />
<br />
In all cases, make sure you have the latest ''pacman'' before doing any modifications.<br />
<br />
# pacman -Syu<br />
<br />
==== Powerpill ====<br />
<br />
[[Powerpill]] is a ''pacman'' wrapper that uses parallel and segmented downloading to try to speed up downloads for ''pacman''.<br />
<br />
==== wget ====<br />
<br />
This is also very handy if you need more powerful proxy settings than ''pacman''<nowiki/>'s built-in capabilities offer.<br />
<br />
To use {{ic|wget}}, first [[install]] the {{Pkg|wget}} package then modify {{ic|/etc/pacman.conf}} by uncommenting the following line in the {{ic|[options]}} section:<br />
<br />
XferCommand = /usr/bin/wget --passive-ftp --show-progress -c -q -N %u<br />
<br />
Instead of uncommenting the {{ic|wget}} parameters in {{ic|/etc/pacman.conf}}, you can also modify the {{ic|wget}} configuration file directly (the system-wide file is {{ic|/etc/wgetrc}}, per user files are {{ic|$HOME/.wgetrc}}).<br />
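<br />
For example, proxy settings could live in wget's configuration instead; the proxy address below is purely hypothetical:<br />
<br />
{{hc|/etc/wgetrc|2=<br />
use_proxy = on<br />
http_proxy = http://proxy.example.com:3128/<br />
https_proxy = http://proxy.example.com:3128/<br />
}}<br />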
<br />
==== aria2 ====<br />
<br />
[[aria2]] is a lightweight download utility with support for resumable and segmented HTTP/HTTPS and FTP downloads. It allows multiple simultaneous HTTP/HTTPS and FTP connections to an Arch mirror, which should result in increased download speeds for both file and package retrieval.<br />
<br />
{{Note|Using aria2c in ''pacman''<nowiki/>'s XferCommand will '''not''' result in parallel downloads of multiple packages. Pacman invokes the XferCommand with a single package at a time and waits for it to complete before invoking the next. To download multiple packages in parallel, see [[Powerpill]].}}<br />
<br />
Install {{Pkg|aria2}}, then edit {{ic|/etc/pacman.conf}} by adding the following line to the {{ic|[options]}} section:<br />
<br />
XferCommand = /usr/bin/aria2c --allow-overwrite=true --continue=true --file-allocation=none --log-level=error --max-tries=2 --max-connection-per-server=2 --max-file-not-found=5 --min-split-size=5M --no-conf --remote-time=true --summary-interval=60 --timeout=5 --dir=/ --out %o %u<br />
<br />
{{Tip|1=[https://bbs.archlinux.org/viewtopic.php?pid=1491879#p1491879 This alternative configuration for using pacman with aria2] tries to simplify configuration and adds more configuration options.}}<br />
<br />
See {{man|1|aria2c|OPTIONS}} for used aria2c options.<br />
<br />
* {{ic|-d, --dir}}: The directory to store the downloaded file(s) as specified by ''pacman''.<br />
* {{ic|-o, --out}}: The output file name(s) of the downloaded file(s). <br />
* {{ic|%o}}: Variable which represents the local filename(s) as specified by ''pacman''.<br />
* {{ic|%u}}: Variable which represents the download URL as specified by ''pacman''.<br />
<br />
==== Other applications ====<br />
<br />
There are other download applications that you can use with ''pacman''. Here they are, with their associated {{ic|XferCommand}} settings:<br />
<br />
* {{ic|snarf}}: {{ic|1=XferCommand = /usr/bin/snarf -N %u}}<br />
* {{ic|lftp}}: {{ic|1=XferCommand = /usr/bin/lftp -c pget %u}}<br />
* {{ic|axel}}: {{ic|1=XferCommand = /usr/bin/axel -n 2 -v -a -o %o %u}}<br />
* {{ic|hget}}: {{ic|1=XferCommand = /usr/bin/hget %u -n 2 -skip-tls false}} (please read the [https://github.com/huydx/hget documentation on the Github project page] for more info)<br />
* {{ic|saldl}}: {{ic|1=XferCommand = /usr/bin/saldl -c6 -l4 -s2m -o %o %u}} (please read the [https://saldl.github.io documentation on the project page] for more info)<br />
<br />
== Utilities ==<br />
<br />
* {{App|Lostfiles|Script that identifies files not owned by any package.|https://github.com/graysky2/lostfiles|{{Pkg|lostfiles}}}}<br />
* {{App|pacutils|Helper library for libalpm based programs.|https://github.com/andrewgregory/pacutils|{{Pkg|pacutils}}}}<br />
* {{App|[[pkgfile]]|Tool that finds what package owns a file.|https://github.com/falconindy/pkgfile|{{Pkg|pkgfile}}}}<br />
* {{App|pkgtools|Collection of scripts for Arch Linux packages.|https://github.com/Daenyth/pkgtools|{{AUR|pkgtools}}}}<br />
* {{App|pkgtop|Interactive package manager and resource monitor designed for GNU/Linux.|https://github.com/orhun/pkgtop|{{AUR|pkgtop-git}}}}<br />
* {{App|[[Powerpill]]|Uses parallel and segmented downloading through [[aria2]] and [[Reflector]] to try to speed up downloads for ''pacman''.|https://xyne.dev/projects/powerpill/|{{AUR|powerpill}}}}<br />
* {{App|repoctl|Tool to help manage local repositories.|https://github.com/cassava/repoctl|{{AUR|repoctl}}}}<br />
* {{App|repose|An Arch Linux repository building tool.|https://github.com/vodik/repose|{{Pkg|repose}}}}<br />
* {{App|[[Snapper#Wrapping_pacman_transactions_in_snapshots|snap-pac]]|Make ''pacman'' automatically use snapper to create pre/post snapshots like openSUSE's YaST.|https://github.com/wesbarnett/snap-pac|{{Pkg|snap-pac}}}}<br />
* {{App|vrms-arch|A virtual Richard M. Stallman to tell you which non-free packages are installed.|https://github.com/orospakr/vrms-arch|{{AUR|vrms-arch-git}}}}<br />
<br />
=== Graphical ===<br />
<br />
{{Warning|PackageKit opens up system permissions by default, and is otherwise not recommended for general usage. See {{Bug|50459}} and {{Bug|57943}}.}}<br />
<br />
* {{App|Apper|Qt 5 application and package manager using PackageKit written in C++. Supports [https://www.freedesktop.org/wiki/Distributions/AppStream/ AppStream metadata].|https://userbase.kde.org/Apper|{{Pkg|apper}}}}<br />
* {{App|Deepin App Store|Third party app store for DDE built with DTK, using PackageKit. Supports [https://www.freedesktop.org/wiki/Distributions/AppStream/ AppStream metadata].|https://github.com/dekzi/dde-store|{{Pkg|deepin-store}}}}<br />
* {{App|Discover|Qt 5 application manager using PackageKit written in C++/QML. Supports [https://www.freedesktop.org/wiki/Distributions/AppStream/ AppStream metadata], [[Flatpak]] and [[fwupd|firmware updates]]. |https://userbase.kde.org/Discover|{{Pkg|discover}}}}<br />
* {{App|GNOME PackageKit|GTK 3 package manager using PackageKit written in C.|https://freedesktop.org/software/PackageKit/|{{Pkg|gnome-packagekit}}}}<br />
* {{App|GNOME Software|GTK 3 application manager using PackageKit written in C. Supports [https://www.freedesktop.org/wiki/Distributions/AppStream/ AppStream metadata], [[Flatpak]] and [[fwupd|firmware updates]]. |https://wiki.gnome.org/Apps/Software|{{Pkg|gnome-software}}}}<br />
* {{App|pcurses|Curses TUI ''pacman'' wrapper written in C++.|https://github.com/schuay/pcurses|{{AUR|pcurses}}}}<br />
* {{App|tkPacman|Tk pacman wrapper written in Tcl.|https://sourceforge.net/projects/tkpacman|{{AUR|tkpacman}}}}</div>
<hr />
<div>== Leading slash ==<br />
<br />
[[Pacman/Tips_and_tricks#aria2]] doesn't work without leading slash, i.e. {{ic|-d /}} turning file names to {{ic|//var/cache/...}}. The article mentions this, but it doesn't mention why. -- [[User:Alad|Alad]] ([[User talk:Alad|talk]]) 05:28, 16 October 2015 (UTC)<br />
<br />
:You would have to go [https://wiki.archlinux.org/index.php?title=Improve_pacman_performance&diff=32104&oldid=30674 way] [https://wiki.archlinux.org/index.php?title=Improve_pacman_performance&diff=next&oldid=115292 back] to track this. It seems to have worked without {{ic|-d /}} even in 2006: [https://wiki.archlinux.org/index.php?title=Faster_Pacman_Downloads&oldid=15627], [https://wiki.archlinux.org/index.php?title=Improve_pacman_performance&oldid=17759]. <s>I guess that simply nobody asked the right question...</s> -- [[User:Lahwaacz|Lahwaacz]] ([[User talk:Lahwaacz|talk]]) 12:30, 16 October 2015 (UTC)<br />
:Oops, it does ''not'' work without {{ic|-d /}}. Then the problem must be on aria's side, which expects a file name for the {{ic|-o}} option, which is then catenated with {{ic|-d}} into the full path. Assuming that {{ic|-d}} defaults to the cwd, {{ic|/var/cache/}} would appear twice in the result. -- [[User:Lahwaacz|Lahwaacz]] ([[User talk:Lahwaacz|talk]]) 12:43, 16 October 2015 (UTC)<br />
<br />
== pacman cache ==<br />
<br />
I still think we should warn people not to symlink /var or anything under it. It leaves the whole system unusable because if the cache disappears during a pacman transaction, you're left with missing /usr/lib libraries, and nothing works, including pacman itself. This is a serious enough problem that it can take hours to figure out how to recover. If the wiki had mentioned this problem it would have saved me a lot of time and effort, and I'm not the only one who has run in to this. It is not, however, considered a bug. See https://bugs.archlinux.org/task/50298. [[User:JimRees|JimRees]] ([[User talk:JimRees|talk]]) 23:15, 29 April 2017 (UTC)<br />
<br />
:This revisions says that: [https://wiki.archlinux.org/index.php?title=Pacman%2FTips_and_tricks&type=revision&diff=475454&oldid=475438]. But to make it more clear: [https://wiki.archlinux.org/index.php?title=Pacman%2FTips_and_tricks&type=revision&diff=475492&oldid=475482] -- [[User:Rdeckard|Rdeckard]] ([[User_talk:Rdeckard|talk]]) 00:13, 30 April 2017 (UTC)<br />
<br />
::Actually, I undid my change since I think that first change is more accurate (mentioning {{ic|/var/cache/pacman/pkg}} and ancestors), so I went back to that but explicitly mentioned {{ic|/var}} as an example. -- [[User:Rdeckard|Rdeckard]] ([[User_talk:Rdeckard|talk]]) 01:11, 30 April 2017 (UTC)<br />
<br />
: Thanks for the background information. I was not aware of the bug report and now clearly understand why you altered the section the way you did. I hope the [https://wiki.archlinux.org/index.php?title=Pacman/Tips_and_tricks&diff=475548&oldid=475495 recent change] is sufficient for you. Since every misbehaving program might leave a system unbootable if it plays a role in the boot process, it should be unnecessary to add this redundant information. However the problem you described is still severe and I hope you agree that the recent edits made to the article do the topic justice. Thanks for clarifying the topic and adding this to the article and sorry for reverting your edits at first. -- [[User:Edh|Edh]] ([[User talk:Edh|talk]]) 21:07, 30 April 2017 (UTC)<br />
<br />
== local repository database extension/compression recommendation ==<br />
<br />
If you opt not to compress a pacman database, the files database can become very large (10x larger than a gzipped one in my case), which causes issues when trying to update the local pacman files database ({{ic|pacman -Fy}}), since apparently there is a maximum (expected) size. Should we include a warning about uncompressed databases?<br />
<br />
{{unsigned|00:35, 29 January 2019|JoshH100}}<br />
<br />
== Use a new nginx.conf for [[Pacman/Tips_and_tricks#Dynamic_reverse_proxy_cache_using_nginx|Dynamic reverse proxy cache using nginx]] ==<br />
<br />
I propose to replace the [https://gist.github.com/anonymous/97ec4148f643de925e433bed3dc7ee7d current nginx.conf] with an [https://github.com/nastasie-octavian/nginx_pacman_cache_config/blob/master/nginx.conf improved nginx.conf] and update the section. The new config doesn't make the upstream servers directly available on the network and it allows having mirrors with different relative paths to package files. It also removes directives that are not needed and has some other minor cleanups. I've been using a similar config for a few months now without any problems, so I believe it should be fine. [[User:Noctavian|Noctavian]] ([[User talk:Noctavian|talk]]) 16:05, 28 February 2019 (UTC)<br />
<br />
:What do you mean by "The new config doesn't make the upstream servers directly available on the network"? -- [[User:Lahwaacz|Lahwaacz]] ([[User talk:Lahwaacz|talk]]) 20:54, 28 February 2019 (UTC)<br />
<br />
:: In the new config the server blocks for the upstream mirrors are set to listen on 127.0.0.1:800X. Only the computer that is running the nginx cache can send requests to 127.0.0.1; other computers on the network can't. The current config exposes the upstream mirrors to the network: an nmap scan will show the 8080 port of the cache as open, and the ports 8001, 8002, 8003 of the upstream mirrors as open. One can browse to cache.domain.example:8002 and have direct access to whatever package mirror website is used by the cache, bypassing the cache config order and locations. The upstream mirrors don't need to be available to the entire network for the cache to work; they only need to be available to the computer that is hosting the nginx cache. I believe ports should not be left open on the network if they don't have to be open. [[User:Noctavian|Noctavian]] ([[User talk:Noctavian|talk]]) 08:37, 1 March 2019 (UTC)<br />
<br />
: I have written a draft for the section update on my [[User:Noctavian|user page]]. I made some small changes to the config file since last week, added comments and mirror examples, and turned off IPv6 address resolution to prevent some errors that can happen sometimes. Suggestions are welcome. I haven't seen objections to my proposal, so I'm going to wait a few more days for feedback and then update the section on the main page with my draft and the new nginx.conf file if that's ok. [[User:Noctavian|Noctavian]] ([[User talk:Noctavian|talk]]) 11:43, 8 March 2019 (UTC)<br />
<br />
::Feel free to go ahead. -- [[User:Lahwaacz|Lahwaacz]] ([[User talk:Lahwaacz|talk]]) 17:11, 16 March 2019 (UTC)<br />
<br />
== Definition of virtual package ==<br />
Regarding undoing revision 624446. I have found such a term in the [[PKGBUILD]] article. I was specifically thinking about the opencl-driver virtual package, but I did not mention that, to be more generic. [[User:Lahwaacz|Lahwaacz]], what do you think? Maybe it is better to introduce the ''virtual package'' term in the [[Pacman]] article, and reapply my edit? [[User:Ashark|Ashark]] ([[User talk:Ashark|talk]]) 20:54, 9 July 2020 (UTC)<br />
<br />
:Yes, it should have a proper definition instead of "definition" by example. -- [[User:Lahwaacz|Lahwaacz]] ([[User talk:Lahwaacz|talk]]) 06:57, 10 July 2020 (UTC)<br />
<br />
::Please, take a look: [[Pacman#Virtual_packages]]. [[User:Ashark|Ashark]] ([[User talk:Ashark|talk]]) 15:12, 11 July 2020 (UTC)<br />
<br />
:::It's a good start, thanks. I made a small change: [https://wiki.archlinux.org/index.php?title=Pacman&diff=624921&oldid=624800] -- [[User:Lahwaacz|Lahwaacz]] ([[User talk:Lahwaacz|talk]]) 17:46, 12 July 2020 (UTC)<br />
<br />
== Warning about listing changed backup files ==<br />
<br />
I understand it's obvious that the command doesn't track files not owned by any package. But the introduction to that part is about backing up your /etc folder. Hence it is inaccurate that this command suffices to list the modified files in /etc, because you (or a package from the AUR) might have created some new files. Shouldn't there be a mention of this inaccuracy?<br />
[[User:Apollo22|Apollo22]] ([[User talk:Apollo22|talk]]) 18:44, 23 October 2020 (UTC)<br />
<br />
:No, it is about [[PKGBUILD#backup|backup files specified in the PKGBUILD]]. It is obvious from the definition that such files must be tracked by pacman. -- [[User:Lahwaacz|Lahwaacz]] ([[User talk:Lahwaacz|talk]]) 18:55, 23 October 2020 (UTC)<br />
<br />
::So it should be considered a bug if those folders are not in the PKGBUILD backup? There seem to be some important packages that do not list those files correctly (for example, systemd doesn't list /etc/systemd/system, or pacman /etc/pacman.d/hooks). This seems like a risky method to back up your /etc folder.<br />
::[[User:Apollo22|Apollo22]] ([[User talk:Apollo22|talk]]) 19:28, 23 October 2020 (UTC)<br />
<br />
:::It's not a bug - the {{ic|backup}} array makes sense only for files owned by the package. You are completely missing the point of the {{ic|backup}} field - it has nothing to do with [[System backup]]. Please read [[PKGBUILD#backup]]. -- [[User:Lahwaacz|Lahwaacz]] ([[User talk:Lahwaacz|talk]]) 19:34, 23 October 2020 (UTC)<br />
<br />
::::Ok, I think I better understand what the backup in the PKGBUILD means. But when the part begins with `If you want to back up your system configuration files`, shouldn't it be expected that the part explains a method to back up your system configuration files, not just the ones specified in the backup fields of the PKGBUILD?<br />
::::[[User:Apollo22|Apollo22]] ([[User talk:Apollo22|talk]])<br />
<br />
:::::I've added an accuracy flag, maybe somebody can clear up the introduction. -- [[User:Lahwaacz|Lahwaacz]] ([[User talk:Lahwaacz|talk]]) 07:55, 25 October 2020 (UTC)<br />
<br />
::::::How about the following sentence: If you want to transfer your system configuration files to another installation or keep a copy of them, you could grab all files in {{ic|/etc/}}, but most of the time only the files that you have changed are of interest. Pacman names them [[Pacnew and Pacsave files#Package backup files|backup files]] and the modified ones can be viewed with the following command: [[User:Erus Iluvatar|Erus Iluvatar]] ([[User talk:Erus Iluvatar|talk]]) 11:16, 17 December 2021 (UTC)<br />
<br />
== Parallel download natively supported in pacman version 6 ==<br />
That should be definetly mentioned somewhere in the performance section. So powerpill doesn't really seem necessary anymore. Maybe it still does some things different/better, but I'd still mention that you can get parallel downloads even without it now.<br />
{{Unsigned|08:57, 1 June 2021 (UTC)|Elimik31}}<br />
<br />
== Remove uninstalled packages from the cache with paccache ==<br />
<br />
Just figured the following out, noting here since I'm not sure it's worth noting in the page itself: {{ic|paccache}} won't remove uninstalled packages, even with {{ic|-u}}, unless {{ic|-k}} is given a value lower than the number of instances in cache. In particular, oneshot AUR experiments won't be removed without {{ic|-k0}}.<br />
[[User:Gesh|Gesh]] ([[User talk:Gesh|talk]]) 01:23, 1 November 2021 (UTC)<br />
<br />
:The {{ic|-u}} flag just adds all installed packages to the blacklist. So to remove uninstalled packages from the cache and nothing else, use {{ic|-uk0}}. — [[User:Lahwaacz|Lahwaacz]] ([[User talk:Lahwaacz|talk]]) 06:28, 1 November 2021 (UTC)<br />
<br />
== Additional options needed for mounting overlay of remote pacman pkg cache ==<br />
<br />
"The factual accuracy of this article or section is disputed. Reason: Why is -o index=off -o metacopy=off needed? Is -o redirect_dir=off needed only for this use-case? If not, it should be explained on the overlay filesystem page too."<br />
<br />
My reply: I'm not an expert on use of sshfs for mounting remote filesystems. To me, using sshfs is much simpler than going to all the trouble of setting up the chosen box to serve up {{ic|/var/cache/pacman/pkg}} over, e.g., NFS. I wonder if using fuse.sshfs leads to this problem? That being said, my setup is bog standard, all boxes using ''linux'' kernel from Arch, connected to the same LAN switch with normal IPv4 addressing, e.g., 192.168.1.100, and so on.<br />
<br />
Without using any additional options the problem I encounter is an unusual but familiar one, namely:<br />
$ ls /tmp/pacman_pkg/ > /dev/null<br />
ls: reading directory '/tmp/pacman_pkg/': Stale file handle<br />
<br />
This is obviously unworkable. The minimal options I am able to succeed with are: {{ic|1=-o redirect_dir=off -o index=off}}. Prior to the 5.17 kernel series, I needed {{ic|1=-o index=off -o metacopy=off}} but in my use something then changed which requires {{ic|1=-o redirect_dir=off}} (and {{ic|1=-o metacopy=off}} now comes along for free). Without these options... "Stale file handle".<br />
<br />
I am eliminating additional options from the generic commands even though I expect that anyone who tries to run them as listed will fail due to the file handle reporting as stale. I will add a tip note suggesting these options if problems are encountered. HTH :) [[User:Cmsigler|Cmsigler]] ([[User talk:Cmsigler|talk]]) 16:21, 27 April 2022 (UTC)</div>
Cmsigler
https://wiki.archlinux.org/index.php?title=Pacman/Tips_and_tricks&diff=727763
Pacman/Tips and tricks
2022-04-27T16:38:00Z
<p>Cmsigler: Editing overlay remote pkg cache method, simplifying and adding tip for stale file handle errors</p>
<hr />
<div>{{Lowercase title}}<br />
[[Category:Package manager]]<br />
[[de:Pacman-Tipps]]<br />
[[es:Pacman (Español)/Tips and tricks]]<br />
[[fa:Pacman tips]]<br />
[[fr:Pacman (Français)/Tips and tricks]]<br />
[[ja:Pacman ヒント]]<br />
[[pt:Pacman (Português)/Tips and tricks]]<br />
[[ru:Pacman (Русский)/Tips and tricks]]<br />
[[zh-hans:Pacman (简体中文)/Tips and tricks]]<br />
{{Related articles start}}<br />
{{Related|Mirrors}}<br />
{{Related|Creating packages}}<br />
{{Related articles end}}<br />
For general methods to improve the flexibility of the provided tips or ''pacman'' itself, see [[Core utilities]] and [[Bash]].<br />
<br />
== Maintenance ==<br />
<br />
{{Expansion|{{ic|1=Usage=}} introduced with ''pacman'' 4.2, see [http://allanmcrae.com/2014/12/pacman-4-2-released/]}}<br />
<br />
{{Note|Instead of using ''comm'' (which requires sorted input with ''sort'') in the sections below, you may also use {{ic|grep -Fxf}} or {{ic|grep -Fxvf}}.}}<br />
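As a quick sanity check of that equivalence, the following throwaway example subtracts one sorted list from another with both tools and gets the same result:<br />

```shell
# Two sorted lists: everything, and a blacklist to subtract from it.
printf '%s\n' bash linux pacman vim | sort > /tmp/all.txt
printf '%s\n' bash vim | sort > /tmp/blacklist.txt

# Lines present in all.txt but absent from blacklist.txt;
# both commands print the same two lines (linux, pacman):
comm -23 /tmp/all.txt /tmp/blacklist.txt
grep -Fxvf /tmp/blacklist.txt /tmp/all.txt
```

Note that {{ic|grep -Fxvf}} does not require its input to be sorted, which is why it can stand in for {{ic|comm}} in the recipes below.<br />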
<br />
See also [[System maintenance]].<br />
<br />
=== Listing packages ===<br />
<br />
==== With version ====<br />
<br />
You may want to get the list of installed packages with their version, which is useful when reporting bugs or discussing installed packages.<br />
<br />
* List all explicitly installed packages: {{ic|pacman -Qe}}.<br />
* List all packages in the [[package group]] named {{ic|''group''}}: {{ic|pacman -Sg ''group''}}<br />
* List all foreign packages (typically manually downloaded and installed or packages removed from the repositories): {{ic|pacman -Qm}}.<br />
* List all native packages (installed from the sync database): {{ic|pacman -Qn}}.<br />
* List all explicitly installed native packages (i.e. present in the sync database) that are not direct or optional dependencies: {{ic|pacman -Qent}}.<br />
* List packages by regex: {{ic|pacman -Qs ''regex''}}.<br />
* List packages by regex with custom output format (needs {{Pkg|expac}}): {{ic|expac -s "%-30n %v" ''regex''}}.<br />
<br />
==== With size ====<br />
<br />
Figuring out which packages are largest can be useful when trying to free space on your hard drive. There are two options here: get the size of individual packages, or get the size of packages and their dependencies.<br />
<br />
===== Individual packages =====<br />
<br />
The following command will list all installed packages and their individual sizes:<br />
<br />
$ LC_ALL=C pacman -Qi | awk '/^Name/{name=$3} /^Installed Size/{print $4$5, name}' | sort -h<br />
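To see what the ''awk'' stage does, it can be fed a fabricated two-package snippet of {{ic|pacman -Qi}} output (the field positions assume the untranslated output forced by {{ic|1=LC_ALL=C}}; the package data below is made up for illustration):<br />

```shell
# Fabricated sample of `pacman -Qi` output; the awk stage pairs each
# "Name" with the following "Installed Size", then sort -h orders the
# human-readable sizes numerically.
printf '%s\n' \
  'Name            : bash' \
  'Installed Size  : 8.69 MiB' \
  '' \
  'Name            : linux' \
  'Installed Size  : 171.76 MiB' |
awk '/^Name/{name=$3} /^Installed Size/{print $4$5, name}' | sort -h
```
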
<br />
===== Packages and dependencies =====<br />
<br />
To list package sizes with their dependencies,<br />
<br />
* Install {{Pkg|expac}} and run {{ic|expac -H M '%m\t%n' {{!}} sort -h}}.<br />
* Run {{Pkg|pacgraph}} with the {{ic|-c}} option.<br />
<br />
To list the download size of several packages (leave {{ic|''packages''}} blank to list all packages):<br />
<br />
$ expac -S -H M '%k\t%n' ''packages''<br />
<br />
To list explicitly installed packages not in the [[meta package]] {{Pkg|base}} nor [[package group]] {{Grp|base-devel}} with size and description:<br />
<br />
$ expac -H M "%011m\t%-20n\t%10d" $(comm -23 <(pacman -Qqen | sort) <({ pacman -Qqg base-devel; expac -l '\n' '%E' base; } | sort -u)) | sort -n<br />
<br />
To list the packages marked for upgrade with their download size<br />
<br />
$ expac -S -H M '%k\t%n' $(pacman -Qqu) | sort -sh<br />
<br />
==== By date ====<br />
<br />
To list the 20 last installed packages with {{Pkg|expac}}, run:<br />
<br />
$ expac --timefmt='%Y-%m-%d %T' '%l\t%n' | sort | tail -n 20<br />
<br />
or, with seconds since the epoch (1970-01-01 UTC):<br />
<br />
$ expac --timefmt=%s '%l\t%n' | sort -n | tail -n 20<br />
<br />
==== Not in a specified group, repository or meta package ====<br />
<br />
{{Note|To get a list of packages installed as dependencies but no longer required by any installed package, see [[#Removing unused packages (orphans)]].<br />
}}<br />
<br />
List explicitly installed packages not in the {{Pkg|base}} [[meta package]]:<br />
<br />
$ comm -23 <(pacman -Qqe | sort) <(expac -l '\n' '%E' base | sort)<br />
<br />
List explicitly installed packages not in the {{Pkg|base}} meta package or {{Grp|base-devel}} [[package group]]:<br />
<br />
$ comm -23 <(pacman -Qqe | sort) <({ pacman -Qqg base-devel; expac -l '\n' '%E' base; } | sort -u)<br />
<br />
List all installed packages unrequired by other packages, and which are not in the {{Pkg|base}} meta package or {{Grp|base-devel}} package group:<br />
<br />
$ comm -23 <(pacman -Qqt | sort) <({ pacman -Qqg base-devel; echo base; } | sort -u)<br />
<br />
As above, but with descriptions:<br />
<br />
$ expac -H M '%-20n\t%10d' $(comm -23 <(pacman -Qqt | sort) <({ pacman -Qqg base-devel; echo base; } | sort -u))<br />
<br />
List all installed packages that are ''not'' in the specified repository ''repo_name''<br />
<br />
$ comm -23 <(pacman -Qq | sort) <(pacman -Sql ''repo_name'' | sort)<br />
<br />
List all installed packages that are in the ''repo_name'' repository:<br />
<br />
$ comm -12 <(pacman -Qq | sort) <(pacman -Sql ''repo_name'' | sort)<br />
<br />
List all packages on the Arch Linux ISO that are not in the {{Pkg|base}} meta package:<br />
<br />
<nowiki>$ comm -23 <(curl https://gitlab.archlinux.org/archlinux/archiso/-/raw/master/configs/releng/packages.x86_64) <(expac -l '\n' '%E' base | sort)</nowiki><br />
<br />
{{Tip|Alternatively, use {{ic|combine}} (instead of {{ic|comm}}) from the {{Pkg|moreutils}} package which has a syntax that is easier to remember. See {{man|1|combine}}.}}<br />
<br />
==== Development packages ====<br />
<br />
To list all development/unstable packages, run:<br />
<br />
$ pacman -Qq | grep -Ee '-(bzr|cvs|darcs|git|hg|svn)$'<br />
<br />
=== Browsing packages ===<br />
<br />
To browse all installed packages with an instant preview of each package:<br />
<br />
$ pacman -Qq | fzf --preview 'pacman -Qil {}' --layout=reverse --bind 'enter:execute(pacman -Qil {} | less)'<br />
<br />
This uses [[fzf]] to present a two-pane view listing all packages with package info shown on the right.<br />
<br />
Enter letters to filter the list of packages; use arrow keys (or {{ic|Ctrl-j}}/{{ic|Ctrl-k}}) to navigate; press {{ic|Enter}} to see package info under ''less''.<br />
<br />
To browse all packages currently known to ''pacman'' (both installed and not yet installed) in a similar way, using fzf, use:<br />
<br />
$ pacman -Slq | fzf --preview 'pacman -Si {}' --layout=reverse<br />
<br />
The navigational keybindings are the same, although {{ic|Enter}} will not work in the same way.<br />
<br />
=== Listing files owned by a package with size ===<br />
<br />
This one might come in handy if you have found that a specific package uses a huge amount of space and you want to find out which files make up the most of that.<br />
<br />
$ pacman -Qlq ''package'' | grep -v '/$' | xargs -r du -h | sort -h<br />
<br />
=== Identify files not owned by any package ===<br />
<br />
If your system has stray files not owned by any package (a common case if you do not [[Enhance system stability#Use the package manager to install software|use the package manager to install software]]), you may want to find such files in order to clean them up.<br />
<br />
One method is to use {{ic|pacreport --unowned-files}} from {{Pkg|pacutils}} as the root user, which will list unowned files among other details.<br />
<br />
Another is to list all files of interest and check them against ''pacman'':<br />
<br />
# find /etc /usr /opt | LC_ALL=C pacman -Qqo - 2>&1 >&- >/dev/null | cut -d ' ' -f 5-<br />
<br />
{{Tip|The {{Pkg|lostfiles}} script performs similar steps, but also includes an extensive blacklist to remove common false positives from the output.}}<br />
<br />
=== Tracking unowned files created by packages ===<br />
<br />
Most systems will slowly collect several [http://ftp.rpm.org/max-rpm/s1-rpm-inside-files-list-directives.html#S3-RPM-INSIDE-FLIST-GHOST-DIRECTIVE ghost] files such as state files, logs, indexes, etc. through the course of usual operation.<br />
<br />
{{ic|pacreport}} from {{Pkg|pacutils}} can be used to track these files and their associations via {{ic|/etc/pacreport.conf}} (see {{man|1|pacreport|FILES}}).<br />
<br />
An example may look something like this (abridged):<br />
<br />
{{hc|/etc/pacreport.conf|2=<br />
[Options]<br />
IgnoreUnowned = usr/share/applications/mimeinfo.cache<br />
<br />
[PkgIgnoreUnowned]<br />
alsa-utils = var/lib/alsa/asound.state<br />
bluez = var/lib/bluetooth<br />
ca-certificates = etc/ca-certificates/trust-source/*<br />
dbus = var/lib/dbus/machine-id<br />
glibc = etc/ld.so.cache<br />
grub = boot/grub/*<br />
linux = boot/initramfs-linux.img<br />
pacman = var/lib/pacman/local<br />
update-mime-database = usr/share/mime/magic<br />
}}<br />
<br />
Then, when using {{ic|pacreport --unowned-files}} as the root user, any unowned files will be listed if the associated package is no longer installed (or if any new files have been created).<br />
<br />
Additionally, [https://github.com/CyberShadow/aconfmgr aconfmgr] ({{AUR|aconfmgr-git}}) allows tracking modified and orphaned files using a configuration script.<br />
<br />
=== Removing unused packages (orphans) ===<br />
<br />
For recursively removing orphans and their configuration files:<br />
<br />
# pacman -Qtdq | pacman -Rns -<br />
<br />
If no orphans were found, the output is {{ic|error: argument '-' specified with empty stdin}}. This is expected as no arguments were passed to {{ic|pacman -Rns}}.<br />
<br />
{{Note|The argument {{ic|-Qt}} lists only true orphans. To include packages which are ''optionally'' required by another package, pass the {{ic|-t}} flag twice (''i.e.'', {{ic|-Qtt}}).}}<br />
<br />
=== Removing everything but essential packages ===<br />
<br />
If it is ever necessary to remove all packages except the essential ones, one method is to mark every non-essential package as installed as a dependency and then remove all unnecessary dependencies.<br />
<br />
First, for all the packages "explicitly installed", change their installation reason to "installed as a dependency":<br />
<br />
# pacman -D --asdeps $(pacman -Qqe)<br />
<br />
Then, change the installation reason of only the essential packages, those you '''do not''' want to remove, back to "explicitly installed" in order to avoid targeting them:<br />
<br />
# pacman -D --asexplicit base linux linux-firmware<br />
<br />
{{Note|<br />
* Additional packages can be added to the above command in order to avoid being removed. See [[Installation guide#Install essential packages]] for more info on other packages that may be necessary for a fully functional base system.<br />
* This will also select the bootloader's package for removal. The system should still be bootable, but the boot parameters might not be changeable without it.<br />
}}<br />
<br />
Finally, follow the instructions in [[#Removing unused packages (orphans)]] to remove all packages that are "installed as a dependency".<br />
<br />
=== Getting the dependencies list of several packages ===<br />
<br />
Dependencies are alphabetically sorted and duplicates are removed.<br />
<br />
{{Note|To only show the tree of local installed packages, use {{ic|pacman -Qi}}.}}<br />
<br />
$ LC_ALL=C pacman -Si ''packages'' | awk -F'[:<=>]' '/^Depends/ {print $2}' | xargs -n1 | sort -u<br />
<br />
Alternatively, with {{Pkg|expac}}: <br />
<br />
$ expac -l '\n' %E -S ''packages'' | sort -u<br />
<br />
=== Listing changed backup files ===<br />
<br />
{{Accuracy|What is the connection of this section to [[System backup]]? Listing modified "backup files" does not show files which are not tracked by ''pacman''.|section=Warning about listing changed backup files}}<br />
<br />
If you want to back up your system configuration files, you could copy all files in {{ic|/etc/}} but usually you are only interested in the files that you have changed. Modified [[Pacnew and Pacsave files#Package backup files|backup files]] can be viewed with the following command:<br />
<br />
# pacman -Qii | awk '/^MODIFIED/ {print $2}'<br />
<br />
Running this command with root permissions will ensure that files readable only by root (such as {{ic|/etc/sudoers}}) are included in the output.<br />
<br />
{{Tip|See [[#Listing all changed files from packages]] to list all changed files ''pacman'' knows about, not only backup files.}}<br />
<br />
=== Back up the pacman database ===<br />
<br />
The following command can be used to back up the local ''pacman'' database:<br />
<br />
$ tar -cjf pacman_database.tar.bz2 /var/lib/pacman/local<br />
<br />
Store the backup ''pacman'' database file on one or more offline media, such as a USB stick, external hard drive, or CD-R.<br />
<br />
The database can be restored by moving the {{ic|pacman_database.tar.bz2}} file into the {{ic|/}} directory and executing the following command:<br />
<br />
# tar -xjvf pacman_database.tar.bz2<br />
<br />
{{Note|If the ''pacman'' database files are corrupted, and there is no backup file available, there exists some hope of rebuilding the ''pacman'' database. Consult [[#Restore pacman's local database]].}}<br />
<br />
{{Tip|The {{AUR|pakbak-git}} package provides a script and a [[systemd]] service to automate the task. Configuration is possible in {{ic|/etc/pakbak.conf}}.}}<br />
<br />
=== Check changelogs easily ===<br />
<br />
When maintainers update packages, commits are often commented in a useful fashion. Users can quickly check these from the command line by installing {{AUR|pacolog}}. This utility lists recent commit messages for packages from the official repositories or the AUR, by using {{ic|pacolog ''package''}}.<br />
<br />
== Installation and recovery ==<br />
<br />
Alternative ways of getting and restoring packages.<br />
<br />
=== Installing packages from a CD/DVD or USB stick ===<br />
<br />
{{Merge|#Custom local repository|Use as an example and avoid duplication}}<br />
<br />
To download packages, or groups of packages:<br />
<br />
# cd ~/Packages<br />
# pacman -Syw --cachedir . base base-devel grub-bios xorg gimp<br />
# repo-add ./custom.db.tar.gz ./*<br />
<br />
Pacman, which references the host installation by default, will not resolve and download dependencies that are already installed. In cases where all packages and dependencies are wanted, it is recommended to create a temporary blank database and reference it with {{ic|--dbpath}}:<br />
<br />
# mkdir /tmp/blankdb<br />
# pacman -Syw --cachedir . --dbpath /tmp/blankdb base base-devel grub-bios xorg gimp<br />
# repo-add ./custom.db.tar.gz ./*<br />
<br />
Then you can burn the "Packages" folder to a CD/DVD or transfer it to a USB stick, external HDD, etc.<br />
<br />
To install:<br />
<br />
'''1.''' Mount the media:<br />
<br />
# mkdir /mnt/repo<br />
# mount /dev/sr0 /mnt/repo #For a CD/DVD.<br />
# mount /dev/sdxY /mnt/repo #For a USB stick.<br />
<br />
'''2.''' Edit {{ic|pacman.conf}} and add this repository ''before'' the other ones (core, extra, etc.). This is important: do not just uncomment the example at the bottom. Listing it first ensures that the files from the CD/DVD/USB take precedence over those in the standard repositories:<br />
<br />
{{hc|/etc/pacman.conf|2=<br />
[custom]<br />
SigLevel = PackageRequired<br />
Server = file:///mnt/repo/Packages}}<br />
<br />
'''3.''' Finally, synchronize the ''pacman'' database to be able to use the new repository:<br />
<br />
# pacman -Syu<br />
<br />
=== Custom local repository ===<br />
<br />
Use the ''repo-add'' script included with ''pacman'' to generate a database for a personal repository. Use {{ic|repo-add --help}} for more details on its usage. <br />
A package database is a tar file, optionally compressed. Valid extensions are ''.db'' or ''.files'' followed by an archive extension of ''.tar'', ''.tar.gz'', ''.tar.bz2'', ''.tar.xz'', ''.tar.zst'', or ''.tar.Z''. The file does not need to exist, but all parent directories must exist.<br />
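The naming rule can be sketched as a small shell check (the helper {{ic|is_valid_db_name}} is hypothetical; ''repo-add'' performs the real validation itself):<br />

```shell
# Hypothetical helper mirroring the rule stated above: the database name
# must end in .db or .files followed by an accepted archive extension.
is_valid_db_name() {
    case $1 in
        *.db.tar|*.db.tar.gz|*.db.tar.bz2|*.db.tar.xz|*.db.tar.zst|*.db.tar.Z) return 0 ;;
        *.files.tar|*.files.tar.gz|*.files.tar.bz2|*.files.tar.xz|*.files.tar.zst|*.files.tar.Z) return 0 ;;
        *) return 1 ;;
    esac
}

is_valid_db_name custom.db.tar.gz && echo valid      # accepted
is_valid_db_name custom.tar.gz    || echo invalid    # missing .db/.files part
```
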
<br />
To add a new package to the database, or to replace the old version of an existing package in the database, run:<br />
<br />
$ repo-add ''/path/to/repo.db.tar.gz /path/to/package-1.0-1-x86_64.pkg.tar.xz''<br />
<br />
The database and the packages do not need to be in the same directory when using ''repo-add'', but keep in mind that when using ''pacman'' with that database, they should be together. Storing all the built packages to be included in the repository in one directory also allows using shell glob expansion to add or update multiple packages at once:<br />
<br />
$ repo-add ''/path/to/repo.db.tar.gz /path/to/*.pkg.tar.xz''<br />
<br />
{{Warning|''repo-add'' adds the entries into the database in the same order as passed on the command line. If multiple versions of the same package are involved, care must be taken to ensure that the correct version is added last. In particular, note that lexical order used by the shell depends on the locale and differs from the {{man|8|vercmp}} ordering used by ''pacman''.}}<br />
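The difference is easy to reproduce with coreutils alone; {{ic|sort -V}} approximates version ordering (it is not guaranteed to match {{man|8|vercmp}} for every unusual version string, so treat it as an illustration only):<br />

```shell
# Lexical order puts pkg-1.10-1 before pkg-1.9-1, the wrong order
# for repo-add, which needs the newest version last:
printf '%s\n' pkg-1.9-1 pkg-1.10-1 | LC_ALL=C sort

# Version-aware order puts pkg-1.9-1 first, pkg-1.10-1 last:
printf '%s\n' pkg-1.9-1 pkg-1.10-1 | sort -V
```
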
<br />
If you are looking to support multiple architectures then precautions should be taken to prevent errors from occurring. Each architecture should have its own directory tree:<br />
<br />
{{hc|$ tree ~/customrepo/ {{!}} sed "s/$(uname -m)/''arch''/g"|<br />
/home/archie/customrepo/<br />
└── ''arch''<br />
├── customrepo.db -> customrepo.db.tar.xz<br />
├── customrepo.db.tar.xz<br />
├── customrepo.files -> customrepo.files.tar.xz<br />
├── customrepo.files.tar.xz<br />
└── personal-website-git-b99cce0-1-''arch''.pkg.tar.xz<br />
<br />
1 directory, 5 files<br />
}}<br />
<br />
The ''repo-add'' executable checks whether the database file name is appropriate. If it is not, you will run into error messages similar to this:<br />
<br />
==> ERROR: '/home/archie/customrepo/''arch''/foo-''arch''.pkg.tar.xz' does not have a valid database archive extension.<br />
<br />
''repo-remove'' is used to remove packages from the package database, except that only package names are specified on the command line.<br />
<br />
$ repo-remove ''/path/to/repo.db.tar.gz pkgname''<br />
<br />
Once the local repository database has been created, add the repository to {{ic|pacman.conf}} for each system that is to use the repository. An example of a custom repository is in {{ic|pacman.conf}}. The repository's name is the database filename with the file extension omitted. In the case of the example above the repository's name would simply be ''repo''. Reference the repository's location using a {{ic|file://}} URL, or via FTP using ftp://localhost/path/to/directory.<br />
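For instance, with the {{ic|repo.db.tar.gz}} example above stored in {{ic|/path/to/}}, the client entry could look as follows ({{ic|1=SigLevel = Optional TrustAll}} is an assumption for an unsigned personal repository; tighten it if you sign your packages):<br />

```ini
[repo]
SigLevel = Optional TrustAll
Server = file:///path/to
```
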
<br />
If willing, add the custom repository to the [[Unofficial user repositories|list of unofficial user repositories]], so that the community can benefit from it.<br />
<br />
=== Network shared pacman cache ===<br />
<br />
{{Merge|Package_Proxy_Cache|Same topic}}<br />
If you happen to run several Arch boxes on your LAN, you can share packages between them to greatly decrease your download times. Keep in mind that you should not share between different architectures (e.g. i686 and x86_64) or you will run into problems.<br />
<br />
==== Read-only cache ====<br />
<br />
{{Note|1=If pacman fails to download 3 packages from the server, it will use another mirror instead. See https://bbs.archlinux.org/viewtopic.php?id=268066.}}<br />
<br />
If you are looking for a quick solution, you can simply run a [https://gist.github.com/willurd/5720255 basic temporary webserver] which other computers can use as their first mirror.<br />
<br />
First of all, make pacman databases available into the folder you will serve:<br />
<br />
# ln -s /var/lib/pacman/sync/*.db /var/cache/pacman/pkg/<br />
<br />
Then start serving this folder. For example, with [[Python]] [https://docs.python.org/3/library/http.server.html#http-server-cli http.server] module:<br />
$ python -m http.server -d /var/cache/pacman/pkg/<br />
<br />
{{Tip|By default, Python {{ic|http.server}} listens on port {{ic|8000}}. To use another port, simply add it as an argument:<br />
<br />
$ python -m http.server -d /var/cache/pacman/pkg/ 8080<br />
}}<br />
<br />
Then [[textedit|edit]] {{ic|/etc/pacman.d/mirrorlist}} on each client machine to add this server as the top entry:<br />
<br />
{{hc|/etc/pacman.d/mirrorlist|2=<br />
Server = http://''server-ip'':''port''<br />
...<br />
}}<br />
<br />
{{Warning|Do '''not''' append {{ic|/repos/$repo/os/$arch}} to this custom server like for other entries, as this hierarchy does not exist and therefore queries will fail.}}<br />
<br />
If looking for a more standalone solution, {{Pkg|darkhttpd}} offers a very minimal webserver. Replace the previous {{ic|python}} command with e.g.:<br />
<br />
$ sudo -u http darkhttpd /var/cache/pacman/pkg --no-server-id<br />
<br />
You could also run darkhttpd as a ''systemd'' service for convenience: see [[Systemd#Writing unit files]].<br />
<br />
{{Pkg|miniserve}}, a small web server written in Rust, can also be used:<br />
<br />
$ miniserve /var/cache/pacman/pkg<br />
<br />
Then edit {{ic|/etc/pacman.d/mirrorlist}} as above, using the first URL that ''miniserve'' is available at.<br />
<br />
If you are already running a web server for some other purpose, you might wish to reuse that as your local repository server instead. For example, if you already serve a site with [[nginx]], you can add an ''nginx'' server block listening on port 8080:<br />
<br />
{{hc|/etc/nginx/nginx.conf|<br />
server {<br />
listen 8080;<br />
root /var/cache/pacman/pkg;<br />
server_name myarchrepo.localdomain;<br />
try_files $uri $uri/;<br />
}<br />
}}<br />
<br />
Remember to [[restart]] {{ic|nginx.service}} after making this change.<br />
<br />
{{Tip|Whichever web server you use, make sure the firewall configuration (if any) allows the configured port to be reached by the desired traffic, and disallows any undesired traffic. See [[Security#Network and firewalls]].}}<br />
<br />
==== Overlay mount of read-only cache ====<br />
<br />
It is possible to use one machine on a local network as a read-only package cache by [[Overlay_filesystem|overlay mounting]] its {{ic|/var/cache/pacman/pkg}} directory. Such a configuration is advantageous if this server has a reasonably comprehensive selection of up-to-date packages installed which are also used by the other boxes. This is useful for maintaining a number of machines at the end of a low-bandwidth upstream connection.<br />
<br />
As an example, to use this method:<br />
<br />
# mkdir /tmp/remote_pkg /mnt/workdir_pkg /tmp/pacman_pkg<br />
# sshfs ''remote_username''@''remote_pkgcache_addr'':/var/cache/pacman/pkg /tmp/remote_pkg -C<br />
# mount -t overlay overlay -o lowerdir=/tmp/remote_pkg,upperdir=/var/cache/pacman/pkg,workdir=/mnt/workdir_pkg /tmp/pacman_pkg<br />
<br />
{{Note|The working directory must be an empty directory on the same mounted device as the upper directory. See [[Overlay filesystem#Usage]].}}<br />
<br />
{{Tip|1=If listing the {{ic|/tmp/pacman_pkg}} overlay directory gives errors, e.g., "Stale file handle", try overlay mounting with options {{ic|1=-o redirect_dir=off -o index=off}} }}<br />
<br />
After this, run ''pacman'' using the option {{ic|--cachedir /tmp/pacman_pkg}}, e.g.:<br />
<br />
# pacman -Syu --cachedir /tmp/pacman_pkg<br />
<br />
==== Distributed read-only cache ====<br />
<br />
There are Arch-specific tools for automatically discovering other computers on your network offering a package cache. Try {{Pkg|pacredir}}, [[pacserve]], {{AUR|pkgdistcache}}, or {{AUR|paclan}}. pkgdistcache uses Avahi instead of plain UDP which may work better in certain home networks that route instead of bridge between WiFi and Ethernet.<br />
<br />
Historically, there was [https://bbs.archlinux.org/viewtopic.php?id=64391 PkgD] and [https://github.com/toofishes/multipkg multipkg], but they are no longer maintained.<br />
<br />
==== Read-write cache ====<br />
<br />
In order to share packages between multiple computers, simply share {{ic|/var/cache/pacman/}} using any network-based mount protocol. This section shows how to use [[SSHFS]] to share a package cache plus the related library-directories between multiple computers on the same local network. Keep in mind that a network shared cache can be slow depending on the file-system choice, among other factors.<br />
<br />
First, install any network-supporting filesystem packages: {{Pkg|sshfs}}, {{Pkg|curlftpfs}}, {{Pkg|samba}} or {{Pkg|nfs-utils}}.<br />
<br />
{{Tip|<br />
* To use ''sshfs'', consider reading [[Using SSH Keys]].<br />
* By default, ''smbfs'' does not serve filenames that contain colons, which results in the client downloading the offending package afresh. To prevent this, use the {{ic|mapchars}} mount option on the client.<br />
}}<br />
<br />
Then, to share the actual packages, mount {{ic|/var/cache/pacman/pkg}} from the server to {{ic|/var/cache/pacman/pkg}} on every client machine.<br />
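As a sketch, an ''sshfs'' client mount could be made persistent with an [[fstab]] entry like the following (the host name {{ic|pkgserver}} and the on-demand automount options are assumptions to adapt to your setup):<br />

```
pkgserver:/var/cache/pacman/pkg  /var/cache/pacman/pkg  fuse.sshfs  noauto,x-systemd.automount,_netdev,allow_other  0  0
```
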
<br />
{{Warning|Do not make {{ic|/var/cache/pacman/pkg}} or any of its ancestors (e.g., {{ic|/var}}) a symlink. Pacman expects these to be directories. When ''pacman'' re-installs or upgrades itself, it will remove the symlinks and create empty directories instead. However during the transaction ''pacman'' relies on some files residing there, hence breaking the update process. Refer to {{Bug|50298}} for further details.}}<br />
<br />
==== Two-way with rsync ====<br />
<br />
Another approach in a local environment is [[rsync]]. Choose a server for caching and enable the [[Rsync#As a daemon|rsync daemon]]. On the clients, synchronize two-way with this share via the rsync protocol. Filenames that contain colons are no problem for the rsync protocol.<br />
<br />
Draft example for a client, using {{ic|uname -m}} within the share name ensures an architecture-dependent sync:<br />
# rsync rsync://server/share_$(uname -m)/ /var/cache/pacman/pkg/ ...<br />
# pacman ...<br />
# paccache ...<br />
# rsync /var/cache/pacman/pkg/ rsync://server/share_$(uname -m)/ ...<br />
<br />
==== Dynamic reverse proxy cache using nginx ====<br />
<br />
[[nginx]] can be used to proxy package requests to official upstream mirrors and cache the results to the local disk. All subsequent requests for that package will be served directly from the local cache, minimizing the amount of internet traffic needed to update a large number of computers. <br />
<br />
In this example, the cache server will run at {{ic|<nowiki>http://cache.domain.example:8080/</nowiki>}} and store the packages in {{ic|/srv/http/pacman-cache/}}. <br />
<br />
Install [[nginx]] on the computer that is going to host the cache. Create the directory for the cache and adjust the permissions so nginx can write files to it:<br />
<br />
# mkdir /srv/http/pacman-cache<br />
# chown http:http /srv/http/pacman-cache<br />
<br />
Use the [https://github.com/nastasie-octavian/nginx_pacman_cache_config/blob/c54eca4776ff162ab492117b80be4df95880d0e2/nginx.conf nginx pacman cache config] as a starting point for {{ic|/etc/nginx/nginx.conf}}. Check that the {{ic|resolver}} directive works for your needs. In the upstream server blocks, configure the {{ic|proxy_pass}} directives with addresses of official mirrors, see examples in the configuration file about the expected format. Once you are satisfied with the configuration file [[Nginx#Running|start and enable nginx]].<br />
<br />
In order to use the cache each Arch Linux computer (including the one hosting the cache) must have the following line at the top of the {{ic|mirrorlist}} file:<br />
<br />
{{hc|/etc/pacman.d/mirrorlist|<nowiki><br />
Server = http://cache.domain.example:8080/$repo/os/$arch<br />
...<br />
</nowiki>}}<br />
<br />
{{Note| You will need to create a method to clear old packages, as the cache directory will continue to grow over time. {{ic|paccache}} (which is provided by {{Pkg|pacman-contrib}}) can be used to automate this using retention criteria of your choosing. For example, {{ic|find /srv/http/pacman-cache/ -type d -exec paccache -v -r -k 2 -c {} \;}} will keep the last 2 versions of packages in your cache directory.}}<br />
<br />
==== Pacoloco proxy cache server ====<br />
<br />
[https://github.com/anatol/pacoloco Pacoloco] is an easy-to-use proxy cache server for ''pacman'' repositories. It also allows [https://github.com/anatol/pacoloco/commit/048b09956b0d8ef71c0ed1f804fd332d9ab5e3c8 automatic prefetching] of the cached packages.<br />
<br />
It can be installed as {{Pkg|pacoloco}}. Open the configuration file and add ''pacman'' mirrors:<br />
<br />
{{hc|/etc/pacoloco.yaml|<nowiki><br />
port: 9129<br />
repos:<br />
mycopy:<br />
urls:<br />
- http://mirror.lty.me/archlinux<br />
- http://mirrors.kernel.org/archlinux<br />
</nowiki>}}<br />
<br />
[[Restart]] {{ic|pacoloco.service}} and the proxy repository will be available at {{ic|http://''myserver'':9129/repo/mycopy}}.<br />
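<br />
To make ''pacman'' use the cache, point the mirrorlist at the proxy. A sketch, where {{ic|''myserver''}} is a placeholder for the host running Pacoloco and {{ic|mycopy}} matches the repository name from the configuration above:<br />
<br />
```
Server = http://myserver:9129/repo/mycopy
```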
<br />
==== Flexo proxy cache server ====<br />
<br />
[https://github.com/nroi/flexo Flexo] is yet another proxy cache server for ''pacman'' repositories. Flexo is available as {{AUR|flexo-git}}. Once installed, [[start]] the {{ic|flexo.service}} unit.<br />
<br />
Flexo runs on port {{ic|7878}} by default. Add {{ic|1=Server = http://''myserver'':7878/$repo/os/$arch}} at the top of your {{ic|/etc/pacman.d/mirrorlist}} so that ''pacman'' downloads packages via Flexo.<br />
<br />
==== Synchronize pacman package cache using synchronization programs ====<br />
<br />
Use [[Syncthing]] or [[Resilio Sync]] to synchronize the ''pacman'' cache folders (i.e. {{ic|/var/cache/pacman/pkg}}).<br />
<br />
==== Preventing unwanted cache purges ====<br />
<br />
By default, {{ic|pacman -Sc}} removes package tarballs from the cache that correspond to packages that are not installed on the machine the command was issued on. Because ''pacman'' cannot predict what packages are installed on all machines that share the cache, it will end up deleting files that should not be deleted.<br />
<br />
To clean up the cache so that only ''outdated'' tarballs are deleted, add this entry in the {{ic|[options]}} section of {{ic|/etc/pacman.conf}}:<br />
<br />
CleanMethod = KeepCurrent<br />
<br />
=== Recreate a package from the file system ===<br />
<br />
To recreate a package from the file system, use {{AUR|fakepkg}}. Files from the system are taken as they are, hence any modifications will be present in the assembled package. Distributing the recreated package is therefore discouraged; see [[ABS]] and [[Arch Linux Archive]] for alternatives.<br />
<br />
=== List of installed packages ===<br />
<br />
Keeping a list of all explicitly installed packages can be useful to back up a system or quicken the installation of a new one:<br />
<br />
$ pacman -Qqe > pkglist.txt<br />
<br />
{{Note|<br />
* With option {{ic|-t}}, packages that are already required by other explicitly installed packages are not listed. When reinstalling from such a list, they will be installed as dependencies only.<br />
* With option {{ic|-n}}, foreign packages (e.g. from the [[AUR]]) are omitted from the list.<br />
* Use {{ic|comm -13 <(pacman -Qqdt {{!}} sort) <(pacman -Qqdtt {{!}} sort) > optdeplist.txt}} to also create a list of the installed optional dependencies which can be reinstalled with {{ic|--asdeps}}.<br />
* Use {{ic|pacman -Qqem > foreignpkglist.txt}} to create the list of AUR and other foreign packages that have been explicitly installed.}}<br />
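<br />
As a quick illustration of the ''comm'' pattern in the note above, here it is run on two fabricated, sorted lists (the file names and package names are placeholders, standing in for the output of {{ic|pacman -Qqdt {{!}} sort}} and {{ic|pacman -Qqdtt {{!}} sort}}):<br />
<br />
```shell
# Fabricated sorted lists; "optdep" stands in for an installed optional dependency.
printf 'a\nb\n' > /tmp/qqdt.txt
printf 'a\nb\noptdep\n' > /tmp/qqdtt.txt
# -13 suppresses lines unique to file 1 and lines common to both,
# leaving only lines unique to file 2.
comm -13 /tmp/qqdt.txt /tmp/qqdtt.txt
```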
<br />
To keep an up-to-date list of explicitly installed packages (e.g. in combination with a versioned {{ic|/etc/}}), you can set up a [[Pacman#Hooks|hook]]. Example:<br />
<br />
[Trigger]<br />
Operation = Install<br />
Operation = Remove<br />
Type = Package<br />
Target = *<br />
<br />
[Action]<br />
When = PostTransaction<br />
Exec = /bin/sh -c '/usr/bin/pacman -Qqe > /etc/pkglist.txt'<br />
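<br />
A minimal sketch of installing such a hook file follows. On a real system the hook belongs in {{ic|/etc/pacman.d/hooks}} (pacman's default HookDir); the {{ic|HOOKDIR}} variable and the {{ic|/tmp}} default below exist only so the snippet can be tried without touching the system:<br />
<br />
```shell
# Write the hook shown above into a hook directory.
# On a real system, run with HOOKDIR=/etc/pacman.d/hooks.
hookdir="${HOOKDIR:-/tmp/hooks-demo}"
mkdir -p "$hookdir"
cat > "$hookdir/pkglist.hook" <<'EOF'
[Trigger]
Operation = Install
Operation = Remove
Type = Package
Target = *

[Action]
When = PostTransaction
Exec = /bin/sh -c '/usr/bin/pacman -Qqe > /etc/pkglist.txt'
EOF
```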
<br />
=== Install packages from a list ===<br />
<br />
To install packages from a previously saved list of packages, while not reinstalling previously installed packages that are already up-to-date, run:<br />
<br />
# pacman -S --needed - < pkglist.txt<br />
<br />
However, the list will likely contain foreign packages, such as those from the AUR or installed locally. To filter the foreign packages out of the list, the previous command line can be refined as follows:<br />
<br />
# pacman -S --needed $(comm -12 <(pacman -Slq | sort) <(sort pkglist.txt))<br />
<br />
Finally, to make sure the installed packages on your system match the list, and to remove all packages that are not mentioned in it:<br />
<br />
# pacman -Rsu $(comm -23 <(pacman -Qq | sort) <(sort pkglist.txt))<br />
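<br />
The two ''comm'' invocations above can be illustrated on fabricated, sorted lists ({{ic|-12}} keeps lines common to both inputs, {{ic|-23}} keeps lines unique to the first; all package names below are made up):<br />
<br />
```shell
# repo.txt stands in for `pacman -Slq | sort` (packages available natively),
# pkglist.txt for the sorted saved list, which includes one foreign package.
printf 'bash\ncoreutils\npacman\n' > /tmp/repo.txt
printf 'aurhelper\nbash\npacman\n' > /tmp/pkglist.txt
comm -12 /tmp/repo.txt /tmp/pkglist.txt   # listed packages that are native
comm -23 /tmp/pkglist.txt /tmp/repo.txt   # listed packages that are foreign
```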
<br />
{{Tip|These tasks can be automated. See {{AUR|bacpac}}, {{AUR|packup}}, {{AUR|pacmanity}}, and {{AUR|pug}} for examples.}}<br />
<br />
=== Listing all changed files from packages ===<br />
<br />
If you suspect file corruption (e.g. due to a software or hardware failure), but are unsure which files, if any, were corrupted, you may want to compare the installed files against the hash sums stored in the packages. This can be done with {{Pkg|pacutils}}:<br />
<br />
# paccheck --md5sum --quiet<br />
<br />
For recovery of the database see [[#Restore pacman's local database]]. The {{ic|mtree}} files can also be [[#Viewing a single file inside a .pkg file|extracted as {{ic|.MTREE}} from the respective package files]].<br />
<br />
{{Note|This should '''not''' be used as is when suspecting malicious changes! In this case security precautions such as using a live medium and an independent source for the hash sums are advised.}}<br />
<br />
=== Reinstalling all packages ===<br />
<br />
To reinstall all native packages, use:<br />
<br />
# pacman -Qqn | pacman -S -<br />
<br />
Foreign (AUR) packages must be reinstalled separately; you can list them with {{ic|pacman -Qqm}}.<br />
<br />
Pacman preserves the [[installation reason]] by default.<br />
<br />
{{Warning|To force all packages to be overwritten, use {{ic|1=--overwrite=*}}, though this should be an absolute last resort. See [[System maintenance#Avoid certain pacman commands]].}}<br />
<br />
=== Restore pacman's local database ===<br />
<br />
See [[pacman/Restore local database]].<br />
<br />
=== Recovering a USB key from existing install ===<br />
<br />
If you have Arch installed on a USB key and manage to mess it up (e.g. by removing it while it is still being written to), it is possible to re-install all the packages and hopefully get it back up and working again (assuming the USB key is mounted at {{ic|/newarch}}):<br />
<br />
# pacman -S $(pacman -Qq --dbpath /newarch/var/lib/pacman) --root /newarch --dbpath /newarch/var/lib/pacman<br />
<br />
=== Viewing a single file inside a .pkg file ===<br />
<br />
For example, if you want to see the contents of {{ic|/etc/systemd/logind.conf}} supplied within the {{Pkg|systemd}} package:<br />
<br />
$ bsdtar -xOf /var/cache/pacman/pkg/systemd-204-3-x86_64.pkg.tar.xz etc/systemd/logind.conf<br />
<br />
Or you can use {{Pkg|vim}} to browse the archive:<br />
<br />
$ vim /var/cache/pacman/pkg/systemd-204-3-x86_64.pkg.tar.xz<br />
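<br />
The single-file extraction pattern above can be tried on a throwaway archive (all paths below are illustrative only; GNU tar supports the same {{ic|-xOf}} flags as bsdtar):<br />
<br />
```shell
# Build a small archive, then stream one member to stdout with -O.
mkdir -p /tmp/pkgdemo/etc
printf 'hello\n' > /tmp/pkgdemo/etc/demo.conf
tar -C /tmp/pkgdemo -cf /tmp/demo.tar etc/demo.conf
tar -xOf /tmp/demo.tar etc/demo.conf   # prints the member to stdout
```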
<br />
=== Find applications that use libraries from older packages ===<br />
<br />
Already running processes do not automatically notice changes caused by updates. Instead, they continue using the old library versions. This may be undesirable due to potential security vulnerabilities, other bugs, or version incompatibilities.<br />
<br />
Processes depending on updated libraries may be found using either {{pkg|htop}}, which highlights the names of the affected programs, or with a snippet based on {{pkg|lsof}}, which also prints the names of the libraries:<br />
<br />
# lsof +c 0 | grep -w DEL | awk '1 { print $1 ": " $NF }' | sort -u<br />
<br />
This solution will only detect files that are normally kept open by running processes, which basically limits it to shared libraries ({{ic|.so}} files). It may miss other dependencies, such as those of Java or Python applications.<br />
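<br />
To illustrate the awk/sort stage of the snippet above, here it is run on fabricated ''lsof''-like lines (the process names and library paths are made up; the first field is the command name, the last field the file path):<br />
<br />
```shell
# Three fabricated lines standing in for `lsof +c 0 | grep -w DEL` output;
# the duplicate is collapsed by sort -u.
printf '%s\n' \
  'firefox 123 user DEL REG /usr/lib/libfoo.so' \
  'bash 45 user DEL REG /usr/lib/libbar.so' \
  'firefox 123 user DEL REG /usr/lib/libfoo.so' |
  awk '1 { print $1 ": " $NF }' | sort -u
```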
<br />
=== Installing only content in required languages ===<br />
<br />
Many packages attempt to install documentation and translations in several languages. Some programs are designed to remove such unnecessary files, such as {{AUR|localepurge}}, which runs after a package is installed to delete the unneeded locale files. A more direct approach is provided by the {{ic|NoExtract}} directive in {{ic|pacman.conf}}, which prevents these files from ever being installed.<br />
<br />
{{Warning|1=Some users noted that removing locales has resulted in [[Special:Permalink/460285#Dangerous NoExtract example|unintended consequences]], even under [https://bbs.archlinux.org/viewtopic.php?id=250846 Xorg].}}<br />
<br />
The example below retains only English (US) files, or none at all:<br />
<br />
{{hc|/etc/pacman.conf|2=<br />
NoExtract = usr/share/help/* !usr/share/help/C/*<br />
NoExtract = usr/share/gtk-doc/html/*<br />
NoExtract = usr/share/locale/* usr/share/X11/locale/*/* usr/share/i18n/locales/* opt/google/chrome/locales/* !usr/share/X11/locale/C/*<br />
NoExtract = !*locale*/en*/* !usr/share/*locale*/locale.*<br />
NoExtract = !usr/share/*locales/en_?? !usr/share/*locales/i18n* !usr/share/*locales/iso*<br />
NoExtract = usr/share/i18n/charmaps/* !usr/share/i18n/charmaps/UTF-8.gz<br />
NoExtract = !usr/share/*locales/trans*<br />
NoExtract = usr/share/man/* !usr/share/man/man*<br />
NoExtract = usr/share/vim/vim*/lang/*<br />
NoExtract = usr/lib/libreoffice/help/en-US/*<br />
NoExtract = usr/share/kbd/locale/*<br />
NoExtract = usr/share/*/translations/*.qm usr/share/*/nls/*.qm usr/share/qt/translations/*.pak !*/en-US.pak # Qt apps<br />
NoExtract = usr/share/*/locales/*.pak opt/*/locales/*.pak usr/lib/*/locales/*.pak !*/en-US.pak # Electron apps<br />
NoExtract = opt/onlyoffice/desktopeditors/dictionaries/* !opt/onlyoffice/desktopeditors/dictionaries/en_US/*<br />
NoExtract = opt/onlyoffice/desktopeditors/editors/web-apps/apps/*/main/locale/* !*/en.json<br />
NoExtract = opt/onlyoffice/desktopeditors/editors/web-apps/apps/*/main/resources/help/* !*/help/en/*<br />
NoExtract = opt/onlyoffice/desktopeditors/converter/empty/*/*<br />
NoExtract = usr/share/ibus/dicts/emoji-*.dict !usr/share/ibus/dicts/emoji-en.dict<br />
}}<br />
<br />
=== Installing packages on bad connection ===<br />
<br />
When trying to install a package over a bad connection (e.g. on a train using a cell phone), use the {{ic|--disable-download-timeout}} option to lessen the chance of receiving errors such as:<br />
<br />
error: failed retrieving file […] Operation too slow. Less than 1 bytes/sec transferred the last 10 seconds<br />
<br />
or<br />
<br />
error: failed retrieving file […] Operation timed out after 10014 milliseconds with 0 out of 0 bytes received<br />
<br />
== Performance ==<br />
<br />
=== Download speeds ===<br />
<br />
When downloading packages, ''pacman'' uses the mirrors in the order they appear in {{ic|/etc/pacman.d/mirrorlist}}. However, the mirror at the top of the list may not be the fastest for you. To select a faster mirror, see [[Mirrors]].<br />
<br />
Pacman's speed in downloading packages can also be improved by using a different application to download packages, instead of ''pacman''<nowiki/>'s built-in file downloader, or by [[pacman#Enabling parallel downloads|enabling parallel downloads]].<br />
<br />
In all cases, make sure you have the latest ''pacman'' before making any modifications:<br />
<br />
# pacman -Syu<br />
<br />
==== Powerpill ====<br />
<br />
[[Powerpill]] is a ''pacman'' wrapper that uses parallel and segmented downloading to try to speed up downloads for ''pacman''.<br />
<br />
==== wget ====<br />
<br />
Using ''wget'' is also very handy if you need more powerful proxy settings than ''pacman''<nowiki/>'s built-in capabilities.<br />
<br />
To use {{ic|wget}}, first [[install]] the {{Pkg|wget}} package then modify {{ic|/etc/pacman.conf}} by uncommenting the following line in the {{ic|[options]}} section:<br />
<br />
XferCommand = /usr/bin/wget --passive-ftp --show-progress -c -q -N %u<br />
<br />
Instead of uncommenting the {{ic|wget}} parameters in {{ic|/etc/pacman.conf}}, you can also modify the {{ic|wget}} configuration file directly (the system-wide file is {{ic|/etc/wgetrc}}, per user files are {{ic|$HOME/.wgetrc}}).<br />
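<br />
For example, a per-user {{ic|~/.wgetrc}} might carry the proxy settings instead of the XferCommand line. A sketch only; the proxy address is a placeholder:<br />
<br />
```
use_proxy = on
http_proxy = http://proxy.example.com:3128/
https_proxy = http://proxy.example.com:3128/
```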
<br />
==== aria2 ====<br />
<br />
[[aria2]] is a lightweight download utility with support for resumable and segmented HTTP/HTTPS and FTP downloads. aria2 allows for multiple and simultaneous HTTP/HTTPS and FTP connections to an Arch mirror, which should result in an increase in download speeds for both file and package retrieval.<br />
<br />
{{Note|Using aria2c in ''pacman''<nowiki/>'s XferCommand will '''not''' result in parallel downloads of multiple packages. Pacman invokes the XferCommand with a single package at a time and waits for it to complete before invoking the next. To download multiple packages in parallel, see [[Powerpill]].}}<br />
<br />
Install {{Pkg|aria2}}, then edit {{ic|/etc/pacman.conf}} by adding the following line to the {{ic|[options]}} section:<br />
<br />
XferCommand = /usr/bin/aria2c --allow-overwrite=true --continue=true --file-allocation=none --log-level=error --max-tries=2 --max-connection-per-server=2 --max-file-not-found=5 --min-split-size=5M --no-conf --remote-time=true --summary-interval=60 --timeout=5 --dir=/ --out %o %u<br />
<br />
{{Tip|1=[https://bbs.archlinux.org/viewtopic.php?pid=1491879#p1491879 This alternative configuration for using pacman with aria2] tries to simplify configuration and adds more configuration options.}}<br />
<br />
See {{man|1|aria2c|OPTIONS}} for used aria2c options.<br />
<br />
* {{ic|-d, --dir}}: The directory to store the downloaded file(s) as specified by ''pacman''.<br />
* {{ic|-o, --out}}: The output file name(s) of the downloaded file(s). <br />
* {{ic|%o}}: Variable which represents the local filename(s) as specified by ''pacman''.<br />
* {{ic|%u}}: Variable which represents the download URL as specified by ''pacman''.<br />
<br />
==== Other applications ====<br />
<br />
There are other download applications that you can use with ''pacman''. The following lists them along with their associated XferCommand settings:<br />
<br />
* {{ic|snarf}}: {{ic|1=XferCommand = /usr/bin/snarf -N %u}}<br />
* {{ic|lftp}}: {{ic|1=XferCommand = /usr/bin/lftp -c pget %u}}<br />
* {{ic|axel}}: {{ic|1=XferCommand = /usr/bin/axel -n 2 -v -a -o %o %u}}<br />
* {{ic|hget}}: {{ic|1=XferCommand = /usr/bin/hget %u -n 2 -skip-tls false}} (please read the [https://github.com/huydx/hget documentation on the GitHub project page] for more info)<br />
* {{ic|saldl}}: {{ic|1=XferCommand = /usr/bin/saldl -c6 -l4 -s2m -o %o %u}} (please read the [https://saldl.github.io documentation on the project page] for more info)<br />
<br />
== Utilities ==<br />
<br />
* {{App|Lostfiles|Script that identifies files not owned by any package.|https://github.com/graysky2/lostfiles|{{Pkg|lostfiles}}}}<br />
* {{App|pacutils|Helper library for libalpm based programs.|https://github.com/andrewgregory/pacutils|{{Pkg|pacutils}}}}<br />
* {{App|[[pkgfile]]|Tool that finds what package owns a file.|https://github.com/falconindy/pkgfile|{{Pkg|pkgfile}}}}<br />
* {{App|pkgtools|Collection of scripts for Arch Linux packages.|https://github.com/Daenyth/pkgtools|{{AUR|pkgtools}}}}<br />
* {{App|pkgtop|Interactive package manager and resource monitor designed for GNU/Linux.|https://github.com/orhun/pkgtop|{{AUR|pkgtop-git}}}}<br />
* {{App|[[Powerpill]]|Uses parallel and segmented downloading through [[aria2]] and [[Reflector]] to try to speed up downloads for ''pacman''.|https://xyne.dev/projects/powerpill/|{{AUR|powerpill}}}}<br />
* {{App|repoctl|Tool to help manage local repositories.|https://github.com/cassava/repoctl|{{AUR|repoctl}}}}<br />
* {{App|repose|An Arch Linux repository building tool.|https://github.com/vodik/repose|{{Pkg|repose}}}}<br />
* {{App|[[Snapper#Wrapping_pacman_transactions_in_snapshots|snap-pac]]|Make ''pacman'' automatically use snapper to create pre/post snapshots like openSUSE's YaST.|https://github.com/wesbarnett/snap-pac|{{Pkg|snap-pac}}}}<br />
* {{App|vrms-arch|A virtual Richard M. Stallman to tell you which non-free packages are installed.|https://github.com/orospakr/vrms-arch|{{AUR|vrms-arch-git}}}}<br />
<br />
=== Graphical ===<br />
<br />
{{Warning|PackageKit opens up system permissions by default, and is otherwise not recommended for general usage. See {{Bug|50459}} and {{Bug|57943}}.}}<br />
<br />
* {{App|Apper|Qt 5 application and package manager using PackageKit written in C++. Supports [https://www.freedesktop.org/wiki/Distributions/AppStream/ AppStream metadata].|https://userbase.kde.org/Apper|{{Pkg|apper}}}}<br />
* {{App|Deepin App Store|Third party app store for DDE built with DTK, using PackageKit. Supports [https://www.freedesktop.org/wiki/Distributions/AppStream/ AppStream metadata].|https://github.com/dekzi/dde-store|{{Pkg|deepin-store}}}}<br />
* {{App|Discover|Qt 5 application manager using PackageKit written in C++/QML. Supports [https://www.freedesktop.org/wiki/Distributions/AppStream/ AppStream metadata], [[Flatpak]] and [[fwupd|firmware updates]]. |https://userbase.kde.org/Discover|{{Pkg|discover}}}}<br />
* {{App|GNOME PackageKit|GTK 3 package manager using PackageKit written in C.|https://freedesktop.org/software/PackageKit/|{{Pkg|gnome-packagekit}}}}<br />
* {{App|GNOME Software|GTK 3 application manager using PackageKit written in C. Supports [https://www.freedesktop.org/wiki/Distributions/AppStream/ AppStream metadata], [[Flatpak]] and [[fwupd|firmware updates]]. |https://wiki.gnome.org/Apps/Software|{{Pkg|gnome-software}}}}<br />
* {{App|pcurses|Curses TUI ''pacman'' wrapper written in C++.|https://github.com/schuay/pcurses|{{AUR|pcurses}}}}<br />
* {{App|tkPacman|Tk pacman wrapper written in Tcl.|https://sourceforge.net/projects/tkpacman|{{AUR|tkpacman}}}}</div>Cmsiglerhttps://wiki.archlinux.org/index.php?title=Talk:Pacman/Tips_and_tricks&diff=727760Talk:Pacman/Tips and tricks2022-04-27T16:22:33Z<p>Cmsigler: Answer for factual accuracy tag on remote overlay method</p>
<hr />
<div>== Leading slash ==<br />
<br />
[[Pacman/Tips_and_tricks#aria2]] doesn't work without leading slash, i.e. {{ic|-d /}} turning file names to {{ic|//var/cache/...}}. The article mentions this, but it doesn't mention why. -- [[User:Alad|Alad]] ([[User talk:Alad|talk]]) 05:28, 16 October 2015 (UTC)<br />
<br />
:You would have to go [https://wiki.archlinux.org/index.php?title=Improve_pacman_performance&diff=32104&oldid=30674 way] [https://wiki.archlinux.org/index.php?title=Improve_pacman_performance&diff=next&oldid=115292 back] to track this. It seems to have worked without {{ic|-d /}} even in 2006: [https://wiki.archlinux.org/index.php?title=Faster_Pacman_Downloads&oldid=15627], [https://wiki.archlinux.org/index.php?title=Improve_pacman_performance&oldid=17759]. <s>I guess that simply nobody asked the right question...</s> -- [[User:Lahwaacz|Lahwaacz]] ([[User talk:Lahwaacz|talk]]) 12:30, 16 October 2015 (UTC)<br />
:Oops, it does ''not'' work without {{ic|-d /}}. Then the problem must be on aria's side, which expects a file name for the {{ic|-o}} option, which is then catenated with {{ic|-d}} into the full path. Assuming that {{ic|-d}} defaults to the cwd, {{ic|/var/cache/}} would appear twice in the result. -- [[User:Lahwaacz|Lahwaacz]] ([[User talk:Lahwaacz|talk]]) 12:43, 16 October 2015 (UTC)<br />
<br />
== pacman cache ==<br />
<br />
I still think we should warn people not to symlink /var or anything under it. It leaves the whole system unusable because if the cache disappears during a pacman transaction, you're left with missing /usr/lib libraries, and nothing works, including pacman itself. This is a serious enough problem that it can take hours to figure out how to recover. If the wiki had mentioned this problem it would have saved me a lot of time and effort, and I'm not the only one who has run into this. It is not, however, considered a bug. See https://bugs.archlinux.org/task/50298. [[User:JimRees|JimRees]] ([[User talk:JimRees|talk]]) 23:15, 29 April 2017 (UTC)<br />
<br />
:This revisions says that: [https://wiki.archlinux.org/index.php?title=Pacman%2FTips_and_tricks&type=revision&diff=475454&oldid=475438]. But to make it more clear: [https://wiki.archlinux.org/index.php?title=Pacman%2FTips_and_tricks&type=revision&diff=475492&oldid=475482] -- [[User:Rdeckard|Rdeckard]] ([[User_talk:Rdeckard|talk]]) 00:13, 30 April 2017 (UTC)<br />
<br />
::Actually, I undid my change since I think that first change is more accurate (mentioning {{ic|/var/cache/pacman/pkg}} and ancestors), so I went back to that but explicitly mentioned {{ic|/var}} as an example. -- [[User:Rdeckard|Rdeckard]] ([[User_talk:Rdeckard|talk]]) 01:11, 30 April 2017 (UTC)<br />
<br />
: Thanks for the background information. I was not aware of the bug report and now clearly understand why you altered the section the way you did. I hope the [https://wiki.archlinux.org/index.php?title=Pacman/Tips_and_tricks&diff=475548&oldid=475495 recent change] is sufficient for you. Since every misbehaving program might leave a system unbootable if it plays a role in the boot process, it should be unnecessary to add this redundant information. However the problem you described is still severe and I hope you agree that the recent edits made to the article do the topic justice. Thanks for clarifying the topic and adding this to the article and sorry for reverting your edits at first. -- [[User:Edh|Edh]] ([[User talk:Edh|talk]]) 21:07, 30 April 2017 (UTC)<br />
<br />
== local repository database extension/compression recommendation ==<br />
<br />
If you opt to not compress a pacman database, the files database can become very large, 10x larger than a gzipped one in my case, which causes issues when trying to update the local pacman files database ({{ic|pacman -Fy}}), since apparently there is a maximum (expected) size. Should we include a warning about uncompressed databases?<br />
<br />
{{unsigned|00:35, 29 January 2019|JoshH100}}<br />
<br />
== Use a new nginx.conf for [[Pacman/Tips_and_tricks#Dynamic_reverse_proxy_cache_using_nginx|Dynamic reverse proxy cache using nginx]] ==<br />
<br />
I propose to replace the [https://gist.github.com/anonymous/97ec4148f643de925e433bed3dc7ee7d current nginx.conf] with an [https://github.com/nastasie-octavian/nginx_pacman_cache_config/blob/master/nginx.conf improved nginx.conf] and update the section. The new config doesn't make the upstream servers directly available on the network and it allows having mirrors with different relative paths to package files. It also removes directives that are not needed and has some other minor cleanups. I've been using a similar config for a few months now without any problems, so I believe it should be fine. [[User:Noctavian|Noctavian]] ([[User talk:Noctavian|talk]]) 16:05, 28 February 2019 (UTC)<br />
<br />
:What do you mean by "The new config doesn't make the upstream servers directly available on the network"? -- [[User:Lahwaacz|Lahwaacz]] ([[User talk:Lahwaacz|talk]]) 20:54, 28 February 2019 (UTC)<br />
<br />
:: In the new config the server blocks for the upstream mirrors are set to listen on 127.0.0.1:800X. Only the computer that is running the nginx cache can send requests to 127.0.0.1. Other computers on the network can't. The current config exposes the upstream mirrors to the network: an nmap scan will show the 8080 port of the cache as open and the ports 8001, 8002, 8003 of the upstream mirrors as open. One can browse to cache.domain.example:8002 and have direct access to whatever package mirror website is used by the cache, bypassing the cache config order and locations. The upstream mirrors don't need to be available to the entire network for the cache to work; they only need to be available to the computer that is hosting the nginx cache. I believe ports should not be left open on the network if they don't have to be open. [[User:Noctavian|Noctavian]] ([[User talk:Noctavian|talk]]) 08:37, 1 March 2019 (UTC)<br />
<br />
: I have written a draft for the section update on my [[User:Noctavian|user page]]. I made some small changes to the config file since last week, added comments and mirror examples, and turned off IPv6 address resolution to prevent some errors that can happen sometimes. Suggestions are welcome. I haven't seen objections to my proposal, so I'm going to wait a few more days for feedback and then update the section on the main page with my draft and the new nginx.conf file if that's ok. [[User:Noctavian|Noctavian]] ([[User talk:Noctavian|talk]]) 11:43, 8 March 2019 (UTC)<br />
<br />
::Feel free to go ahead. -- [[User:Lahwaacz|Lahwaacz]] ([[User talk:Lahwaacz|talk]]) 17:11, 16 March 2019 (UTC)<br />
<br />
== Definition of virtual package ==<br />
Regarding undoing revision 624446. I have found such a term in the [[PKGBUILD]] article. I was specifically thinking about the opencl-driver virtual package, but I did not mention that, to be more generic. [[User:Lahwaacz|Lahwaacz]], what do you think? Maybe it is better to introduce the ''virtual package'' term in the [[Pacman]] article, and reapply my edit? [[User:Ashark|Ashark]] ([[User talk:Ashark|talk]]) 20:54, 9 July 2020 (UTC)<br />
<br />
:Yes, it should have a proper definition instead of "definition" by example. -- [[User:Lahwaacz|Lahwaacz]] ([[User talk:Lahwaacz|talk]]) 06:57, 10 July 2020 (UTC)<br />
<br />
::Please, take a look: [[Pacman#Virtual_packages]]. [[User:Ashark|Ashark]] ([[User talk:Ashark|talk]]) 15:12, 11 July 2020 (UTC)<br />
<br />
:::It's a good start, thanks. I made a small change: [https://wiki.archlinux.org/index.php?title=Pacman&diff=624921&oldid=624800] -- [[User:Lahwaacz|Lahwaacz]] ([[User talk:Lahwaacz|talk]]) 17:46, 12 July 2020 (UTC)<br />
<br />
== Warning about listing changed backup files ==<br />
<br />
I understand it's obvious the command doesn't track files not owned by any package. But the introduction to that part is about backing up your /etc folder. Hence it is inaccurate to imply that this command suffices to list the modified files in /etc, because you (or a package from the AUR) might have created some new files. Shouldn't there be a mention of this inaccuracy?<br />
[[User:Apollo22|Apollo22]] ([[User talk:Apollo22|talk]]) 18:44, 23 October 2020 (UTC)<br />
<br />
:No, it is about [[PKGBUILD#backup|backup files specified in the PKGBUILD]]. It is obvious from the definition that such files must be tracked by pacman. -- [[User:Lahwaacz|Lahwaacz]] ([[User talk:Lahwaacz|talk]]) 18:55, 23 October 2020 (UTC)<br />
<br />
::So should it be considered a bug if those folders are not in the PKGBUILD backup? There seem to be some important packages that do not list those files correctly (for example, systemd doesn't list /etc/systemd/system, or pacman /etc/pacman.d/hooks). This seems like a risky method to back up your /etc folder.<br />
::[[User:Apollo22|Apollo22]] ([[User talk:Apollo22|talk]]) 19:28, 23 October 2020 (UTC)<br />
<br />
:::It's not a bug - the {{ic|backup}} array makes sense only for files owned by the package. You are completely missing the point of the {{ic|backup}} field - it has nothing to do with [[System backup]]. Please read [[PKGBUILD#backup]]. -- [[User:Lahwaacz|Lahwaacz]] ([[User talk:Lahwaacz|talk]]) 19:34, 23 October 2020 (UTC)<br />
<br />
::::Ok, I think I better understand what the backup in the PKGBUILD means. But when the part begins with `If you want to back up your system configuration files`, shouldn't it be expected that the part explains a method to back up your system configuration files, not just the ones specified in the backup fields of the PKGBUILD?<br />
::::[[User:Apollo22|Apollo22]] ([[User talk:Apollo22|talk]])<br />
<br />
:::::I've added an accuracy flag, maybe somebody can clear up the introduction. -- [[User:Lahwaacz|Lahwaacz]] ([[User talk:Lahwaacz|talk]]) 07:55, 25 October 2020 (UTC)<br />
<br />
::::::How about the following sentence: If you want to transfer your system configuration files to another installation, or keep a copy of them, you could grab all files in {{ic|/etc/}}, but most of the time only the files that you have changed are of interest. Pacman names them [[Pacnew and Pacsave files#Package backup files|backup files]], and the modified ones can be viewed with the following command: [[User:Erus Iluvatar|Erus Iluvatar]] ([[User talk:Erus Iluvatar|talk]]) 11:16, 17 December 2021 (UTC)<br />
<br />
== Parallel download natively supported in pacman version 6 ==<br />
That should definitely be mentioned somewhere in the performance section. So powerpill doesn't really seem necessary anymore. Maybe it still does some things differently/better, but I'd still mention that you can now get parallel downloads even without it.<br />
{{Unsigned|08:57, 1 June 2021 (UTC)|Elimik31}}<br />
<br />
== Remove uninstalled packages from the cache with paccache ==<br />
<br />
Just figured the following out, noting here since I'm not sure it's worth noting in the page itself: {{ic|paccache}} won't remove uninstalled packages, even with {{ic|-u}}, unless {{ic|-k}} is given a value lower than the number of instances in cache. In particular, oneshot AUR experiments won't be removed without {{ic|-k0}}.<br />
[[User:Gesh|Gesh]] ([[User talk:Gesh|talk]]) 01:23, 1 November 2021 (UTC)<br />
<br />
:The {{ic|-u}} flag just adds all installed packages to the blacklist. So to remove uninstalled packages from the cache and nothing else, use {{ic|-uk0}}. — [[User:Lahwaacz|Lahwaacz]] ([[User talk:Lahwaacz|talk]]) 06:28, 1 November 2021 (UTC)<br />
<br />
== Additional options needed for mounting overlay of remote pacman pkg cache ==<br />
<br />
"The factual accuracy of this article or section is disputed. Reason: Why is -o index=off -o metacopy=off needed? Is -o redirect_dir=off needed only for this use-case? If not, it should be explained on the overlay filesystem page too."<br />
<br />
My reply: I'm not an expert on use of sshfs for mounting remote filesystems. To me, using sshfs is much simpler than going to all the trouble of setting up the chosen box to serve up {{ic|/var/cache/pacman/pkg}} over, e.g., NFS. I wonder if using fuse.sshfs leads to this problem? That being said, my setup is bog standard, all boxes using {{ic|linux}} kernel from Arch, connected to the same LAN switch with normal IPv4 addressing, e.g., 192.168.1.100, and so on.<br />
<br />
Without using any additional options the problem I encounter is an unusual but familiar one, namely:<br />
$ ls /tmp/pacman_pkg/ > /dev/null<br />
ls: reading directory '/tmp/pacman_pkg/': Stale file handle<br />
<br />
This is obviously unworkable. The minimal options I am able to succeed with are: {{ic|1=-o redirect_dir=off -o index=off}}. Prior to the 5.17 kernel series, I needed {{ic|1=-o index=off -o metacopy=off}} but in my use something then changed which requires {{ic|1=-o redirect_dir=off}} (and {{ic|1=-o metacopy=off}} now comes along for free). Without these options... "Stale file handle".<br />
<br />
I am eliminating additional options from the generic commands even though I expect that anyone who tries to run them as listed will fail due to the file handle reporting as stale. I will add a tip note suggesting these options if problems are encountered. HTH :) [[User:Cmsigler|Cmsigler]] ([[User talk:Cmsigler|talk]]) 16:21, 27 April 2022 (UTC)</div>Cmsiglerhttps://wiki.archlinux.org/index.php?title=Pacman/Tips_and_tricks&diff=725573Pacman/Tips and tricks2022-04-06T12:14:02Z<p>Cmsigler: Update overlay mount with needed options, "index=off", "metacopy=off"; in addition, for me starting with kernel 5.17.1 "redirect_dir=off" is needed, without it, listing the overlay filesystem returns 'Stale file handle' :\</p>
<hr />
<div>{{Lowercase title}}<br />
[[Category:Package manager]]<br />
[[de:Pacman-Tipps]]<br />
[[es:Pacman (Español)/Tips and tricks]]<br />
[[fa:Pacman tips]]<br />
[[fr:Pacman (Français)/Tips and tricks]]<br />
[[ja:Pacman ヒント]]<br />
[[pt:Pacman (Português)/Tips and tricks]]<br />
[[ru:Pacman (Русский)/Tips and tricks]]<br />
[[zh-hans:Pacman (简体中文)/Tips and tricks]]<br />
{{Related articles start}}<br />
{{Related|Mirrors}}<br />
{{Related|Creating packages}}<br />
{{Related articles end}}<br />
For general methods to improve the flexibility of the provided tips or ''pacman'' itself, see [[Core utilities]] and [[Bash]].<br />
<br />
== Maintenance ==<br />
<br />
{{Expansion|{{ic|1=Usage=}} introduced with ''pacman'' 4.2, see [http://allanmcrae.com/2014/12/pacman-4-2-released/]}}<br />
<br />
{{Note|Instead of using ''comm'' (which requires sorted input with ''sort'') in the sections below, you may also use {{ic|grep -Fxf}} or {{ic|grep -Fxvf}}.}}<br />
<br />
See also [[System maintenance]].<br />
<br />
=== Listing packages ===<br />
<br />
==== With version ====<br />
<br />
You may want to get the list of installed packages with their version, which is useful when reporting bugs or discussing installed packages.<br />
<br />
* List all explicitly installed packages: {{ic|pacman -Qe}}.<br />
* List all packages in the [[package group]] named {{ic|''group''}}: {{ic|pacman -Sg ''group''}}<br />
* List all foreign packages (typically manually downloaded and installed or packages removed from the repositories): {{ic|pacman -Qm}}.<br />
* List all native packages (installed from the sync database): {{ic|pacman -Qn}}.<br />
* List all explicitly installed native packages (i.e. present in the sync database) that are not direct or optional dependencies: {{ic|pacman -Qent}}.<br />
* List packages by regex: {{ic|pacman -Qs ''regex''}}.<br />
* List packages by regex with custom output format (needs {{Pkg|expac}}): {{ic|expac -s "%-30n %v" ''regex''}}.<br />
<br />
==== With size ====<br />
<br />
Figuring out which packages are largest can be useful when trying to free space on your hard drive. There are two options here: get the size of individual packages, or get the size of packages and their dependencies.<br />
<br />
===== Individual packages =====<br />
<br />
The following command will list all installed packages and their individual sizes:<br />
<br />
$ LC_ALL=C pacman -Qi | awk '/^Name/{name=$3} /^Installed Size/{print $4$5, name}' | sort -h<br />
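The awk program can be sketched against a canned excerpt of {{ic|pacman -Qi}} output (the package names and sizes below are invented):<br />

```shell
# Simulated `LC_ALL=C pacman -Qi` output for two hypothetical packages.
printf '%s\n' \
  'Name            : foo' \
  'Installed Size  : 4.50 MiB' \
  '' \
  'Name            : bar' \
  'Installed Size  : 120.00 KiB' |
awk '/^Name/{name=$3} /^Installed Size/{print $4$5, name}' | sort -h
```

The awk program remembers the last seen package name and emits it joined with the size; {{ic|sort -h}} then orders the human-readable sizes numerically.<br />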
<br />
===== Packages and dependencies =====<br />
<br />
To list package sizes with their dependencies,<br />
<br />
* Install {{Pkg|expac}} and run {{ic|expac -H M '%m\t%n' {{!}} sort -h}}.<br />
* Run {{Pkg|pacgraph}} with the {{ic|-c}} option.<br />
<br />
To list the download size of several packages (leave {{ic|''packages''}} blank to list all packages):<br />
<br />
$ expac -S -H M '%k\t%n' ''packages''<br />
<br />
To list explicitly installed packages not in the [[meta package]] {{Pkg|base}} nor [[package group]] {{Grp|base-devel}} with size and description:<br />
<br />
$ expac -H M "%011m\t%-20n\t%10d" $(comm -23 <(pacman -Qqen | sort) <({ pacman -Qqg base-devel; expac -l '\n' '%E' base; } | sort | uniq)) | sort -n<br />
<br />
To list the packages marked for upgrade with their download size:<br />
<br />
$ expac -S -H M '%k\t%n' $(pacman -Qqu) | sort -sh<br />
<br />
==== By date ====<br />
<br />
To list the 20 last installed packages with {{Pkg|expac}}, run:<br />
<br />
$ expac --timefmt='%Y-%m-%d %T' '%l\t%n' | sort | tail -n 20<br />
<br />
or, with seconds since the epoch (1970-01-01 UTC):<br />
<br />
$ expac --timefmt=%s '%l\t%n' | sort -n | tail -n 20<br />
<br />
==== Not in a specified group, repository or meta package ====<br />
<br />
{{Note|To get a list of packages installed as dependencies but no longer required by any installed package, see [[#Removing unused packages (orphans)]].<br />
}}<br />
<br />
List explicitly installed packages not in the {{Pkg|base}} [[meta package]]:<br />
<br />
$ comm -23 <(pacman -Qqe | sort) <(expac -l '\n' '%E' base | sort)<br />
<br />
List explicitly installed packages not in the {{Pkg|base}} meta package or {{Grp|base-devel}} [[package group]]:<br />
<br />
$ comm -23 <(pacman -Qqe | sort) <({ pacman -Qqg base-devel; expac -l '\n' '%E' base; } | sort -u)<br />
<br />
List all installed packages unrequired by other packages, and which are not in the {{Pkg|base}} meta package or {{Grp|base-devel}} package group:<br />
<br />
$ comm -23 <(pacman -Qqt | sort) <({ pacman -Qqg base-devel; expac -l '\n' '%E' base; } | sort -u)<br />
<br />
As above, but with descriptions:<br />
<br />
$ expac -H M '%-20n\t%10d' $(comm -23 <(pacman -Qqt | sort) <({ pacman -Qqg base-devel; expac -l '\n' '%E' base; } | sort -u))<br />
<br />
List all installed packages that are ''not'' in the specified repository ''repo_name'':<br />
<br />
$ comm -23 <(pacman -Qq | sort) <(pacman -Sql ''repo_name'' | sort)<br />
<br />
List all installed packages that are in the ''repo_name'' repository:<br />
<br />
$ comm -12 <(pacman -Qq | sort) <(pacman -Sql ''repo_name'' | sort)<br />
<br />
List all packages on the Arch Linux ISO that are not in the {{Pkg|base}} meta package:<br />
<br />
<nowiki>$ comm -23 <(curl https://gitlab.archlinux.org/archlinux/archiso/-/raw/master/configs/releng/packages.x86_64) <(expac -l '\n' '%E' base | sort)</nowiki><br />
<br />
{{Tip|Alternatively, use {{ic|combine}} (instead of {{ic|comm}}) from the {{Pkg|moreutils}} package which has a syntax that is easier to remember. See {{man|1|combine}}.}}<br />
<br />
==== Development packages ====<br />
<br />
To list all development/unstable packages, run:<br />
<br />
$ pacman -Qq | grep -Ee '-(bzr|cvs|darcs|git|hg|svn)$'<br />
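The regular expression only matches names ending in one of the VCS suffixes; a quick check against a made-up package list:<br />

```shell
# Only the -git and -bzr names should survive the filter.
printf '%s\n' firefox linux-git python-requests ranger-bzr vim |
grep -Ee '-(bzr|cvs|darcs|git|hg|svn)$'
```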
<br />
=== Browsing packages ===<br />
<br />
To browse all installed packages with an instant preview of each package:<br />
<br />
$ pacman -Qq | fzf --preview 'pacman -Qil {}' --layout=reverse --bind 'enter:execute(pacman -Qil {} | less)'<br />
<br />
This uses [[fzf]] to present a two-pane view listing all packages with package info shown on the right.<br />
<br />
Enter letters to filter the list of packages; use arrow keys (or {{ic|Ctrl-j}}/{{ic|Ctrl-k}}) to navigate; press {{ic|Enter}} to see package info under ''less''.<br />
<br />
To browse all packages currently known to ''pacman'' (both installed and not yet installed) in a similar way, using fzf, use:<br />
<br />
$ pacman -Slq | fzf --preview 'pacman -Si {}' --layout=reverse<br />
<br />
The navigational keybindings are the same, although {{ic|Enter}} will not work in the same way.<br />
<br />
=== Listing files owned by a package with size ===<br />
<br />
This can come in handy if you have found that a specific package uses a huge amount of space and want to find out which files account for most of it.<br />
<br />
$ pacman -Qlq ''package'' | grep -v '/$' | xargs -r du -h | sort -h<br />
<br />
=== Identify files not owned by any package ===<br />
<br />
If your system has stray files not owned by any package (a common case if you do not [[Enhance system stability#Use the package manager to install software|use the package manager to install software]]), you may want to find such files in order to clean them up.<br />
<br />
One method is to use {{ic|pacreport --unowned-files}} as the root user from {{Pkg|pacutils}} which will list unowned files among other details.<br />
<br />
Another is to list all files of interest and check them against ''pacman'':<br />
<br />
# find /etc /usr /opt | LC_ALL=C pacman -Qqo - 2>&1 >&- >/dev/null | cut -d ' ' -f 5-<br />
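The redirection dance works because ''pacman'' reports unowned files on stderr as {{ic|error: No package owns ''path''}}; the command pipes stderr onward while discarding stdout, and {{ic|cut}} strips the four-word prefix. A minimal sketch of the same idiom, using a hypothetical stand-in function instead of ''pacman'':<br />

```shell
# Stand-in for a command that reports matches on stdout and
# "error: No package owns <path>" messages on stderr (illustrative only).
fake_query() {
    echo 'owned /etc/fstab'                            # stdout: owned file
    echo 'error: No package owns /etc/stray.conf' >&2  # stderr: unowned file
}

# 2>&1 sends stderr into the pipe, >/dev/null discards the original
# stdout, and cut keeps everything from field 5 on (the path).
fake_query 2>&1 >/dev/null | cut -d ' ' -f 5-
```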
<br />
{{Tip|The {{Pkg|lostfiles}} script performs similar steps, but also includes an extensive blacklist to remove common false positives from the output.}}<br />
<br />
=== Tracking unowned files created by packages ===<br />
<br />
Most systems will slowly collect several [http://ftp.rpm.org/max-rpm/s1-rpm-inside-files-list-directives.html#S3-RPM-INSIDE-FLIST-GHOST-DIRECTIVE ghost] files such as state files, logs, indexes, etc. through the course of usual operation.<br />
<br />
{{ic|pacreport}} from {{Pkg|pacutils}} can be used to track these files and their associations via {{ic|/etc/pacreport.conf}} (see {{man|1|pacreport|FILES}}).<br />
<br />
An example may look something like this (abridged):<br />
<br />
{{hc|/etc/pacreport.conf|2=<br />
[Options]<br />
IgnoreUnowned = usr/share/applications/mimeinfo.cache<br />
<br />
[PkgIgnoreUnowned]<br />
alsa-utils = var/lib/alsa/asound.state<br />
bluez = var/lib/bluetooth<br />
ca-certificates = etc/ca-certificates/trust-source/*<br />
dbus = var/lib/dbus/machine-id<br />
glibc = etc/ld.so.cache<br />
grub = boot/grub/*<br />
linux = boot/initramfs-linux.img<br />
pacman = var/lib/pacman/local<br />
update-mime-database = usr/share/mime/magic<br />
}}<br />
<br />
Then, when using {{ic|pacreport --unowned-files}} as the root user, any unowned files will be listed if the associated package is no longer installed (or if any new files have been created).<br />
<br />
Additionally, [https://github.com/CyberShadow/aconfmgr aconfmgr] ({{AUR|aconfmgr-git}}) allows tracking modified and orphaned files using a configuration script.<br />
<br />
=== Removing unused packages (orphans) ===<br />
<br />
For recursively removing orphans and their configuration files:<br />
<br />
# pacman -Qtdq | pacman -Rns -<br />
<br />
If no orphans were found, the output is {{ic|error: argument '-' specified with empty stdin}}. This is expected as no arguments were passed to {{ic|pacman -Rns}}.<br />
<br />
{{Note|The arguments {{ic|-Qt}} list only true orphans. To include packages which are ''optionally'' required by another package, pass the {{ic|-t}} flag twice (''i.e.'', {{ic|-Qtt}}).}}<br />
<br />
=== Removing everything but essential packages ===<br />
<br />
If it is ever necessary to remove all packages except the essential ones, one method is to change the installation reason of the non-essential packages to "installed as a dependency" and then remove all unneeded dependencies.<br />
<br />
First, for all the packages "explicitly installed", change their installation reason to "installed as a dependency":<br />
<br />
# pacman -D --asdeps $(pacman -Qqe)<br />
<br />
Then, change the installation reason to "explicitly installed" of only the essential packages, those you '''do not''' want to remove, in order to avoid targeting them:<br />
<br />
# pacman -D --asexplicit base linux linux-firmware<br />
<br />
{{Note|<br />
* Additional packages can be added to the above command in order to avoid being removed. See [[Installation guide#Install essential packages]] for more info on other packages that may be necessary for a fully functional base system.<br />
* This will also select the bootloader's package for removal. The system should still be bootable, but the boot parameters might not be changeable without it.<br />
}}<br />
<br />
Finally, follow the instructions in [[#Removing unused packages (orphans)]] to remove all packages that are "installed as a dependency".<br />
<br />
=== Getting the dependencies list of several packages ===<br />
<br />
Dependencies are sorted alphabetically and duplicates are removed.<br />
<br />
{{Note|To only consider locally installed packages, use {{ic|pacman -Qi}} instead of {{ic|pacman -Si}}.}}<br />
<br />
$ LC_ALL=C pacman -Si ''packages'' | awk -F'[:<=>]' '/^Depends/ {print $2}' | xargs -n1 | sort -u<br />
<br />
Alternatively, with {{Pkg|expac}}: <br />
<br />
$ expac -l '\n' %E -S ''packages'' | sort -u<br />
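The field splitting in the awk pipeline above can be checked against a canned {{ic|pacman -Si}} excerpt (package names invented); the separator class {{ic|1=[:<=>]}} cuts off version constraints:<br />

```shell
# Simulated "Depends On" lines from `LC_ALL=C pacman -Si` output.
printf '%s\n' \
  'Depends On      : glibc  curl  pacman>=6.0' \
  'Depends On      : glibc  openssl' |
awk -F'[:<=>]' '/^Depends/ {print $2}' | xargs -n1 | sort -u
```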
<br />
=== Listing changed backup files ===<br />
<br />
{{Accuracy|What is the connection of this section to [[System backup]]? Listing modified "backup files" does not show files which are not tracked by ''pacman''.|section=Warning about listing changed backup files}}<br />
<br />
If you want to back up your system configuration files, you could copy all files in {{ic|/etc/}} but usually you are only interested in the files that you have changed. Modified [[Pacnew and Pacsave files#Package backup files|backup files]] can be viewed with the following command:<br />
<br />
# pacman -Qii | awk '/^MODIFIED/ {print $2}'<br />
<br />
Running this command with root permissions will ensure that files readable only by root (such as {{ic|/etc/sudoers}}) are included in the output.<br />
<br />
{{Tip|See [[#Listing all changed files from packages]] to list all changed files ''pacman'' knows about, not only backup files.}}<br />
<br />
=== Back up the pacman database ===<br />
<br />
The following command can be used to back up the local ''pacman'' database:<br />
<br />
$ tar -cjf pacman_database.tar.bz2 /var/lib/pacman/local<br />
<br />
Store the backup ''pacman'' database file on one or more offline media, such as a USB stick, external hard drive, or CD-R.<br />
<br />
The database can be restored by moving the {{ic|pacman_database.tar.bz2}} file into the {{ic|/}} directory and executing the following command:<br />
<br />
# tar -xjvf pacman_database.tar.bz2<br />
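The round trip can be sketched on a throwaway directory standing in for {{ic|/var/lib/pacman/local}}. All paths here are illustrative; the sketch uses {{ic|-C}} with relative paths, whereas the commands above archive the absolute path (GNU tar strips the leading {{ic|/}} on extraction), so the effect is the same:<br />

```shell
# Throwaway stand-in for the local database root.
mkdir -p /tmp/dbroot/var/lib/pacman/local
echo 9 > /tmp/dbroot/var/lib/pacman/local/ALPM_DB_VERSION

# Back up with relative paths so the archive restores cleanly from /:
tar -C /tmp/dbroot -cjf /tmp/pacman_database.tar.bz2 var/lib/pacman/local

# Restore into a fresh root (on a real system: extract from /):
mkdir -p /tmp/restored
tar -C /tmp/restored -xjf /tmp/pacman_database.tar.bz2
cat /tmp/restored/var/lib/pacman/local/ALPM_DB_VERSION
```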
<br />
{{Note|If the ''pacman'' database files are corrupted, and there is no backup file available, there exists some hope of rebuilding the ''pacman'' database. Consult [[#Restore pacman's local database]].}}<br />
<br />
{{Tip|The {{AUR|pakbak-git}} package provides a script and a [[systemd]] service to automate the task. Configuration is possible in {{ic|/etc/pakbak.conf}}.}}<br />
<br />
=== Check changelogs easily ===<br />
<br />
When maintainers update packages, commits are often commented in a useful fashion. Users can quickly check these from the command line by installing {{AUR|pacolog}}. This utility lists recent commit messages for packages from the official repositories or the AUR: run {{ic|pacolog ''package''}}.<br />
<br />
== Installation and recovery ==<br />
<br />
Alternative ways of getting and restoring packages.<br />
<br />
=== Installing packages from a CD/DVD or USB stick ===<br />
<br />
{{Merge|#Custom local repository|Use as an example and avoid duplication}}<br />
<br />
To download packages, or groups of packages:<br />
<br />
# cd ~/Packages<br />
# pacman -Syw --cachedir . base base-devel grub-bios xorg gimp<br />
# repo-add ./custom.db.tar.gz ./*<br />
<br />
By default, ''pacman'' references the host's installation database, so it will not resolve and download dependencies that are already installed on the host. In cases where all packages and their dependencies are wanted, it is recommended to create a temporary blank database and reference it with {{ic|--dbpath}}:<br />
<br />
# mkdir /tmp/blankdb<br />
# pacman -Syw --cachedir . --dbpath /tmp/blankdb base base-devel grub-bios xorg gimp<br />
# repo-add ./custom.db.tar.gz ./*<br />
<br />
Then you can burn the "Packages" folder to a CD/DVD or transfer it to a USB stick, external HDD, etc.<br />
<br />
To install:<br />
<br />
'''1.''' Mount the media:<br />
<br />
# mkdir /mnt/repo<br />
# mount /dev/sr0 /mnt/repo #For a CD/DVD.<br />
# mount /dev/sdxY /mnt/repo #For a USB stick.<br />
<br />
'''2.''' Edit {{ic|pacman.conf}} and add this repository ''before'' the other ones (e.g. extra, core, etc.). This is important: do not just uncomment the example at the bottom. Placing it first ensures that the files from the CD/DVD/USB take precedence over those in the standard repositories:<br />
<br />
{{hc|/etc/pacman.conf|2=<br />
[custom]<br />
SigLevel = PackageRequired<br />
Server = file:///mnt/repo/Packages}}<br />
<br />
'''3.''' Finally, synchronize the ''pacman'' database to be able to use the new repository:<br />
<br />
# pacman -Syu<br />
<br />
=== Custom local repository ===<br />
<br />
Use the ''repo-add'' script included with ''pacman'' to generate a database for a personal repository. Use {{ic|repo-add --help}} for more details on its usage. <br />
A package database is a tar file, optionally compressed. Valid extensions are ''.db'' or ''.files'' followed by an archive extension of ''.tar'', ''.tar.gz'', ''.tar.bz2'', ''.tar.xz'', ''.tar.zst'', or ''.tar.Z''. The file does not need to exist, but all parent directories must exist.<br />
<br />
To add a new package to the database, or to replace the old version of an existing package in the database, run:<br />
<br />
$ repo-add ''/path/to/repo.db.tar.gz /path/to/package-1.0-1-x86_64.pkg.tar.xz''<br />
<br />
The database and the packages do not need to be in the same directory when using ''repo-add'', but keep in mind that when using ''pacman'' with that database, they should be together. Storing all the built packages to be included in the repository in one directory also makes it possible to use shell glob expansion to add or update multiple packages at once:<br />
<br />
$ repo-add ''/path/to/repo.db.tar.gz /path/to/*.pkg.tar.xz''<br />
<br />
{{Warning|''repo-add'' adds the entries into the database in the same order as passed on the command line. If multiple versions of the same package are involved, care must be taken to ensure that the correct version is added last. In particular, note that lexical order used by the shell depends on the locale and differs from the {{man|8|vercmp}} ordering used by ''pacman''.}}<br />
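The difference is easy to see with GNU coreutils' version sort, which approximates (but is not identical to) ''vercmp'' ordering, versus plain lexical order:<br />

```shell
# Lexical C-locale order puts 1.10 before 1.9; version order does not.
printf '%s\n' foo-1.9-1 foo-1.10-1 | LC_ALL=C sort
printf '%s\n' foo-1.9-1 foo-1.10-1 | sort -V
```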
<br />
If you intend to support multiple architectures, precautions should be taken to prevent errors. Each architecture should have its own directory tree:<br />
<br />
{{hc|$ tree ~/customrepo/ {{!}} sed "s/$(uname -m)/''arch''/g"|<br />
/home/archie/customrepo/<br />
└── ''arch''<br />
├── customrepo.db -> customrepo.db.tar.xz<br />
├── customrepo.db.tar.xz<br />
├── customrepo.files -> customrepo.files.tar.xz<br />
├── customrepo.files.tar.xz<br />
└── personal-website-git-b99cce0-1-''arch''.pkg.tar.xz<br />
<br />
1 directory, 5 files<br />
}}<br />
<br />
The ''repo-add'' executable checks whether the package file is appropriate. If it is not, you will run into error messages similar to this:<br />
<br />
==> ERROR: '/home/archie/customrepo/''arch''/foo-''arch''.pkg.tar.xz' does not have a valid database archive extension.<br />
<br />
''repo-remove'' is used in the same way to remove packages from the package database, except that only package names are specified on the command line.<br />
<br />
$ repo-remove ''/path/to/repo.db.tar.gz pkgname''<br />
<br />
Once the local repository database has been created, add the repository to {{ic|pacman.conf}} for each system that is to use the repository. An example of a custom repository is in {{ic|pacman.conf}}. The repository's name is the database filename with the file extension omitted. In the case of the example above, the repository's name would simply be ''repo''. Reference the repository's location using a {{ic|file://}} URL, or via FTP using {{ic|ftp://localhost/path/to/directory}}.<br />
<br />
If willing, add the custom repository to the [[Unofficial user repositories|list of unofficial user repositories]], so that the community can benefit from it.<br />
<br />
=== Network shared pacman cache ===<br />
<br />
{{Merge|Package_Proxy_Cache|Same topic}}<br />
If you happen to run several Arch boxes on your LAN, you can share packages between them to greatly decrease your download times. Keep in mind that you should not share between different architectures (e.g. i686 and x86_64), or you will run into problems.<br />
<br />
==== Read-only cache ====<br />
<br />
{{Note|1=If pacman fails to download 3 packages from the server, it will use another mirror instead. See https://bbs.archlinux.org/viewtopic.php?id=268066.}}<br />
<br />
If you are looking for a quick solution, you can simply run a [https://gist.github.com/willurd/5720255 basic temporary webserver] which other computers can use as their first mirror.<br />
<br />
First of all, make pacman databases available into the folder you will serve:<br />
<br />
# ln -s /var/lib/pacman/sync/*.db /var/cache/pacman/pkg/<br />
<br />
Then start serving this folder. For example, with [[Python]] [https://docs.python.org/3/library/http.server.html#http-server-cli http.server] module:<br />
$ python -m http.server -d /var/cache/pacman/pkg/<br />
<br />
{{Tip|By default, Python {{ic|http.server}} listens on port {{ic|8000}}. To use another port, simply add it as an argument:<br />
<br />
$ python -m http.server -d /var/cache/pacman/pkg/ 8080<br />
}}<br />
<br />
Then [[textedit|edit]] {{ic|/etc/pacman.d/mirrorlist}} on each client machine to add this server as the top entry:<br />
<br />
{{hc|/etc/pacman.d/mirrorlist|2=<br />
Server = http://''server-ip'':''port''<br />
...<br />
}}<br />
<br />
{{Warning|Do '''not''' append {{ic|/repos/$repo/os/$arch}} to this custom server like for other entries, as this hierarchy does not exist and therefore queries will fail.}}<br />
<br />
If looking for a more standalone solution, {{Pkg|darkhttpd}} offers a very minimal webserver. Replace the previous {{ic|python}} command with e.g.:<br />
<br />
$ sudo -u http darkhttpd /var/cache/pacman/pkg --no-server-id<br />
<br />
You could also run darkhttpd as a ''systemd'' service for convenience: see [[Systemd#Writing unit files]].<br />
<br />
{{Pkg|miniserve}}, a small web server written in Rust, can also be used:<br />
<br />
$ miniserve /var/cache/pacman/pkg<br />
<br />
Then edit {{ic|/etc/pacman.d/mirrorlist}} as above with the first URL that ''miniserve'' reports it is serving at.<br />
<br />
If you are already running a web server for some other purpose, you might wish to reuse that as your local repository server instead. For example, if you already serve a site with [[nginx]], you can add an ''nginx'' server block listening on port 8080:<br />
<br />
{{hc|/etc/nginx/nginx.conf|<br />
server {<br />
listen 8080;<br />
root /var/cache/pacman/pkg;<br />
server_name myarchrepo.localdomain;<br />
try_files $uri $uri/;<br />
}<br />
}}<br />
<br />
Remember to [[restart]] {{ic|nginx.service}} after making this change.<br />
<br />
{{Tip|Whichever web server you use, make sure the firewall configuration (if any) allows the configured port to be reached by the desired traffic, and disallows any undesired traffic. See [[Security#Network and firewalls]].}}<br />
<br />
==== Overlay mount of read-only cache ====<br />
<br />
It is possible to use one machine on a local network as a read-only package cache by [[Overlay_filesystem|overlay mounting]] its {{ic|/var/cache/pacman/pkg}} directory. Such a configuration is advantageous if that server has a reasonably comprehensive selection of up-to-date packages installed which are also used by the other machines. This is useful for maintaining a number of machines at the end of a low-bandwidth upstream connection.<br />
<br />
As an example, to use this method:<br />
<br />
# mkdir /tmp/remote_pkg /mnt/workdir_pkg /tmp/pacman_pkg<br />
# sshfs ''remote_username''@''remote_pkgcache_addr'':/var/cache/pacman/pkg /tmp/remote_pkg -C<br />
# mount -t overlay overlay -o redirect_dir=off -o index=off -o metacopy=off -o lowerdir=/tmp/remote_pkg,upperdir=/var/cache/pacman/pkg,workdir=/mnt/workdir_pkg /tmp/pacman_pkg<br />
<br />
{{Note|The working directory must be an empty directory on the same mounted device as the upper directory. See [[Overlay filesystem#Usage]].}}<br />
<br />
After this, run ''pacman'' using the option {{ic|--cachedir /tmp/pacman_pkg}}, e.g.:<br />
<br />
# pacman -Syu --cachedir /tmp/pacman_pkg<br />
<br />
==== Distributed read-only cache ====<br />
<br />
There are Arch-specific tools for automatically discovering other computers on your network offering a package cache. Try {{Pkg|pacredir}}, [[pacserve]], {{AUR|pkgdistcache}}, or {{AUR|paclan}}. pkgdistcache uses Avahi instead of plain UDP which may work better in certain home networks that route instead of bridge between WiFi and Ethernet.<br />
<br />
Historically, there was [https://bbs.archlinux.org/viewtopic.php?id=64391 PkgD] and [https://github.com/toofishes/multipkg multipkg], but they are no longer maintained.<br />
<br />
==== Read-write cache ====<br />
<br />
In order to share packages between multiple computers, simply share {{ic|/var/cache/pacman/}} using any network-based mount protocol. This section shows how to use [[SSHFS]] to share a package cache plus the related library-directories between multiple computers on the same local network. Keep in mind that a network shared cache can be slow depending on the file-system choice, among other factors.<br />
<br />
First, install any network-supporting filesystem packages: {{Pkg|sshfs}}, {{Pkg|curlftpfs}}, {{Pkg|samba}} or {{Pkg|nfs-utils}}.<br />
<br />
{{Tip|<br />
* To use ''sshfs'', consider reading [[Using SSH Keys]].<br />
* By default, ''smbfs'' does not serve filenames that contain colons, which results in the client downloading the offending package afresh. To prevent this, use the {{ic|mapchars}} mount option on the client.<br />
}}<br />
<br />
Then, to share the actual packages, mount {{ic|/var/cache/pacman/pkg}} from the server to {{ic|/var/cache/pacman/pkg}} on every client machine.<br />
<br />
{{Warning|Do not make {{ic|/var/cache/pacman/pkg}} or any of its ancestors (e.g., {{ic|/var}}) a symlink. Pacman expects these to be directories. When ''pacman'' re-installs or upgrades itself, it will remove the symlinks and create empty directories instead. However during the transaction ''pacman'' relies on some files residing there, hence breaking the update process. Refer to {{Bug|50298}} for further details.}}<br />
<br />
==== Two-way with rsync ====<br />
<br />
Another approach in a local environment is [[rsync]]. Choose a server for caching and enable the [[Rsync#As a daemon|rsync daemon]]. On clients synchronize two-way with this share via the rsync protocol. Filenames that contain colons are no problem for the rsync protocol.<br />
<br />
A draft example for a client; using {{ic|uname -m}} within the share name ensures an architecture-dependent sync:<br />
# rsync rsync://server/share_$(uname -m)/ /var/cache/pacman/pkg/ ...<br />
# pacman ...<br />
# paccache ...<br />
# rsync /var/cache/pacman/pkg/ rsync://server/share_$(uname -m)/ ...<br />
<br />
==== Dynamic reverse proxy cache using nginx ====<br />
<br />
[[nginx]] can be used to proxy package requests to official upstream mirrors and cache the results to the local disk. All subsequent requests for that package will be served directly from the local cache, minimizing the amount of internet traffic needed to update a large number of computers. <br />
<br />
In this example, the cache server will run at {{ic|<nowiki>http://cache.domain.example:8080/</nowiki>}} and store the packages in {{ic|/srv/http/pacman-cache/}}. <br />
<br />
Install [[nginx]] on the computer that is going to host the cache. Create the directory for the cache and adjust the permissions so nginx can write files to it:<br />
<br />
# mkdir /srv/http/pacman-cache<br />
# chown http:http /srv/http/pacman-cache<br />
<br />
Use the [https://github.com/nastasie-octavian/nginx_pacman_cache_config/blob/c54eca4776ff162ab492117b80be4df95880d0e2/nginx.conf nginx pacman cache config] as a starting point for {{ic|/etc/nginx/nginx.conf}}. Check that the {{ic|resolver}} directive works for your needs. In the upstream server blocks, configure the {{ic|proxy_pass}} directives with addresses of official mirrors, see examples in the configuration file about the expected format. Once you are satisfied with the configuration file [[Nginx#Running|start and enable nginx]].<br />
<br />
In order to use the cache each Arch Linux computer (including the one hosting the cache) must have the following line at the top of the {{ic|mirrorlist}} file:<br />
<br />
{{hc|/etc/pacman.d/mirrorlist|<nowiki><br />
Server = http://cache.domain.example:8080/$repo/os/$arch<br />
...<br />
</nowiki>}}<br />
<br />
{{Note| You will need to create a method to clear old packages, as the cache directory will continue to grow over time. {{ic|paccache}} (which is provided by {{Pkg|pacman-contrib}}) can be used to automate this using retention criteria of your choosing. For example, {{ic|find /srv/http/pacman-cache/ -type d -exec paccache -v -r -k 2 -c {} \;}} will keep the last 2 versions of packages in your cache directory.}}<br />
<br />
==== Pacoloco proxy cache server ====<br />
<br />
[https://github.com/anatol/pacoloco Pacoloco] is an easy-to-use proxy cache server for ''pacman'' repositories. It also allows [https://github.com/anatol/pacoloco/commit/048b09956b0d8ef71c0ed1f804fd332d9ab5e3c8 automatic prefetching] of the cached packages.<br />
<br />
It can be installed as {{Pkg|pacoloco}}. Open the configuration file and add ''pacman'' mirrors:<br />
<br />
{{hc|/etc/pacoloco.yaml|<nowiki><br />
port: 9129<br />
repos:<br />
mycopy:<br />
urls:<br />
- http://mirror.lty.me/archlinux<br />
- http://mirrors.kernel.org/archlinux<br />
</nowiki>}}<br />
<br />
[[Restart]] {{ic|pacoloco.service}} and the proxy repository will be available at {{ic|http://''myserver'':9129/repo/mycopy}}.<br />
<br />
==== Flexo proxy cache server ====<br />
<br />
[https://github.com/nroi/flexo Flexo] is yet another proxy cache server for ''pacman'' repositories. Flexo is available as {{AUR|flexo-git}}. Once installed, [[start]] the {{ic|flexo.service}} unit.<br />
<br />
Flexo runs on port {{ic|7878}} by default. Enter {{ic|1=Server = http://''myserver'':7878/$repo/os/$arch}} to the top of your {{ic|/etc/pacman.d/mirrorlist}} so that ''pacman'' downloads packages via Flexo.<br />
<br />
==== Synchronize pacman package cache using synchronization programs ====<br />
<br />
Use [[Syncthing]] or [[Resilio Sync]] to synchronize the ''pacman'' cache folders (i.e. {{ic|/var/cache/pacman/pkg}}).<br />
<br />
==== Preventing unwanted cache purges ====<br />
<br />
By default, {{ic|pacman -Sc}} removes package tarballs from the cache that correspond to packages not installed on the machine the command was issued on. Because ''pacman'' cannot predict which packages are installed on all machines sharing the cache, it will end up deleting files that should not be deleted.<br />
<br />
To clean up the cache so that only ''outdated'' tarballs are deleted, add this entry in the {{ic|[options]}} section of {{ic|/etc/pacman.conf}}:<br />
<br />
CleanMethod = KeepCurrent<br />
<br />
=== Recreate a package from the file system ===<br />
<br />
To recreate a package from the file system, use {{AUR|fakepkg}}. Files from the system are taken as they are, hence any modifications will be present in the assembled package. Distributing the recreated package is therefore discouraged; see [[ABS]] and [[Arch Linux Archive]] for alternatives.<br />
<br />
=== List of installed packages ===<br />
<br />
Keeping a list of all explicitly installed packages can be useful to back up a system or to speed up the installation of a new one:<br />
<br />
$ pacman -Qqe > pkglist.txt<br />
<br />
{{Note|<br />
* With option {{ic|-t}}, packages that are already required by other explicitly installed packages are not listed. When reinstalling from this list, they will be installed, but only as dependencies.<br />
* With option {{ic|-n}}, foreign packages (e.g. from the [[AUR]]) are omitted from the list.<br />
* Use {{ic|comm -13 <(pacman -Qqdt {{!}} sort) <(pacman -Qqdtt {{!}} sort) > optdeplist.txt}} to also create a list of the installed optional dependencies which can be reinstalled with {{ic|--asdeps}}.<br />
* Use {{ic|pacman -Qqem > foreignpkglist.txt}} to create the list of AUR and other foreign packages that have been explicitly installed.}}<br />
<br />
To keep an up-to-date list of explicitly installed packages (e.g. in combination with a versioned {{ic|/etc/}}), you can set up a [[Pacman#Hooks|hook]]. Example:<br />
<br />
[Trigger]<br />
Operation = Install<br />
Operation = Remove<br />
Type = Package<br />
Target = *<br />
<br />
[Action]<br />
When = PostTransaction<br />
Exec = /bin/sh -c '/usr/bin/pacman -Qqe > /etc/pkglist.txt'<br />
<br />
=== Install packages from a list ===<br />
<br />
To install packages from a previously saved list of packages, while not reinstalling previously installed packages that are already up-to-date, run:<br />
<br />
# pacman -S --needed - < pkglist.txt<br />
<br />
However, the list likely contains foreign packages, such as those from the AUR or installed locally. To filter the foreign packages out of the list, the previous command line can be refined as follows:<br />
<br />
# pacman -S --needed $(comm -12 <(pacman -Slq | sort) <(sort pkglist.txt))<br />
<br />
Finally, to make sure the installed packages on your system match the list, remove all packages not mentioned in it:<br />
<br />
# pacman -Rsu $(comm -23 <(pacman -Qq | sort) <(sort pkglist.txt))<br />
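The set logic of the two {{ic|comm}} invocations can be verified with small stand-in lists (the package names are invented, and the lists are written pre-sorted, as {{ic|comm}} requires):<br />

```shell
# Stand-ins for repo contents, installed set, and the saved list.
printf '%s\n' bash linux nano vim > /tmp/repo.txt      # cf. pacman -Slq | sort
printf '%s\n' bash linux yay      > /tmp/installed.txt # cf. pacman -Qq | sort
printf '%s\n' bash linux vim      > /tmp/pkglist.txt   # saved list, sorted

# Packages from the list that exist in the repos (what -S would receive):
comm -12 /tmp/repo.txt /tmp/pkglist.txt

# Installed packages absent from the list (what -Rsu would receive):
comm -23 /tmp/installed.txt /tmp/pkglist.txt
```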
<br />
{{Tip|These tasks can be automated. See {{AUR|bacpac}}, {{AUR|packup}}, {{AUR|pacmanity}}, and {{AUR|pug}} for examples.}}<br />
<br />
=== Listing all changed files from packages ===<br />
<br />
If you are suspecting file corruption (e.g. by software/hardware failure), but are unsure if files were corrupted, you might want to compare with the hash sums in the packages. This can be done with {{Pkg|pacutils}}:<br />
<br />
# paccheck --md5sum --quiet<br />
<br />
For recovery of the database see [[#Restore pacman's local database]]. The {{ic|mtree}} files can also be [[#Viewing a single file inside a .pkg file|extracted as {{ic|.MTREE}} from the respective package files]].<br />
<br />
{{Note|This should '''not''' be used as is when suspecting malicious changes! In this case security precautions such as using a live medium and an independent source for the hash sums are advised.}}<br />
<br />
=== Reinstalling all packages ===<br />
<br />
To reinstall all native packages, use:<br />
<br />
# pacman -Qqn | pacman -S -<br />
<br />
Foreign (AUR) packages must be reinstalled separately; you can list them with {{ic|pacman -Qqm}}.<br />
<br />
Pacman preserves the [[installation reason]] by default.<br />
<br />
{{Warning|To force all packages to be overwritten, use {{ic|1=--overwrite=*}}, though this should be an absolute last resort. See [[System maintenance#Avoid certain pacman commands]].}}<br />
<br />
=== Restore pacman's local database ===<br />
<br />
See [[pacman/Restore local database]].<br />
<br />
=== Recovering a USB key from existing install ===<br />
<br />
If you have Arch installed on a USB key and manage to mess it up (e.g. by removing it while it is still being written to), it is possible to reinstall all the packages and hopefully get it back up and working again (assuming the USB key is mounted at {{ic|/newarch}}):<br />
<br />
# pacman -S $(pacman -Qq --dbpath /newarch/var/lib/pacman) --root /newarch --dbpath /newarch/var/lib/pacman<br />
<br />
=== Viewing a single file inside a .pkg file ===<br />
<br />
For example, if you want to see the contents of {{ic|/etc/systemd/logind.conf}} supplied within the {{Pkg|systemd}} package:<br />
<br />
$ bsdtar -xOf /var/cache/pacman/pkg/systemd-204-3-x86_64.pkg.tar.xz etc/systemd/logind.conf<br />
<br />
Or you can use {{Pkg|vim}} to browse the archive:<br />
<br />
$ vim /var/cache/pacman/pkg/systemd-204-3-x86_64.pkg.tar.xz<br />
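<br />
The single-member extraction can be tried on a throwaway archive; GNU tar accepts the same {{ic|-xOf}} flags as bsdtar, and the file names and contents below are made up for illustration:<br />
<br />
```shell
# Build a tiny archive and stream one member to stdout, mirroring the
# bsdtar invocation above (GNU tar shown; bsdtar behaves the same).
mkdir -p etc/systemd
echo 'KillUserProcesses=no' > etc/systemd/logind.conf
tar -cf demo.pkg.tar etc/systemd/logind.conf

tar -xOf demo.pkg.tar etc/systemd/logind.conf
# → KillUserProcesses=no
```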
<br />
=== Find applications that use libraries from older packages ===<br />
<br />
Already running processes do not automatically notice changes caused by updates. Instead, they continue using the old library versions, which may be undesirable due to security vulnerabilities, other bugs, or version incompatibilities.<br />
<br />
Processes depending on updated libraries may be found using either {{pkg|htop}}, which highlights the names of the affected programs, or with a snippet based on {{pkg|lsof}}, which also prints the names of the libraries:<br />
<br />
# lsof +c 0 | grep -w DEL | awk '1 { print $1 ": " $NF }' | sort -u<br />
<br />
This solution only detects files that are normally kept open by running processes, which essentially limits it to shared libraries ({{ic|.so}} files). It may miss other dependencies, such as those of Java or Python applications.<br />
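<br />
The awk/sort stage of the snippet can be illustrated on canned lsof-style lines; the process and library names below are made up:<br />
<br />
```shell
# Two DEL records from the same process collapse to one summary line:
# the awk program prints "command: path" and sort -u removes duplicates.
printf '%s\n' \
  'firefox 1234 user DEL REG 8,1 99 /usr/lib/libfoo.so' \
  'firefox 1235 user DEL REG 8,1 99 /usr/lib/libfoo.so' |
  awk '1 { print $1 ": " $NF }' | sort -u
# → firefox: /usr/lib/libfoo.so
```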
<br />
=== Installing only content in required languages ===<br />
<br />
Many packages attempt to install documentation and translations in several languages. Some programs, such as {{AUR|localepurge}}, are designed to remove these unnecessary files by deleting unneeded locale files after a package is installed. A more direct approach is the {{ic|NoExtract}} directive in {{ic|pacman.conf}}, which prevents these files from ever being installed.<br />
<br />
{{Warning|1=Some users noted that removing locales has resulted in [[Special:Permalink/460285#Dangerous NoExtract example|unintended consequences]], even under [https://bbs.archlinux.org/viewtopic.php?id=250846 Xorg].}}<br />
<br />
The example below restricts installation to English (US) files, or none at all:<br />
<br />
{{hc|/etc/pacman.conf|2=<br />
NoExtract = usr/share/help/* !usr/share/help/C/*<br />
NoExtract = usr/share/gtk-doc/html/*<br />
NoExtract = usr/share/locale/* usr/share/X11/locale/*/* usr/share/i18n/locales/* opt/google/chrome/locales/* !usr/share/X11/locale/C/*<br />
NoExtract = !*locale*/en*/* !usr/share/*locale*/locale.*<br />
NoExtract = !usr/share/*locales/en_?? !usr/share/*locales/i18n* !usr/share/*locales/iso*<br />
NoExtract = usr/share/i18n/charmaps/* !usr/share/i18n/charmaps/UTF-8.gz<br />
NoExtract = !usr/share/*locales/trans*<br />
NoExtract = usr/share/man/* !usr/share/man/man*<br />
NoExtract = usr/share/vim/vim*/lang/*<br />
NoExtract = usr/lib/libreoffice/help/en-US/*<br />
NoExtract = usr/share/kbd/locale/*<br />
NoExtract = usr/share/*/translations/*.qm usr/share/*/nls/*.qm usr/share/qt/translations/*.pak !*/en-US.pak # Qt apps<br />
NoExtract = usr/share/*/locales/*.pak opt/*/locales/*.pak usr/lib/*/locales/*.pak !*/en-US.pak # Electron apps<br />
NoExtract = opt/onlyoffice/desktopeditors/dictionaries/* !opt/onlyoffice/desktopeditors/dictionaries/en_US/*<br />
NoExtract = opt/onlyoffice/desktopeditors/editors/web-apps/apps/*/main/locale/* !*/en.json<br />
NoExtract = opt/onlyoffice/desktopeditors/editors/web-apps/apps/*/main/resources/help/* !*/help/en/*<br />
NoExtract = opt/onlyoffice/desktopeditors/converter/empty/*/*<br />
NoExtract = usr/share/ibus/dicts/emoji-*.dict !usr/share/ibus/dicts/emoji-en.dict<br />
}}<br />
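<br />
{{ic|NoExtract}} entries are shell-style globs matched against each file path; patterns are checked in order, later matches override earlier ones, and a leading {{ic|!}} re-includes a file. The toy shell function below sketches that logic with illustrative patterns and paths; it is not pacman's actual implementation:<br />
<br />
```shell
# Toy model of NoExtract matching: patterns are checked in order and the
# last one that matches decides; a leading "!" re-includes the file.
# (Illustrative sketch only -- not pacman's actual implementation.)
noextract_verdict() {
    path=$1; shift
    verdict=extract
    for pat in "$@"; do
        neg=0
        case $pat in !*) neg=1; pat=${pat#!} ;; esac
        case $path in
            $pat) if [ "$neg" -eq 1 ]; then verdict=extract; else verdict=skip; fi ;;
        esac
    done
    echo "$verdict"
}

noextract_verdict usr/share/locale/de/p.mo 'usr/share/locale/*' '!*locale*/en*/*'
# → skip
noextract_verdict usr/share/locale/en_US/p.mo 'usr/share/locale/*' '!*locale*/en*/*'
# → extract
```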
<br />
== Performance ==<br />
<br />
=== Download speeds ===<br />
<br />
When downloading packages, ''pacman'' uses the mirrors in the order they appear in {{ic|/etc/pacman.d/mirrorlist}}. However, the mirror at the top of the list may not be the fastest for you. To select a faster mirror, see [[Mirrors]].<br />
<br />
Pacman's speed in downloading packages can also be improved by using a different application to download packages, instead of ''pacman''<nowiki/>'s built-in file downloader, or by [[pacman#Enabling parallel downloads|enabling parallel downloads]].<br />
<br />
In all cases, make sure you have the latest ''pacman'' before making any modifications:<br />
<br />
# pacman -Syu<br />
<br />
==== Powerpill ====<br />
<br />
[[Powerpill]] is a ''pacman'' wrapper that uses parallel and segmented downloading to try to speed up downloads for ''pacman''.<br />
<br />
==== wget ====<br />
<br />
''wget'' is also handy if you need more powerful proxy settings than ''pacman''<nowiki/>'s built-in capabilities.<br />
<br />
To use {{ic|wget}}, first [[install]] the {{Pkg|wget}} package, then modify {{ic|/etc/pacman.conf}} by uncommenting the following line in the {{ic|[options]}} section:<br />
<br />
XferCommand = /usr/bin/wget --passive-ftp --show-progress -c -q -N %u<br />
<br />
Instead of uncommenting the {{ic|wget}} parameters in {{ic|/etc/pacman.conf}}, you can also modify the {{ic|wget}} configuration file directly (the system-wide file is {{ic|/etc/wgetrc}}, per user files are {{ic|$HOME/.wgetrc}}).<br />
<br />
==== aria2 ====<br />
<br />
[[aria2]] is a lightweight download utility with support for resumable and segmented HTTP/HTTPS and FTP downloads. It can open multiple simultaneous connections to an Arch mirror, which should increase download speeds for both file and package retrieval.<br />
<br />
{{Note|Using aria2c in ''pacman''<nowiki/>'s XferCommand will '''not''' result in parallel downloads of multiple packages. Pacman invokes the XferCommand with a single package at a time and waits for it to complete before invoking the next. To download multiple packages in parallel, see [[Powerpill]].}}<br />
<br />
Install {{Pkg|aria2}}, then edit {{ic|/etc/pacman.conf}} by adding the following line to the {{ic|[options]}} section:<br />
<br />
XferCommand = /usr/bin/aria2c --allow-overwrite=true --continue=true --file-allocation=none --log-level=error --max-tries=2 --max-connection-per-server=2 --max-file-not-found=5 --min-split-size=5M --no-conf --remote-time=true --summary-interval=60 --timeout=5 --dir=/ --out %o %u<br />
<br />
{{Tip|1=[https://bbs.archlinux.org/viewtopic.php?pid=1491879#p1491879 This alternative configuration for using pacman with aria2] tries to simplify configuration and adds more configuration options.}}<br />
<br />
See {{man|1|aria2c|OPTIONS}} for the aria2c options used above.<br />
<br />
* {{ic|-d, --dir}}: The directory to store the downloaded file(s) as specified by ''pacman''.<br />
* {{ic|-o, --out}}: The output file name(s) of the downloaded file(s). <br />
* {{ic|%o}}: Variable which represents the local filename(s) as specified by ''pacman''.<br />
* {{ic|%u}}: Variable which represents the download URL as specified by ''pacman''.<br />
<br />
==== Other applications ====<br />
<br />
Other download applications can be used with ''pacman''. They are listed below with their associated {{ic|XferCommand}} settings:<br />
<br />
* {{ic|snarf}}: {{ic|1=XferCommand = /usr/bin/snarf -N %u}}<br />
* {{ic|lftp}}: {{ic|1=XferCommand = /usr/bin/lftp -c pget %u}}<br />
* {{ic|axel}}: {{ic|1=XferCommand = /usr/bin/axel -n 2 -v -a -o %o %u}}<br />
* {{ic|hget}}: {{ic|1=XferCommand = /usr/bin/hget %u -n 2 -skip-tls false}} (please read the [https://github.com/huydx/hget documentation on the GitHub project page] for more info)<br />
* {{ic|saldl}}: {{ic|1=XferCommand = /usr/bin/saldl -c6 -l4 -s2m -o %o %u}} (please read the [https://saldl.github.io documentation on the project page] for more info)<br />
<br />
== Utilities ==<br />
<br />
* {{App|Lostfiles|Script that identifies files not owned by any package.|https://github.com/graysky2/lostfiles|{{Pkg|lostfiles}}}}<br />
* {{App|pacutils|Helper library for libalpm based programs.|https://github.com/andrewgregory/pacutils|{{Pkg|pacutils}}}}<br />
* {{App|[[pkgfile]]|Tool that finds what package owns a file.|https://github.com/falconindy/pkgfile|{{Pkg|pkgfile}}}}<br />
* {{App|pkgtools|Collection of scripts for Arch Linux packages.|https://github.com/Daenyth/pkgtools|{{AUR|pkgtools}}}}<br />
* {{App|pkgtop|Interactive package manager and resource monitor designed for GNU/Linux.|https://github.com/orhun/pkgtop|{{AUR|pkgtop-git}}}}<br />
* {{App|[[Powerpill]]|Uses parallel and segmented downloading through [[aria2]] and [[Reflector]] to try to speed up downloads for ''pacman''.|https://xyne.dev/projects/powerpill/|{{AUR|powerpill}}}}<br />
* {{App|repoctl|Tool to help manage local repositories.|https://github.com/cassava/repoctl|{{AUR|repoctl}}}}<br />
* {{App|repose|An Arch Linux repository building tool.|https://github.com/vodik/repose|{{Pkg|repose}}}}<br />
* {{App|[[Snapper#Wrapping_pacman_transactions_in_snapshots|snap-pac]]|Make ''pacman'' automatically use snapper to create pre/post snapshots like openSUSE's YaST.|https://github.com/wesbarnett/snap-pac|{{Pkg|snap-pac}}}}<br />
* {{App|vrms-arch|A virtual Richard M. Stallman to tell you which non-free packages are installed.|https://github.com/orospakr/vrms-arch|{{AUR|vrms-arch-git}}}}<br />
<br />
=== Graphical ===<br />
<br />
{{Warning|PackageKit opens up system permissions by default, and is otherwise not recommended for general usage. See {{Bug|50459}} and {{Bug|57943}}.}}<br />
<br />
* {{App|Apper|Qt 5 application and package manager using PackageKit written in C++. Supports [https://www.freedesktop.org/wiki/Distributions/AppStream/ AppStream metadata].|https://userbase.kde.org/Apper|{{Pkg|apper}}}}<br />
* {{App|Deepin App Store|Third party app store for DDE built with DTK, using PackageKit. Supports [https://www.freedesktop.org/wiki/Distributions/AppStream/ AppStream metadata].|https://github.com/dekzi/dde-store|{{Pkg|deepin-store}}}}<br />
* {{App|Discover|Qt 5 application manager using PackageKit written in C++/QML. Supports [https://www.freedesktop.org/wiki/Distributions/AppStream/ AppStream metadata], [[Flatpak]] and [[fwupd|firmware updates]]. |https://userbase.kde.org/Discover|{{Pkg|discover}}}}<br />
* {{App|GNOME PackageKit|GTK 3 package manager using PackageKit written in C.|https://freedesktop.org/software/PackageKit/|{{Pkg|gnome-packagekit}}}}<br />
* {{App|GNOME Software|GTK 3 application manager using PackageKit written in C. Supports [https://www.freedesktop.org/wiki/Distributions/AppStream/ AppStream metadata], [[Flatpak]] and [[fwupd|firmware updates]]. |https://wiki.gnome.org/Apps/Software|{{Pkg|gnome-software}}}}<br />
* {{App|pcurses|Curses TUI ''pacman'' wrapper written in C++.|https://github.com/schuay/pcurses|{{AUR|pcurses}}}}<br />
* {{App|tkPacman|Tk pacman wrapper written in Tcl.|https://sourceforge.net/projects/tkpacman|{{AUR|tkpacman}}}}</div>Cmsiglerhttps://wiki.archlinux.org/index.php?title=User:Cmsigler/Wireguard_Configuration_Guide&diff=724288User:Cmsigler/Wireguard Configuration Guide2022-03-25T12:28:50Z<p>Cmsigler: Update configurations to support dual stack networking with IPv6 as equal protocol</p>
<hr />
<div>My Personal Step-by-step Guide to Wireguard Setup, Configuration and Operation<br />
<br />
CMS, 2022/03/14<br />
<br />
{{note|These procedures have been developed and deployed on an Arch Linux installation. Other distributions and environments will require modifications to the steps below. YMMV}}<br />
{{note|For information on WireGuard under Arch, see the [[WireGuard|Arch Linux WireGuard page]].}}<br />
<br />
== Nomenclature ==<br />
<br />
* Gateway peer: Wireguard "server" peer connected to public Internet<br />
* VPN peer: Wireguard "client" peer; may be located behind, e.g., a NAT router<br />
<br />
== Initial Setup ==<br />
<br />
=== Requirements ===<br />
<br />
* Install and use kernel with CONFIG_WIREGUARD<br />
* Install wireguard-tools<br />
<br />
=== Pre-configuration ===<br />
<br />
* Generate keys for each peer [gateway = Gateway peer; vpn = VPN peer]<br />
<br />
$ cd ~/wireguard_config<br />
$ (umask 0077; wg genkey > gateway.key)<br />
$ wg pubkey < gateway.key > gateway.pub<br />
$ (umask 0077; wg genkey > vpn.key)<br />
$ wg pubkey < vpn.key > vpn.pub<br />
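<br />
A WireGuard key file holds the base64 encoding of 32 random bytes (44 characters). A quick sanity check using only coreutils follows; {{ic|demo.key}} is a made-up stand-in, not a real {{ic|wg genkey}} output:<br />
<br />
```shell
# Check that a key file decodes to exactly 32 bytes.
wg_key_ok() {
    [ "$(base64 -d "$1" 2>/dev/null | wc -c)" -eq 32 ]
}

head -c 32 /dev/urandom | base64 > demo.key   # stand-in for `wg genkey`
wg_key_ok demo.key && echo 'demo.key: plausible WireGuard key'
```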
<br />
* Optional: Generate pre-shared keys for each peer-to-peer link pair<br />
<br />
$ (umask 0077; wg genpsk > gateway-vpn.psk)<br />
<br />
* Optional: On gateway peer, set up DNS server for wireguard peers using dnsmasq as server<br />
** Install dnsmasq<br />
** Edit /etc/dnsmasq.conf<br />
*** Uncomment domain-needed, bogus-priv, bind-interfaces<br />
*** Set "interface=wg0"<br />
*** Set "listen-address=::1,127.0.0.1,10.0.0.1,2001:db8:1234:5678::1,fd89:abc1:def2:1::1"<br />
*** Optional: Set "cache-size=1000"<br />
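<br />
Collected, the dnsmasq settings above amount to the following; the addresses are the example values used throughout this guide:<br />
<br />
{{hc|/etc/dnsmasq.conf|<nowiki># Relevant lines only<br />
domain-needed<br />
bogus-priv<br />
bind-interfaces<br />
interface=wg0<br />
listen-address=::1,127.0.0.1,10.0.0.1,2001:db8:1234:5678::1,fd89:abc1:def2:1::1<br />
# Optional:<br />
cache-size=1000</nowiki>}}<br />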
<br />
== Configuration for operation via wg-quick ==<br />
<br />
Example -- Wireguard VPN gateway:<br />
<br />
=== Wireguard configuration ===<br />
<br />
On gateway peer:<br />
{{hc|/etc/wireguard/wg0.conf|<nowiki>[Interface]<br />
Address = 10.0.0.1/24, 2001:db8:1234:5678::1/64, fd89:abc1:def2:1::1/64<br />
ListenPort = 51871<br />
PrivateKey = # GATEWAY_PEER_PRIVATE_KEY<br />
<br />
[Peer]<br />
PublicKey = # VPN_PEER_PUBLIC_KEY<br />
PresharedKey = # GATEWAY_PEER-VPN_PEER-PRESHARED_KEY<br />
AllowedIPs = 10.0.0.2/32, 2001:db8:1234:5678::2/128, fd89:abc1:def2:1::2/128</nowiki>}}<br />
<br />
On VPN peer:<br />
{{hc|/etc/wireguard/wg0.conf|<nowiki>[Interface]<br />
Address = 10.0.0.2/32, 2001:db8:1234:5678::2/128, fd89:abc1:def2:1::2/128<br />
ListenPort = 51902<br />
PrivateKey = # VPN_PEER_PRIVATE_KEY<br />
DNS = 2001:db8:1234:5678::1, 10.0.0.1<br />
<br />
[Peer]<br />
PublicKey = # GATEWAY_PEER_PUBLIC_KEY<br />
PresharedKey = # GATEWAY_PEER-VPN_PEER-PRESHARED_KEY<br />
AllowedIPs = 0.0.0.0/0, ::/0<br />
Endpoint = 198.51.100.49:51871</nowiki>}}<br />
<br />
=== Firewall/filtering configuration (using nftables) ===<br />
<br />
On Gateway peer:<br />
<br />
* Input filter<br />
{{bc|<nowiki>define upstream-if = ens0<br />
define vpn-if = wg0<br />
define mgmt-host = 203.0.113.51<br />
define ssh-port = 22<br />
define vpn-port = 51871<br />
#<br />
table inet inet-local-table {<br />
chain inet-local-input {<br />
type filter hook input priority filter<br />
policy drop<br />
# Accept localhost traffic<br />
iif lo accept<br />
# Bad TCP --> reject network scanning<br />
iif $upstream-if tcp flags & (fin|syn) == (fin|syn) counter drop<br />
iif $upstream-if tcp flags & (syn|rst) == (syn|rst) counter drop<br />
iif $upstream-if tcp flags & (fin|syn|rst|psh|ack|urg) == 0 counter drop<br />
iif $upstream-if tcp flags & (fin|syn|rst|psh|ack|urg) == (fin|psh|urg) counter drop<br />
# Accept Wireguard inbound UDP traffic from peer to VPN port<br />
iif $upstream-if udp dport $vpn-port accept<br />
# Accept ICMP from Wireguard peers<br />
iifname $vpn-if ip protocol icmp limit rate 5/second accept<br />
iifname $vpn-if meta l4proto ipv6-icmp limit rate 5/second accept<br />
# Allow DNS from Wireguard peers<br />
iifname $vpn-if udp dport 53 accept<br />
iifname $vpn-if tcp dport 53 accept<br />
# Remaining input from VPN interface to VPN server (local) prohibited<br />
# -- default policy drop<br />
iifname $vpn-if counter drop<br />
# Drop invalid (untracked?) packets<br />
iif $upstream-if ct state invalid counter drop<br />
# Accept established and related tracked connections<br />
iif $upstream-if ct state {established, related} accept<br />
# Allow connection from given mgmt host on given ssh port<br />
iif $upstream-if ip saddr $mgmt-host tcp dport $ssh-port accept<br />
# Limit ICMP packets accepted<br />
iif $upstream-if ip protocol icmp limit rate 5/second accept<br />
iif $upstream-if meta l4proto ipv6-icmp limit rate 5/second accept<br />
# Count traffic dropped by default policy<br />
counter drop<br />
}<br />
}</nowiki>}}<br />
<br />
* Forward filter<br />
{{bc|<nowiki>define upstream-if = ens0<br />
define vpn-if = wg0<br />
#<br />
table inet inet-local-table {<br />
chain inet-local-forward {<br />
type filter hook forward priority filter<br />
policy drop<br />
# Drop IP forward for upstream invalid packets<br />
iif $upstream-if ct state invalid counter drop<br />
# Accept IP forward for upstream established and related tracked connections<br />
iif $upstream-if ct state {established, related} accept<br />
# Accept all VPN traffic to be forwarded upstream<br />
iifname $vpn-if oif $upstream-if accept<br />
# Count traffic dropped by default policy<br />
counter drop<br />
}<br />
}</nowiki>}}<br />
<br />
* Address translation (NAT) filter<br />
{{bc|<nowiki>define upstream-if = ens0<br />
define vpn-if = wg0<br />
#<br />
table inet inet-local-table {<br />
chain inet-local-nat {<br />
type nat hook postrouting priority srcnat<br />
policy accept<br />
# NAT/masquerade all traffic coming from VPN interface, and count<br />
iifname $vpn-if oif $upstream-if meta protocol ip counter masquerade<br />
}<br />
}</nowiki>}}<br />
<br />
=== Packet forwarding configuration ===<br />
<br />
On Gateway peer:<br />
<br />
* sysctl configuration<br />
{{hc|/etc/sysctl.d/30-ipv4_forward.conf|<nowiki>net.ipv4.ip_forward=1<br />
net.ipv4.conf.default.forwarding=1<br />
net.ipv4.conf.all.forwarding=1<br />
net.ipv4.conf.ens3.forwarding=1<br />
net.ipv4.conf.wg0.forwarding=1</nowiki>}}<br />
<br />
{{hc|/etc/sysctl.d/30-ipv6_forward.conf|<nowiki>net.ipv6.conf.default.accept_ra = 2<br />
net.ipv6.conf.all.accept_ra = 2<br />
net.ipv6.conf.ens3.accept_ra = 2<br />
net.ipv6.conf.wg0.accept_ra = 2<br />
net.ipv6.conf.default.forwarding=1<br />
net.ipv6.conf.all.forwarding=1<br />
net.ipv6.conf.ens3.forwarding=1<br />
net.ipv6.conf.wg0.forwarding=1</nowiki>}}<br />
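<br />
After loading these files (e.g. with {{ic|sysctl --system}}), the effective values can be confirmed via {{ic|/proc/sys}}; keys missing on a given system (e.g. with IPv6 disabled) are simply skipped:<br />
<br />
```shell
# Print the current value of each forwarding sysctl that exists.
for key in net.ipv4.ip_forward net.ipv4.conf.all.forwarding \
           net.ipv6.conf.all.forwarding; do
    path=/proc/sys/$(printf '%s' "$key" | tr . /)
    if [ -r "$path" ]; then
        printf '%s = %s\n' "$key" "$(cat "$path")"
    fi
done
```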
<br />
== Configuration for operation via systemd-networkd ==<br />
<br />
Example -- Wireguard VPN gateway:<br />
<br />
=== /etc/systemd/network configuration ===<br />
<br />
On gateway peer:<br />
{{hc|/etc/systemd/network/99-wg0.netdev|<nowiki>[NetDev]<br />
Name=wg0<br />
Kind=wireguard<br />
Description=WireGuard tunnel wg0<br />
<br />
[WireGuard]<br />
ListenPort=51871<br />
PrivateKey=#GATEWAY_PEER_PRIVATE_KEY<br />
<br />
[WireGuardPeer]<br />
PublicKey=#VPN_PEER_PUBLIC_KEY<br />
PresharedKey=#GATEWAY_PEER-VPN_PEER-PRESHARED_KEY<br />
AllowedIPs=10.0.0.2/32<br />
AllowedIPs=2001:db8:1234:5678::2/128, fd89:abc1:def2:1::2/128</nowiki>}}<br />
<br />
On gateway peer:<br />
{{hc|/etc/systemd/network/99-wg0.network|<nowiki>[Match]<br />
Name=wg0<br />
<br />
[Network]<br />
Address=10.0.0.1/24<br />
Address=2001:db8:1234:5678::1/64<br />
Address=fd89:abc1:def2:1::1/64<br />
IPForward=yes<br />
IPMasquerade=ipv4<br />
# or<br />
#IPMasquerade=both</nowiki>}}<br />
<br />
On VPN peer:<br />
{{hc|/etc/systemd/network/99-wg0.netdev|<nowiki>[NetDev]<br />
Name=wg0<br />
Kind=wireguard<br />
Description=WireGuard tunnel wg0<br />
<br />
[WireGuard]<br />
ListenPort=51902<br />
PrivateKey=#VPN_PEER_PRIVATE_KEY<br />
FirewallMark=0x89ab<br />
<br />
[WireGuardPeer]<br />
PublicKey=#GATEWAY_PEER_PUBLIC_KEY<br />
PresharedKey=#GATEWAY_PEER-VPN_PEER-PRESHARED_KEY<br />
AllowedIPs=0.0.0.0/0<br />
AllowedIPs=::/0<br />
Endpoint=198.51.100.49:51871</nowiki>}}<br />
<br />
On VPN peer:<br />
{{hc|/etc/systemd/network/50-wg0.network|<nowiki>[Match]<br />
Name=wg0<br />
<br />
[Network]<br />
Address=10.0.0.2/32<br />
Address=2001:db8:1234:5678::2/128<br />
Address=fd89:abc1:def2:1::2/128<br />
DNS=2001:db8:1234:5678::1<br />
DNS=10.0.0.1<br />
DNSDefaultRoute=yes<br />
Domains=~.<br />
<br />
[RoutingPolicyRule]<br />
FirewallMark=0x89ab<br />
InvertRule=yes<br />
Table=1000<br />
Priority=10<br />
<br />
[Route]<br />
Gateway=2001:db8:1234:5678::1<br />
GatewayOnLink=yes<br />
Table=1000<br />
<br />
[Route]<br />
Gateway=10.0.0.1<br />
GatewayOnLink=yes<br />
Table=1000</nowiki>}}<br />
<br />
=== Firewall/filtering configuration (using nftables) ===<br />
<br />
On gateway peer:<br />
<br />
* Input filter<br />
{{bc|<nowiki>define upstream-if = ens0<br />
define vpn-if = wg0<br />
define mgmt-host = 203.0.113.51<br />
define ssh-port = 22<br />
define vpn-port = 51871<br />
#<br />
table inet inet-local-table {<br />
chain inet-local-input {<br />
type filter hook input priority filter<br />
policy drop<br />
# Accept localhost traffic<br />
iif lo accept<br />
# Bad TCP --> reject network scanning<br />
iif $upstream-if tcp flags & (fin|syn) == (fin|syn) counter drop<br />
iif $upstream-if tcp flags & (syn|rst) == (syn|rst) counter drop<br />
iif $upstream-if tcp flags & (fin|syn|rst|psh|ack|urg) == 0 counter drop<br />
iif $upstream-if tcp flags & (fin|syn|rst|psh|ack|urg) == (fin|psh|urg) counter drop<br />
# Accept Wireguard inbound UDP traffic from peer to VPN port<br />
iif $upstream-if udp dport $vpn-port accept<br />
# Accept ICMP from Wireguard peers<br />
iifname $vpn-if ip protocol icmp limit rate 5/second accept<br />
iifname $vpn-if meta l4proto ipv6-icmp limit rate 5/second accept<br />
# Allow DNS from Wireguard peers<br />
iifname $vpn-if udp dport 53 accept<br />
iifname $vpn-if tcp dport 53 accept<br />
# Remaining input from VPN interface to VPN server (local) prohibited<br />
# -- default policy drop<br />
iifname $vpn-if counter drop<br />
# Drop invalid (untracked?) packets<br />
iif $upstream-if ct state invalid counter drop<br />
# Accept established and related tracked connections<br />
iif $upstream-if ct state {established, related} accept<br />
# Allow connection from given mgmt host on given ssh port<br />
iif $upstream-if ip saddr $mgmt-host tcp dport $ssh-port accept<br />
# Limit ICMP packets accepted<br />
iif $upstream-if ip protocol icmp limit rate 5/second accept<br />
iif $upstream-if meta l4proto ipv6-icmp limit rate 5/second accept<br />
# Count traffic dropped by default policy<br />
counter drop<br />
}<br />
}</nowiki>}}<br />
<br />
* Forward filter<br />
{{bc|<nowiki>define upstream-if = ens0<br />
define vpn-if = wg0<br />
#<br />
table inet inet-local-table {<br />
chain inet-local-forward {<br />
type filter hook forward priority filter<br />
policy drop<br />
# Drop IP forward for upstream invalid packets<br />
iif $upstream-if ct state invalid counter drop<br />
# Accept IP forward for upstream established and related tracked connections<br />
iif $upstream-if ct state {established, related} accept<br />
# Accept all VPN traffic to be forwarded upstream<br />
iifname $vpn-if oif $upstream-if accept<br />
# Count traffic dropped by default policy<br />
counter drop<br />
}<br />
}</nowiki>}}<br />
<br />
== Operation of Wireguard link for VPN ==<br />
<br />
=== Manual operation via wg-quick ===<br />
<br />
Bring up the wg0 interface:<br />
<br />
$ sudo wg-quick up wg0<br />
<br />
=== systemd operation via wg-quick ===<br />
<br />
Start the wg-quick@wg0 service and enable it to start at boot:<br />
<br />
$ sudo systemctl start wg-quick\@wg0<br />
$ sudo systemctl enable wg-quick\@wg0<br />
<br />
=== systemd-networkd operation ===<br />
<br />
* Enable and start systemd-resolved on the VPN peer (required by the {{ic|1=DNS=}} lines in the {{ic|[Network]}} section)<br />
* Restart systemd-networkd<br />
<br />
On gateway peer:<br />
<br />
$ sudo systemctl restart systemd-networkd<br />
<br />
On VPN peer:<br />
<br />
$ sudo systemctl start systemd-resolved<br />
$ sudo systemctl enable systemd-resolved<br />
$ sudo systemctl restart systemd-networkd<br />
<br />
== Testing of VPN connection and operation ==<br />
<br />
=== Read wireguard comm status on gateway and VPN peer(s) ===<br />
<br />
$ sudo wg<br />
<br />
=== Ping peer(s) ===<br />
<br />
On VPN peer:<br />
<br />
$ ping -4 -n -c 5 10.0.0.1<br />
<br />
On gateway peer:<br />
<br />
$ ping -4 -n -c 5 10.0.0.2<br />
<br />
=== Optional: Persistent keepalive ===<br />
<br />
If pinging the VPN peer from the gateway peer fails, configure persistent keepalive on devices located behind, e.g., a NAT router:<br />
* wg-quick: Add, e.g., "PersistentKeepalive = 15" to [Peer] section of /etc/wireguard/wg0.conf<br />
* systemd-networkd: Add, e.g., "PersistentKeepalive=15" to [WireGuardPeer] section of /etc/systemd/network/99-wg0.netdev<br />
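<br />
For example, with wg-quick the VPN peer's {{ic|[Peer]}} section would gain one line (using the 15-second interval suggested above; the existing keys and endpoint stay unchanged):<br />
<br />
{{hc|/etc/wireguard/wg0.conf|<nowiki>[Peer]<br />
# existing PublicKey, PresharedKey, AllowedIPs and Endpoint lines<br />
PersistentKeepalive = 15</nowiki>}}<br />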
<br />
=== Read packet filter counters ===<br />
<br />
$ sudo nft list ruleset | grep counter<br />
<br />
=== Read packet filter logging ===<br />
<br />
$ journalctl</div>Cmsiglerhttps://wiki.archlinux.org/index.php?title=Matrix&diff=724142Matrix2022-03-24T13:33:45Z<p>Cmsigler: [pedantic grammar] Edit for possessive pronoun and indefinite article</p>
<hr />
<div>[[Category:Instant messaging]]<br />
[[Category:Voice over IP]]<br />
[[ja:Matrix]]<br />
[https://matrix.org/ Matrix] is an ambitious new ecosystem for open federated instant messaging and VoIP. It consists of servers, [https://matrix.org/clients/ clients] and [https://matrix.org/bridges/ bridge] software to connect to existing messaging solutions like [[wikipedia:Internet Relay Chat|IRC]].<br />
<br />
A Matrix channel for Arch Linux exists at [https://matrix.to/#/#archlinux:archlinux.org #archlinux:archlinux.org]. Some international communities have their own Matrix rooms; see [[International communities]] for details.<br />
<br />
You can either use an existing server like https://matrix.org or host your own Synapse server, which is described below.<br />
<br />
See [[List of applications#Matrix clients]] for a list of Matrix clients.<br />
<br />
== Installation ==<br />
<br />
The reference server implementation '''Synapse''' is available in the community repository as {{Pkg|matrix-synapse}}. The community package creates a ''synapse'' user.<br />
<br />
== Configuration ==<br />
<br />
After installation, a configuration file needs to be generated. It should be readable by the ''synapse'' user:<br />
<br />
{{bc|1=<br />
$ cd /var/lib/synapse<br />
$ sudo -u synapse python -m synapse.app.homeserver \<br />
--server-name my.domain.name \<br />
--config-path /etc/synapse/homeserver.yaml \<br />
--generate-config \<br />
--report-stats=yes<br />
}}<br />
<br />
Note that this will generate corresponding SSL keys and self-signed certificates for the specified server name. You have to regenerate those if you change the server name.<br />
<br />
If your Synapse server is meant to be accessed over the internet, it is highly recommended to configure a [https://github.com/matrix-org/synapse/blob/develop/docs/reverse_proxy.md reverse proxy].<br />
<br />
== Service ==<br />
<br />
The {{ic|synapse.service}} systemd service is included in the {{Pkg|matrix-synapse}} package. It will start the synapse server as user ''synapse'' and use the configuration file {{ic|/etc/synapse/homeserver.yaml}}.<br />
<br />
== User management ==<br />
<br />
You need at least one user on your fresh synapse server. You may create one as your normal non-root user with the following command:<br />
<br />
{{bc|<nowiki>$ register_new_matrix_user -c /etc/synapse/homeserver.yaml http://127.0.0.1:8008</nowiki>}}<br />
<br />
or using one of the [https://matrix.org/docs/projects/try-matrix-now.html matrix clients], for example {{Pkg|element-desktop}}, or the {{AUR|purple-matrix-git}} plug-in for {{pkg|libpurple}}.<br />
<br />
== Spider Webcrawler ==<br />
<br />
To enable the webcrawler for server-generated link previews, the additional packages {{Pkg|python-lxml}} and {{Pkg|python-netaddr}} have to be installed. After that, the option {{ic|url_preview_enabled: True}} can be set in your {{ic|homeserver.yaml}}. To prevent the synapse server from issuing arbitrary GET requests to internal hosts, {{ic|url_preview_ip_range_blacklist:}} has to be set.<br />
<br />
{{Warning|The blacklist is blank by default: without configuration the synapse server can crawl all your internal hosts.}}<br />
<br />
There are some examples that can be uncommented. Add your local IP ranges to that list to prevent the synapse server from trying to crawl them. After changing the {{ic|homeserver.yaml}} the service has to be restarted.<br />
<br />
== Interesting channels ==<br />
<br />
The KDE community has a wide variety of Matrix rooms for specific applications, languages, events, etc. See https://community.kde.org/Matrix for details.<br />
<br />
== Troubleshooting ==<br />
<br />
=== Read-only file system ===<br />
<br />
By default, synapse can only write to its working directory ({{ic|/var/lib/synapse}}) as set in its service file. A write error may occur if synapse writes to a different path (e.g. if your media store is in {{ic|/var/lib/matrix-synapse/media}}).<br />
<br />
You can allow access to other directories by creating a [[replacement unit file]] for {{ic|synapse.service}} and by adding {{ic|1=ReadWritePaths=''your_paths''}} to the {{ic|[Service]}} section.</div>Cmsiglerhttps://wiki.archlinux.org/index.php?title=User:Cmsigler/Nspawn_Configuration_Guide&diff=723971User:Cmsigler/Nspawn Configuration Guide2022-03-22T15:32:11Z<p>Cmsigler: Edit instructions to create _tmp directory; this was missing :\</p>
<hr />
<div>My Personal Step-by-step Guide to systemd-nspawn Container Setup, Configuration and Operation<br />
<br />
CMS, 2022/03/21<br />
<br />
{{note|<b>This guide is a work-in-progress. Please use appropriately.</b>}}<br />
<br />
{{note|These procedures have been developed and deployed on an Arch Linux installation. Other distributions and environments will require modifications to the steps below. YMMV}}<br />
<br />
== Create and set up Arch Linux container ==<br />
<br />
$ sudo pacman -S --needed arch-install-scripts<br />
$ mkdir ~/containers<br />
$ cd ~/containers<br />
$ sudo mkdir ./container_name<br />
$ sudo pacstrap -c ./container_name base [addl. pkgs./groups]<br />
$ sudo systemd-nspawn -D ./container_name<br />
# passwd<br />
# useradd -m -G wheel regularuser<br />
# passwd regularuser<br />
# logout<br />
<br />
== Configure host for container operation ==<br />
<br />
=== Configure host networking to support container ===<br />
<br />
$ sudo cp -ip /usr/lib/systemd/network/80-container-vz.network /etc/systemd/network<br />
$ sudo vi /etc/systemd/network/80-container-vz.network<br />
<br />
{{hc|/etc/systemd/network/80-container-vz.network|<nowiki>[Match]<br />
Name=vz-*<br />
Driver=bridge<br />
<br />
[Network]<br />
# Default to using a /24 prefix, giving up to 253 addresses per virtual network.<br />
Address=10.10.0.1/24<br />
Address=fd89:abc1:def2:10::1/64<br />
IPMasquerade=both<br />
IPv6PrivacyExtensions=yes<br />
LinkLocalAddressing=yes<br />
LLDP=yes<br />
EmitLLDP=customer-bridge<br />
DHCPServer=no<br />
IPv6SendRA=yes</nowiki>}}<br />
<br />
{{note|IP masquerading, along with the corresponding sysctl IP forwarding kernel parameters, is configured by {{ic|1=IPMasquerade=both}} in the {{ic|1=[Network]}} section of {{ic|1=80-container-vz.network}}.}}<br />
<br />
=== Add nftables rule to forward chain to allow forwarding ===<br />
<br />
{{hc|host_ruleset.nft|<nowiki>table inet inet-local-table {<br />
...<br />
chain inet-local-forward {<br />
type filter hook forward priority filter<br />
policy drop<br />
...<br />
# Accept IP forward for upstream established and related tracked connections<br />
iif $upstream-if ct state {established, related} accept<br />
# Accept all systemd-nspawn container traffic to be forwarded upstream<br />
iifname vz-* oif $upstream-if accept<br />
...<br />
}<br />
}</nowiki>}}<br />
$ sudo nft flush ruleset<br />
$ sudo nft -f host_ruleset.nft<br />
<br />
== Configure container for operation ==<br />
<br />
=== Configure container networking ===<br />
<br />
In container, edit /etc/systemd/network/80-container-host0.network:<br />
<br />
{{hc|/etc/systemd/network/80-container-host0.network|<nowiki>[Match]<br />
Virtualization=container<br />
Name=host0<br />
<br />
[Network]<br />
DHCP=no<br />
Address=10.10.0.2/24<br />
Gateway=10.10.0.1<br />
Address=fd89:abc1:def2:10::2/64<br />
Gateway=fd89:abc1:def2:10::1<br />
IPv6PrivacyExtensions=yes<br />
LinkLocalAddressing=yes<br />
LLDP=yes<br />
EmitLLDP=customer-bridge<br />
<br />
[DHCP]<br />
UseTimezone=yes</nowiki>}}<br />
<br />
=== Boot into and set up container ===<br />
<br />
$ sudo systemd-nspawn -b -D ./container_name --network-zone=nspawn0<br />
<br />
Log into container as root.<br />
<br />
# systemctl enable systemd-networkd<br />
# systemctl start systemd-networkd<br />
# systemctl enable systemd-resolved<br />
# systemctl start systemd-resolved<br />
# systemctl enable sshd<br />
# systemctl start sshd<br />
# reboot<br />
...<br />
[Continue setting up, install additional packages, configure, and/or run your container]<br />
...<br />
# poweroff<br />
<br />
== Container operation ==<br />
<br />
=== Set up container to run as a machine ===<br />
<br />
Move the container to {{ic|/var/lib/machines}}, then create a {{ic|.nspawn}} file for operation via machinectl:<br />
<br />
$ sudo mv ./container_name /var/lib/machines<br />
$ sudo vi /etc/systemd/nspawn/container_name.nspawn<br />
{{hc|/etc/systemd/nspawn/container_name.nspawn|<nowiki>[Exec]<br />
Boot=on<br />
;PrivateUsers=no<br />
<br />
[Network]<br />
Zone=nspawn0</nowiki>}}<br />
<br />
=== Enable and start container ===<br />
<br />
$ sudo machinectl enable container_name<br />
$ sudo machinectl start container_name<br />
$ sudo machinectl login container_name<br />
<br />
== Use container as base ==<br />
<br />
Using a configured minimal container as the base image for a tailored, single-app container:<br />
<br />
* Configure .nspawn file:<br />
<br />
$ sudo vi /etc/systemd/nspawn/single-app_container_name.nspawn<br />
<br />
{{hc|/etc/systemd/nspawn/single-app_container_name.nspawn|<nowiki>[Exec]<br />
Boot=on<br />
;PrivateUsers=no<br />
<br />
[Network]<br />
Zone=nspawn0</nowiki>}}<br />
<br />
* First time only -- Create directories:<br />
<br />
$ sudo mkdir /var/lib/machines/single-app_container_name<br />
{{note|{{ic|/var/lib/machines_overlay}} is non-standard and unmanaged by the system.}}<br />
$ sudo mkdir /var/lib/machines_overlay<br />
$ sudo chmod 700 /var/lib/machines_overlay<br />
$ sudo mkdir /var/lib/machines_overlay/single-app_container_root<br />
$ sudo mkdir /var/lib/machines_overlay/single-app_container_tmp<br />
<br />
* Each time before starting overlay container, perform overlay mount:<br />
<br />
$ sudo mount -t overlay overlay -o lowerdir=/var/lib/machines/base_container_name,upperdir=/var/lib/machines_overlay/single-app_container_root,workdir=/var/lib/machines_overlay/single-app_container_tmp /var/lib/machines/single-app_container_name<br />
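The overlay mount command above is long and easy to mistype. A minimal helper sketch that assembles the {{ic|-o}} option string from the directory layout used in this guide (the {{ic|overlay_opts}} function name is hypothetical, and the container names are this guide's placeholders):

```shell
# Assemble overlayfs mount options from the layout used in this guide.
# $1 = read-only base (lower) container name, $2 = single-app name.
overlay_opts() {
    printf 'lowerdir=/var/lib/machines/%s,upperdir=/var/lib/machines_overlay/%s_root,workdir=/var/lib/machines_overlay/%s_tmp' \
        "$1" "$2" "$2"
}

# Prints the option string for the mount command:
overlay_opts base_container_name single-app_container
```

The result would then be passed as {{ic|1=sudo mount -t overlay overlay -o "$(overlay_opts base_container_name single-app_container)" /var/lib/machines/single-app_container_name}}.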
<br />
* Start and log in to overlay container:<br />
<br />
$ sudo machinectl start single-app_container_name<br />
$ sudo machinectl login single-app_container_name<br />
<br />
* Log into single-app container, then use pacman to install desired packages; configure container for operation.<br />
<br />
* When finished, stop overlay container and unmount overlay:<br />
<br />
$ sudo machinectl stop single-app_container_name<br />
$ sudo umount /var/lib/machines/single-app_container_name</div>Cmsiglerhttps://wiki.archlinux.org/index.php?title=Talk:Vagrant&diff=723967Talk:Vagrant2022-03-22T14:03:43Z<p>Cmsigler: vagrant-kvm provider dead upstream in favor of libvirt since 2015 -- suggest removing references</p>
<hr />
<div>== Plugins ==<br />
<br />
the [https://news.ycombinator.com/item?id=4408754 link] on top of the plugins section is not very informative <br />
<br />
possibly this https://www.vagrantup.com/docs/plugins/<br />
[[User:Yair|Yair]] ([[User talk:Yair|talk]]) 22:12, 3 February 2018 (UTC)<br />
<br />
== vagrant-libvirt outdated information ==<br />
<br />
The information for vagrant-libvirt is outdated. Old packages listed in the documentation are not in the repos any more. The following information is relevant today:<br />
<br />
<br />
<br />
In order for vagrant to work with libvirtd, you have to install the vagrant plugin by running:<br />
$ vagrant plugin install vagrant-libvirt<br />
<br />
For this command to work, make sure {{Pkg|base-devel}} is installed and also be sure [[libvirt]] is started.<br />
<br />
<br />
<br />
Do you guys think it might be a good idea to delete the old info and add the information above?<br />
<br />
{{Unsigned|21:31, 21 February 2019 (UTC)|Gheorghe}}<br />
<br />
== vagrant-kvm plugin is pinin' for the fjords ==<br />
<br />
Please see [https://github.com/adrahon/vagrant-kvm/blob/master/README.md this link]. Support was deprecated in February, 2015! [https://github.com/vagrant-libvirt/vagrant-libvirt vagrant-libvirt] is its successor.<br />
<br />
I propose removing references to kvm and perhaps adding a note about kvm deprecation in the [[vagrant#See_also]] section. If a dev agrees, please edit. I'm hesitant to do so myself since I don't use vagrant; I'm just browsing to learn more.<br />
[[User:Cmsigler|Cmsigler]] ([[User talk:Cmsigler|talk]]) 14:03, 22 March 2022 (UTC)</div>Cmsiglerhttps://wiki.archlinux.org/index.php?title=User:Cmsigler/Nspawn_Configuration_Guide&diff=723889User:Cmsigler/Nspawn Configuration Guide2022-03-21T18:29:59Z<p>Cmsigler: Edit wording</p>
<hr />
<div>My Personal Step-by-step Guide to systemd-nspawn Container Setup, Configuration and Operation<br />
<br />
CMS, 2022/03/21<br />
<br />
{{note|<b>This guide is a work-in-progress. Please use appropriately.</b>}}<br />
<br />
{{note|These procedures have been developed and deployed on an Arch Linux installation. Other distributions and environments will require modifications to the steps below. YMMV}}<br />
<br />
== Create and set up Arch Linux container ==<br />
<br />
$ sudo pacman -S --needed arch-install-scripts<br />
$ mkdir ~/containers<br />
$ cd ~/containers<br />
$ sudo mkdir ./container_name<br />
$ sudo pacstrap -c ./container_name base [addl. pkgs./groups]<br />
$ sudo systemd-nspawn -D ./container_name<br />
# passwd<br />
# useradd -m -G wheel regularuser<br />
# passwd regularuser<br />
# logout<br />
<br />
== Configure host for container operation ==<br />
<br />
=== Configure host networking to support container ===<br />
<br />
$ sudo cp -ip /usr/lib/systemd/network/80-container-vz.network /etc/systemd/network<br />
$ sudo vi /etc/systemd/network/80-container-vz.network<br />
<br />
{{hc|/etc/systemd/network/80-container-vz.network|<nowiki>[Match]<br />
Name=vz-*<br />
Driver=bridge<br />
<br />
[Network]<br />
# Default to using a /24 prefix, giving up to 253 addresses per virtual network.<br />
Address=10.10.0.1/24<br />
Address=fd89:abc1:def2:10::1/64<br />
IPMasquerade=both<br />
IPv6PrivacyExtensions=yes<br />
LinkLocalAddressing=yes<br />
LLDP=yes<br />
EmitLLDP=customer-bridge<br />
DHCPServer=no<br />
IPv6SendRA=yes</nowiki>}}<br />
<br />
{{note|IP masquerading, along with the corresponding sysctl IP forwarding kernel parameters, is configured by {{ic|1=IPMasquerade=both}} in the {{ic|1=[Network]}} section of {{ic|1=80-container-vz.network}}.}}<br />
<br />
=== Add nftables rule to forward chain to allow forwarding ===<br />
<br />
{{hc|host_ruleset.nft|<nowiki>table inet inet-local-table {<br />
...<br />
chain inet-local-forward {<br />
type filter hook forward priority filter<br />
policy drop<br />
...<br />
# Accept IP forward for upstream established and related tracked connections<br />
iif $upstream-if ct state {established, related} accept<br />
# Accept all systemd-nspawn container traffic to be forwarded upstream<br />
iifname vz-* oif $upstream-if accept<br />
...<br />
}<br />
}</nowiki>}}<br />
$ sudo nft flush ruleset<br />
$ sudo nft -f host_ruleset.nft<br />
<br />
== Configure container for operation ==<br />
<br />
=== Configure container networking ===<br />
<br />
In container, edit /etc/systemd/network/80-container-host0.network:<br />
<br />
{{hc|/etc/systemd/network/80-container-host0.network|<nowiki>[Match]<br />
Virtualization=container<br />
Name=host0<br />
<br />
[Network]<br />
DHCP=no<br />
Address=10.10.0.2/24<br />
Gateway=10.10.0.1<br />
Address=fd89:abc1:def2:10::2/64<br />
Gateway=fd89:abc1:def2:10::1<br />
IPv6PrivacyExtensions=yes<br />
LinkLocalAddressing=yes<br />
LLDP=yes<br />
EmitLLDP=customer-bridge<br />
<br />
[DHCP]<br />
UseTimezone=yes</nowiki>}}<br />
<br />
=== Boot into and set up container ===<br />
<br />
$ sudo systemd-nspawn -b -D ./container_name --network-zone=nspawn0<br />
<br />
Log into container as root.<br />
<br />
# systemctl enable systemd-networkd<br />
# systemctl start systemd-networkd<br />
# systemctl enable systemd-resolved<br />
# systemctl start systemd-resolved<br />
# systemctl enable sshd<br />
# systemctl start sshd<br />
# reboot<br />
...<br />
[Continue setting up, install additional packages, configure, and/or run your container]<br />
...<br />
# poweroff<br />
<br />
== Container operation ==<br />
<br />
=== Set up container to run as a machine ===<br />
<br />
Move container to /var/lib/machines, then create a .nspawn file for operation via machinectl, etc.:<br />
<br />
$ sudo mv ./container_name /var/lib/machines<br />
$ sudo vi /etc/systemd/nspawn/container_name.nspawn<br />
{{hc|/etc/systemd/nspawn/container_name.nspawn|<nowiki>[Exec]<br />
Boot=on<br />
;PrivateUsers=no<br />
<br />
[Network]<br />
Zone=nspawn0</nowiki>}}<br />
<br />
=== Enable and start container ===<br />
<br />
$ sudo machinectl enable container_name<br />
$ sudo machinectl start container_name<br />
$ sudo machinectl login container_name<br />
<br />
== Use container as base ==<br />
<br />
Using a configured minimal container as the base image for a tailored, single-app container:<br />
<br />
* Configure .nspawn file:<br />
<br />
$ sudo vi /etc/systemd/nspawn/single-app_container_name.nspawn<br />
<br />
{{hc|/etc/systemd/nspawn/single-app_container_name.nspawn|<nowiki>[Exec]<br />
Boot=on<br />
;PrivateUsers=no<br />
<br />
[Network]<br />
Zone=nspawn0</nowiki>}}<br />
<br />
* First time only -- Create directories:<br />
<br />
$ sudo mkdir /var/lib/machines/single-app_container_name<br />
{{note|{{ic|/var/lib/machines_overlay}} is non-standard and unmanaged by the system.}}<br />
$ sudo mkdir /var/lib/machines_overlay<br />
$ sudo chmod 700 /var/lib/machines_overlay<br />
$ sudo mkdir /var/lib/machines_overlay/single-app_container_root<br />
<br />
* Each time before starting overlay container, perform overlay mount:<br />
<br />
$ sudo mount -t overlay overlay -o lowerdir=/var/lib/machines/base_container_name,upperdir=/var/lib/machines_overlay/single-app_container_root,workdir=/var/lib/machines_overlay/single-app_container_tmp /var/lib/machines/single-app_container_name<br />
<br />
* Start and login to overlay container:<br />
<br />
$ sudo machinectl start single-app_container_name<br />
$ sudo machinectl login single-app_container_name<br />
<br />
* Log into single-app container, then use pacman to install desired packages; configure container for operation.<br />
<br />
* When finished, stop overlay container and unmount overlay:<br />
<br />
$ sudo machinectl stop single-app_container_name<br />
$ sudo umount /var/lib/machines/single-app_container_name</div>Cmsiglerhttps://wiki.archlinux.org/index.php?title=User:Cmsigler/Nspawn_Configuration_Guide&diff=723888User:Cmsigler/Nspawn Configuration Guide2022-03-21T18:28:59Z<p>Cmsigler: Recursively removing and remaking the _tmp workdir each time overlay is mounted is not necessary</p>
<hr />
<div>My Personal Step-by-step Guide to systemd-nspawn Container Setup, Configuration and Operation<br />
<br />
CMS, 2022/03/21<br />
<br />
{{note|<b>This guide is a work-in-progress. Please use appropriately.</b>}}<br />
<br />
{{note|These procedures have been developed and deployed on an Arch Linux installation. Other distributions and environments will require modifications to the steps below. YMMV}}<br />
<br />
== Create and set up Arch Linux container ==<br />
<br />
$ sudo pacman -S --needed arch-install-scripts<br />
$ mkdir ~/containers<br />
$ cd ~/containers<br />
$ sudo mkdir ./container_name<br />
$ sudo pacstrap -c ./container_name base [addl. pkgs./groups]<br />
$ sudo systemd-nspawn -D ./container_name<br />
# passwd<br />
# useradd -m -G wheel regularuser<br />
# passwd regularuser<br />
# logout<br />
<br />
== Configure host for container operation ==<br />
<br />
=== Configure host networking to support container ===<br />
<br />
$ sudo cp -ip /usr/lib/systemd/network/80-container-vz.network /etc/systemd/network<br />
$ sudo vi /etc/systemd/network/80-container-vz.network<br />
<br />
{{hc|/etc/systemd/network/80-container-vz.network|<nowiki>[Match]<br />
Name=vz-*<br />
Driver=bridge<br />
<br />
[Network]<br />
# Default to using a /24 prefix, giving up to 253 addresses per virtual network.<br />
Address=10.10.0.1/24<br />
Address=fd89:abc1:def2:10::1/64<br />
IPMasquerade=both<br />
IPv6PrivacyExtensions=yes<br />
LinkLocalAddressing=yes<br />
LLDP=yes<br />
EmitLLDP=customer-bridge<br />
DHCPServer=no<br />
IPv6SendRA=yes</nowiki>}}<br />
<br />
{{note|IP masquerading, along with the corresponding sysctl IP forwarding kernel parameters, is configured by {{ic|1=IPMasquerade=both}} in the {{ic|1=[Network]}} section of {{ic|1=80-container-vz.network}}.}}<br />
<br />
=== Add nftables rule to forward chain to allow forwarding ===<br />
<br />
{{hc|host_ruleset.nft|<nowiki>table inet inet-local-table {<br />
...<br />
chain inet-local-forward {<br />
type filter hook forward priority filter<br />
policy drop<br />
...<br />
# Accept IP forward for upstream established and related tracked connections<br />
iif $upstream-if ct state {established, related} accept<br />
# Accept all systemd-nspawn container traffic to be forwarded upstream<br />
iifname vz-* oif $upstream-if accept<br />
...<br />
}<br />
}</nowiki>}}<br />
$ sudo nft flush ruleset<br />
$ sudo nft -f host_ruleset.nft<br />
<br />
== Configure container for operation ==<br />
<br />
=== Configure container networking ===<br />
<br />
In container, edit /etc/systemd/network/80-container-host0.network:<br />
<br />
{{hc|/etc/systemd/network/80-container-host0.network|<nowiki>[Match]<br />
Virtualization=container<br />
Name=host0<br />
<br />
[Network]<br />
DHCP=no<br />
Address=10.10.0.2/24<br />
Gateway=10.10.0.1<br />
Address=fd89:abc1:def2:10::2/64<br />
Gateway=fd89:abc1:def2:10::1<br />
IPv6PrivacyExtensions=yes<br />
LinkLocalAddressing=yes<br />
LLDP=yes<br />
EmitLLDP=customer-bridge<br />
<br />
[DHCP]<br />
UseTimezone=yes</nowiki>}}<br />
<br />
=== Boot into and set up container ===<br />
<br />
$ sudo systemd-nspawn -b -D ./container_name --network-zone=nspawn0<br />
<br />
Log into container as root.<br />
<br />
# systemctl enable systemd-networkd<br />
# systemctl start systemd-networkd<br />
# systemctl enable systemd-resolved<br />
# systemctl start systemd-resolved<br />
# systemctl enable sshd<br />
# systemctl start sshd<br />
# reboot<br />
...<br />
[Continue setting up, install additional packages, configure, and/or run your container]<br />
...<br />
# poweroff<br />
<br />
== Container operation ==<br />
<br />
=== Set up container to run as a machine ===<br />
<br />
Move container to /var/lib/machines, then create a .nspawn file for operation via machinectl, etc.:<br />
<br />
$ sudo mv ./container_name /var/lib/machines<br />
$ sudo vi /etc/systemd/nspawn/container_name.nspawn<br />
{{hc|/etc/systemd/nspawn/container_name.nspawn|<nowiki>[Exec]<br />
Boot=on<br />
;PrivateUsers=no<br />
<br />
[Network]<br />
Zone=nspawn0</nowiki>}}<br />
<br />
=== Enable and start container ===<br />
<br />
$ sudo machinectl enable container_name<br />
$ sudo machinectl start container_name<br />
$ sudo machinectl login container_name<br />
<br />
== Use container as base ==<br />
<br />
Using a configured minimal container as the base image for a tailored, single-app container:<br />
<br />
* Configure .nspawn file:<br />
<br />
$ sudo vi /etc/systemd/nspawn/single-app_container_name.nspawn<br />
<br />
{{hc|/etc/systemd/nspawn/single-app_container_name.nspawn|<nowiki>[Exec]<br />
Boot=on<br />
;PrivateUsers=no<br />
<br />
[Network]<br />
Zone=nspawn0</nowiki>}}<br />
<br />
* First time only -- Create directories:<br />
<br />
$ sudo mkdir /var/lib/machines/single-app_container_name<br />
{{note|{{ic|/var/lib/machines_overlay}} is non-standard and unmanaged by the system.}}<br />
$ sudo mkdir /var/lib/machines_overlay<br />
$ sudo chmod 700 /var/lib/machines_overlay<br />
$ sudo mkdir /var/lib/machines_overlay/single-app_container_root<br />
<br />
* Each time before starting overlay container, perform overlay mount:<br />
<br />
$ sudo mount -t overlay overlay -o lowerdir=/var/lib/machines/base_container_name,upperdir=/var/lib/machines_overlay/single-app_container_root,workdir=/var/lib/machines_overlay/single-app_container_tmp /var/lib/machines/single-app_container_name<br />
<br />
* To start and login to overlay container:<br />
<br />
$ sudo machinectl start single-app_container_name<br />
$ sudo machinectl login single-app_container_name<br />
<br />
* Log into single-app container, then use pacman to install desired packages; configure container for operation.<br />
<br />
* When finished, stop overlay container and unmount overlay:<br />
<br />
$ sudo machinectl stop single-app_container_name<br />
$ sudo umount /var/lib/machines/single-app_container_name</div>Cmsiglerhttps://wiki.archlinux.org/index.php?title=User:Cmsigler/Nspawn_Configuration_Guide&diff=723887User:Cmsigler/Nspawn Configuration Guide2022-03-21T18:17:29Z<p>Cmsigler: Turn off private users; use default systemd-nspawn@.service and machinectl options which specify -U</p>
<hr />
<div>My Personal Step-by-step Guide to systemd-nspawn Container Setup, Configuration and Operation<br />
<br />
CMS, 2022/03/21<br />
<br />
{{note|<b>This guide is a work-in-progress. Please use appropriately.</b>}}<br />
<br />
{{note|These procedures have been developed and deployed on an Arch Linux installation. Other distributions and environments will require modifications to the steps below. YMMV}}<br />
<br />
== Create and set up Arch Linux container ==<br />
<br />
$ sudo pacman -S --needed arch-install-scripts<br />
$ mkdir ~/containers<br />
$ cd ~/containers<br />
$ sudo mkdir ./container_name<br />
$ sudo pacstrap -c ./container_name base [addl. pkgs./groups]<br />
$ sudo systemd-nspawn -D ./container_name<br />
# passwd<br />
# useradd -m -G wheel regularuser<br />
# passwd regularuser<br />
# logout<br />
<br />
== Configure host for container operation ==<br />
<br />
=== Configure host networking to support container ===<br />
<br />
$ sudo cp -ip /usr/lib/systemd/network/80-container-vz.network /etc/systemd/network<br />
$ sudo vi /etc/systemd/network/80-container-vz.network<br />
<br />
{{hc|/etc/systemd/network/80-container-vz.network|<nowiki>[Match]<br />
Name=vz-*<br />
Driver=bridge<br />
<br />
[Network]<br />
# Default to using a /24 prefix, giving up to 253 addresses per virtual network.<br />
Address=10.10.0.1/24<br />
Address=fd89:abc1:def2:10::1/64<br />
IPMasquerade=both<br />
IPv6PrivacyExtensions=yes<br />
LinkLocalAddressing=yes<br />
LLDP=yes<br />
EmitLLDP=customer-bridge<br />
DHCPServer=no<br />
IPv6SendRA=yes</nowiki>}}<br />
<br />
{{note|IP masquerading, along with the corresponding sysctl IP forwarding kernel parameters, is configured by {{ic|1=IPMasquerade=both}} in the {{ic|1=[Network]}} section of {{ic|1=80-container-vz.network}}.}}<br />
<br />
=== Add nftables rule to forward chain to allow forwarding ===<br />
<br />
{{hc|host_ruleset.nft|<nowiki>table inet inet-local-table {<br />
...<br />
chain inet-local-forward {<br />
type filter hook forward priority filter<br />
policy drop<br />
...<br />
# Accept IP forward for upstream established and related tracked connections<br />
iif $upstream-if ct state {established, related} accept<br />
# Accept all systemd-nspawn container traffic to be forwarded upstream<br />
iifname vz-* oif $upstream-if accept<br />
...<br />
}<br />
}</nowiki>}}<br />
$ sudo nft flush ruleset<br />
$ sudo nft -f host_ruleset.nft<br />
<br />
== Configure container for operation ==<br />
<br />
=== Configure container networking ===<br />
<br />
In container, edit /etc/systemd/network/80-container-host0.network:<br />
<br />
{{hc|/etc/systemd/network/80-container-host0.network|<nowiki>[Match]<br />
Virtualization=container<br />
Name=host0<br />
<br />
[Network]<br />
DHCP=no<br />
Address=10.10.0.2/24<br />
Gateway=10.10.0.1<br />
Address=fd89:abc1:def2:10::2/64<br />
Gateway=fd89:abc1:def2:10::1<br />
IPv6PrivacyExtensions=yes<br />
LinkLocalAddressing=yes<br />
LLDP=yes<br />
EmitLLDP=customer-bridge<br />
<br />
[DHCP]<br />
UseTimezone=yes</nowiki>}}<br />
<br />
=== Boot into and set up container ===<br />
<br />
$ sudo systemd-nspawn -b -D ./container_name --network-zone=nspawn0<br />
<br />
Log into container as root.<br />
<br />
# systemctl enable systemd-networkd<br />
# systemctl start systemd-networkd<br />
# systemctl enable systemd-resolved<br />
# systemctl start systemd-resolved<br />
# systemctl enable sshd<br />
# systemctl start sshd<br />
# reboot<br />
...<br />
[Continue setting up, install additional packages, configure, and/or run your container]<br />
...<br />
# poweroff<br />
<br />
== Container operation ==<br />
<br />
=== Set up container to run as a machine ===<br />
<br />
Move container to /var/lib/machines, then create a .nspawn file for operation via machinectl, etc.:<br />
<br />
$ sudo mv ./container_name /var/lib/machines<br />
$ sudo vi /etc/systemd/nspawn/container_name.nspawn<br />
{{hc|/etc/systemd/nspawn/container_name.nspawn|<nowiki>[Exec]<br />
Boot=on<br />
;PrivateUsers=no<br />
<br />
[Network]<br />
Zone=nspawn0</nowiki>}}<br />
<br />
=== Enable and start container ===<br />
<br />
$ sudo machinectl enable container_name<br />
$ sudo machinectl start container_name<br />
$ sudo machinectl login container_name<br />
<br />
== Use container as base ==<br />
<br />
Using a configured minimal container as the base image for a tailored, single-app container:<br />
<br />
* Configure .nspawn file:<br />
<br />
$ sudo vi /etc/systemd/nspawn/single-app_container_name.nspawn<br />
<br />
{{hc|/etc/systemd/nspawn/single-app_container_name.nspawn|<nowiki>[Exec]<br />
Boot=on<br />
;PrivateUsers=no<br />
<br />
[Network]<br />
Zone=nspawn0</nowiki>}}<br />
<br />
* First time only -- Create directories:<br />
<br />
$ sudo mkdir /var/lib/machines/single-app_container_name<br />
{{note|{{ic|/var/lib/machines_overlay}} is non-standard and unmanaged by the system.}}<br />
$ sudo mkdir /var/lib/machines_overlay<br />
$ sudo chmod 700 /var/lib/machines_overlay<br />
$ sudo mkdir /var/lib/machines_overlay/single-app_container_root<br />
<br />
* Each time overlay mount is performed:<br />
<br />
$ sudo rm -fr /var/lib/machines_overlay/single-app_container_tmp<br />
$ sudo mkdir /var/lib/machines_overlay/single-app_container_tmp<br />
$ sudo mount -t overlay overlay -o lowerdir=/var/lib/machines/base_container_name,upperdir=/var/lib/machines_overlay/single-app_container_root,workdir=/var/lib/machines_overlay/single-app_container_tmp /var/lib/machines/single-app_container_name<br />
<br />
* To start and login to overlay container:<br />
<br />
$ sudo machinectl start single-app_container_name<br />
$ sudo machinectl login single-app_container_name<br />
<br />
* Log into single-app container, then use pacman to install desired packages; configure container for operation.</div>Cmsiglerhttps://wiki.archlinux.org/index.php?title=User:Cmsigler/Nspawn_Configuration_Guide&diff=723831User:Cmsigler/Nspawn Configuration Guide2022-03-21T13:39:38Z<p>Cmsigler: Initial commit</p>
<hr />
<div>My Personal Step-by-step Guide to systemd-nspawn Container Setup, Configuration and Operation<br />
<br />
CMS, 2022/03/21<br />
<br />
{{note|<b>This guide is a work-in-progress. Please use appropriately.</b>}}<br />
<br />
{{note|These procedures have been developed and deployed on an Arch Linux installation. Other distributions and environments will require modifications to the steps below. YMMV}}<br />
<br />
== Create and set up Arch Linux container ==<br />
<br />
$ sudo pacman -S --needed arch-install-scripts<br />
$ mkdir ~/containers<br />
$ cd ~/containers<br />
$ sudo mkdir ./container_name<br />
$ sudo pacstrap -c ./container_name base [addl. pkgs./groups]<br />
$ sudo systemd-nspawn -D ./container_name<br />
# passwd<br />
# useradd -m -G wheel regularuser<br />
# passwd regularuser<br />
# logout<br />
<br />
== Configure host for container operation ==<br />
<br />
=== Configure host networking to support container ===<br />
<br />
$ sudo cp -ip /usr/lib/systemd/network/80-container-vz.network /etc/systemd/network<br />
$ sudo vi /etc/systemd/network/80-container-vz.network<br />
<br />
{{hc|/etc/systemd/network/80-container-vz.network|<nowiki>[Match]<br />
Name=vz-*<br />
Driver=bridge<br />
<br />
[Network]<br />
# Default to using a /24 prefix, giving up to 253 addresses per virtual network.<br />
Address=10.10.0.1/24<br />
Address=fd89:abc1:def2:10::1/64<br />
IPMasquerade=both<br />
IPv6PrivacyExtensions=yes<br />
LinkLocalAddressing=yes<br />
LLDP=yes<br />
EmitLLDP=customer-bridge<br />
DHCPServer=no<br />
IPv6SendRA=yes</nowiki>}}<br />
<br />
{{note|IP masquerading, along with the corresponding sysctl IP forwarding kernel parameters, is configured by {{ic|1=IPMasquerade=both}} in the {{ic|1=[Network]}} section of {{ic|1=80-container-vz.network}}.}}<br />
<br />
=== Add nftables rule to forward chain to allow forwarding ===<br />
<br />
{{hc|host_ruleset.nft|<nowiki>table inet inet-local-table {<br />
...<br />
chain inet-local-forward {<br />
type filter hook forward priority filter<br />
policy drop<br />
...<br />
# Accept IP forward for upstream established and related tracked connections<br />
iif $upstream-if ct state {established, related} accept<br />
# Accept all systemd-nspawn container traffic to be forwarded upstream<br />
iifname vz-* oif $upstream-if accept<br />
...<br />
}<br />
}</nowiki>}}<br />
$ sudo nft flush ruleset<br />
$ sudo nft -f host_ruleset.nft<br />
<br />
== Configure container for operation ==<br />
<br />
=== Configure container networking ===<br />
<br />
In container, edit /etc/systemd/network/80-container-host0.network:<br />
<br />
{{hc|/etc/systemd/network/80-container-host0.network|<nowiki>[Match]<br />
Virtualization=container<br />
Name=host0<br />
<br />
[Network]<br />
DHCP=no<br />
Address=10.10.0.2/24<br />
Gateway=10.10.0.1<br />
Address=fd89:abc1:def2:10::2/64<br />
Gateway=fd89:abc1:def2:10::1<br />
IPv6PrivacyExtensions=yes<br />
LinkLocalAddressing=yes<br />
LLDP=yes<br />
EmitLLDP=customer-bridge<br />
<br />
[DHCP]<br />
UseTimezone=yes</nowiki>}}<br />
<br />
=== Boot into and set up container ===<br />
<br />
$ sudo systemd-nspawn -b -D ./container_name --network-zone=nspawn0<br />
<br />
Log into container as root.<br />
<br />
# systemctl enable systemd-networkd<br />
# systemctl start systemd-networkd<br />
# systemctl enable systemd-resolved<br />
# systemctl start systemd-resolved<br />
# systemctl enable sshd<br />
# systemctl start sshd<br />
# reboot<br />
...<br />
[Continue setting up, install additional packages, configure, and/or run your container]<br />
...<br />
# poweroff<br />
<br />
== Container operation ==<br />
<br />
=== Set up container to run as a machine ===<br />
<br />
Move container to /var/lib/machines, then create a .nspawn file for operation via machinectl, etc.:<br />
<br />
$ sudo mv ./container_name /var/lib/machines<br />
$ sudo vi /etc/systemd/nspawn/container_name.nspawn<br />
{{hc|/etc/systemd/nspawn/container_name.nspawn|<nowiki>[Exec]<br />
Boot=on<br />
PrivateUsers=no<br />
<br />
[Network]<br />
Zone=nspawn0</nowiki>}}<br />
<br />
=== Enable and start container ===<br />
<br />
$ sudo machinectl enable container_name<br />
$ sudo machinectl start container_name<br />
$ sudo machinectl login container_name<br />
<br />
== Use container as base ==<br />
<br />
Using a configured minimal container as the base image for a tailored, single-app container:<br />
<br />
* Configure .nspawn file:<br />
<br />
$ sudo vi /etc/systemd/nspawn/single-app_container_name.nspawn<br />
<br />
{{hc|/etc/systemd/nspawn/single-app_container_name.nspawn|<nowiki>[Exec]<br />
Boot=on<br />
PrivateUsers=no<br />
<br />
[Network]<br />
Zone=nspawn0</nowiki>}}<br />
<br />
* First time only -- Create directories:<br />
<br />
$ sudo mkdir /var/lib/machines/single-app_container_name<br />
{{note|{{ic|/var/lib/machines_overlay}} is non-standard and unmanaged by the system.}}<br />
$ sudo mkdir /var/lib/machines_overlay<br />
$ sudo chmod 700 /var/lib/machines_overlay<br />
$ sudo mkdir /var/lib/machines_overlay/single-app_container_root<br />
<br />
* Each time overlay mount is performed:<br />
<br />
$ sudo rm -fr /var/lib/machines_overlay/single-app_container_tmp<br />
$ sudo mkdir /var/lib/machines_overlay/single-app_container_tmp<br />
$ sudo mount -t overlay overlay -o lowerdir=/var/lib/machines/base_container_name,upperdir=/var/lib/machines_overlay/single-app_container_root,workdir=/var/lib/machines_overlay/single-app_container_tmp /var/lib/machines/single-app_container_name<br />
<br />
* To start and login to overlay container:<br />
<br />
$ sudo machinectl start single-app_container_name<br />
$ sudo machinectl login single-app_container_name<br />
<br />
* Log into single-app container, then use pacman to install desired packages; configure container for operation.</div>Cmsiglerhttps://wiki.archlinux.org/index.php?title=User:Cmsigler/Wireguard_Configuration_Guide&diff=723830User:Cmsigler/Wireguard Configuration Guide2022-03-21T13:27:23Z<p>Cmsigler: Update to use note template</p>
<hr />
<div>My Personal Step-by-step Guide to Wireguard Setup, Configuration and Operation<br />
<br />
CMS, 2022/03/14<br />
<br />
{{note|These procedures have been developed and deployed on an Arch Linux installation. Other distributions and environments will require modifications to the steps below. YMMV}}<br />
{{note|For information on WireGuard under Arch, see the [[WireGuard|Arch Linux WireGuard page]].}}<br />
<br />
== Nomenclature ==<br />
<br />
* Gateway peer: Wireguard "server" peer connected to public Internet<br />
* VPN peer: Wireguard "client" peer; may be located behind, e.g., a NAT router<br />
<br />
== Initial Setup ==<br />
<br />
=== Requirements ===<br />
<br />
* Install and use kernel with CONFIG_WIREGUARD<br />
* Install wireguard-tools<br />
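<br />
To check both requirements (a quick sketch; {{ic|/proc/config.gz}} is only present if the kernel was built with CONFIG_IKCONFIG_PROC):<br />
<br />
 $ zgrep CONFIG_WIREGUARD /proc/config.gz<br />
 $ sudo pacman -S --needed wireguard-tools<br />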
<br />
=== Pre-configuration ===<br />
<br />
* Generate keys for each peer [gateway = Gateway peer; vpn = VPN peer]<br />
<br />
$ cd ~/wireguard_config<br />
$ (umask 0077; wg genkey > gateway.key)<br />
$ wg pubkey < gateway.key > gateway.pub<br />
$ (umask 0077; wg genkey > vpn.key)<br />
$ wg pubkey < vpn.key > vpn.pub<br />
<br />
* Optional: Generate pre-shared keys for each peer-to-peer link pair<br />
<br />
$ (umask 0077; wg genpsk > gateway-vpn.psk)<br />
<br />
* Optional: On gateway peer, set up a DNS server for WireGuard peers using dnsmasq<br />
** Install dnsmasq<br />
** Edit /etc/dnsmasq.conf<br />
*** Uncomment domain-needed, bogus-priv, bind-interfaces<br />
*** Set "interface=wg0"<br />
*** Set "listen-address=::1,127.0.0.1,10.0.0.1,fd89:abc1:def2:1::1"<br />
*** Optional: Set "cache-size=1000"<br />
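<br />
The edits above correspond to a {{ic|/etc/dnsmasq.conf}} fragment like the following (the addresses match the WireGuard interface addresses used below; adjust as needed):<br />
<br />
{{hc|/etc/dnsmasq.conf|<nowiki>domain-needed<br />
bogus-priv<br />
bind-interfaces<br />
interface=wg0<br />
listen-address=::1,127.0.0.1,10.0.0.1,fd89:abc1:def2:1::1<br />
cache-size=1000</nowiki>}}<br />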
<br />
== Configuration for operation via wg-quick ==<br />
<br />
Example -- WireGuard VPN gateway:<br />
<br />
=== Wireguard configuration ===<br />
<br />
On gateway peer:<br />
{{hc|/etc/wireguard/wg0.conf|<nowiki>[Interface]<br />
Address = 10.0.0.1/24, fd89:abc1:def2:1::1/64<br />
ListenPort = 51871<br />
PrivateKey = # GATEWAY_PEER_PRIVATE_KEY<br />
<br />
[Peer]<br />
PublicKey = # VPN_PEER_PUBLIC_KEY<br />
PresharedKey = # GATEWAY_PEER-VPN_PEER-PRESHARED_KEY<br />
AllowedIPs = 10.0.0.2/32, fd89:abc1:def2:1::2/128</nowiki>}}<br />
<br />
On VPN peer:<br />
{{hc|/etc/wireguard/wg0.conf|<nowiki>[Interface]<br />
Address = 10.0.0.2/32, fd89:abc1:def2:1::2/128<br />
ListenPort = 51902<br />
PrivateKey = # VPN_PEER_PRIVATE_KEY<br />
DNS = 10.0.0.1<br />
<br />
[Peer]<br />
PublicKey = # GATEWAY_PEER_PUBLIC_KEY<br />
PresharedKey = # GATEWAY_PEER-VPN_PEER-PRESHARED_KEY<br />
AllowedIPs = 0.0.0.0/0, ::/0<br />
Endpoint = 198.51.100.49:51871</nowiki>}}<br />
<br />
=== Firewall/filtering configuration (using nftables) ===<br />
<br />
On Gateway peer:<br />
<br />
* Input filter<br />
{{bc|<nowiki>define upstream-if = ens0<br />
define vpn-if = wg0<br />
define mgmt-host = 203.0.113.51<br />
define ssh-port = 22<br />
define vpn-port = 51871<br />
#<br />
table inet inet-local-table {<br />
chain inet-local-input {<br />
type filter hook input priority filter<br />
policy drop<br />
# Accept localhost traffic<br />
iif lo accept<br />
# Bad TCP --> reject network scanning<br />
iif $upstream-if tcp flags & (fin|syn) == (fin|syn) counter drop<br />
iif $upstream-if tcp flags & (syn|rst) == (syn|rst) counter drop<br />
iif $upstream-if tcp flags & (fin|syn|rst|psh|ack|urg) == 0 counter drop<br />
iif $upstream-if tcp flags & (fin|syn|rst|psh|ack|urg) == (fin|psh|urg) counter drop<br />
# Accept Wireguard inbound UDP traffic from peer to VPN port<br />
iif $upstream-if udp dport $vpn-port accept<br />
# Accept ICMP from Wireguard peers<br />
iifname $vpn-if ip protocol icmp limit rate 5/second accept<br />
iifname $vpn-if meta l4proto ipv6-icmp limit rate 5/second accept<br />
# Allow DNS from Wireguard peers<br />
iifname $vpn-if udp dport 53 accept<br />
iifname $vpn-if tcp dport 53 accept<br />
# Remaining input from VPN interface to VPN server (local) prohibited<br />
# -- default policy drop<br />
iifname $vpn-if counter drop<br />
# Drop invalid (untracked?) packets<br />
iif $upstream-if ct state invalid counter drop<br />
# Accept established and related tracked connections<br />
iif $upstream-if ct state {established, related} accept<br />
# Allow connection from given mgmt host on given ssh port<br />
iif $upstream-if ip saddr $mgmt-host tcp dport $ssh-port accept<br />
# Limit ICMP packets accepted<br />
iif $upstream-if ip protocol icmp limit rate 5/second accept<br />
iif $upstream-if meta l4proto ipv6-icmp limit rate 5/second accept<br />
# Count traffic dropped by default policy<br />
counter drop<br />
}<br />
}</nowiki>}}<br />
<br />
* Forward filter<br />
{{bc|<nowiki>define upstream-if = ens0<br />
define vpn-if = wg0<br />
#<br />
table inet inet-local-table {<br />
chain inet-local-forward {<br />
type filter hook forward priority filter<br />
policy drop<br />
# Drop IP forward for upstream invalid packets<br />
iif $upstream-if ct state invalid counter drop<br />
# Accept IP forward for upstream established and related tracked connections<br />
iif $upstream-if ct state {established, related} accept<br />
# Accept all VPN traffic to be forwarded upstream<br />
iifname $vpn-if oif $upstream-if accept<br />
# Count traffic dropped by default policy<br />
counter drop<br />
}<br />
}</nowiki>}}<br />
<br />
* Address translation (NAT) filter<br />
{{bc|<nowiki>define upstream-if = ens0<br />
define vpn-if = wg0<br />
#<br />
table inet inet-local-table {<br />
chain inet-local-nat {<br />
type nat hook postrouting priority srcnat<br />
policy accept<br />
# NAT/masquerade all traffic coming from VPN interface, and count<br />
iifname $vpn-if oif $upstream-if meta protocol ip counter masquerade<br />
}<br />
}</nowiki>}}<br />
<br />
=== Packet forwarding configuration ===<br />
<br />
On Gateway peer:<br />
<br />
* sysctl configuration<br />
{{hc|/etc/sysctl.d/30-ipv4_forward.conf|<nowiki>net.ipv4.ip_forward=1<br />
net.ipv4.conf.default.forwarding=1<br />
net.ipv4.conf.all.forwarding=1<br />
net.ipv4.conf.ens0.forwarding=1<br />
net.ipv4.conf.wg0.forwarding=1</nowiki>}}<br />
<br />
{{hc|/etc/sysctl.d/30-ipv6_forward.conf|<nowiki>net.ipv6.conf.default.accept_ra = 2<br />
net.ipv6.conf.all.accept_ra = 2<br />
net.ipv6.conf.ens0.accept_ra = 2<br />
net.ipv6.conf.wg0.accept_ra = 2<br />
net.ipv6.conf.default.forwarding=1<br />
net.ipv6.conf.all.forwarding=1<br />
net.ipv6.conf.ens0.forwarding=1<br />
net.ipv6.conf.wg0.forwarding=1</nowiki>}}<br />
<br />
== Configuration for operation via systemd-networkd ==<br />
<br />
Example -- WireGuard VPN gateway:<br />
<br />
=== /etc/systemd/network configuration ===<br />
<br />
On gateway peer:<br />
{{hc|/etc/systemd/network/99-wg0.netdev|<nowiki>[NetDev]<br />
Name=wg0<br />
Kind=wireguard<br />
Description=WireGuard tunnel wg0<br />
<br />
[WireGuard]<br />
ListenPort=51871<br />
PrivateKey=#GATEWAY_PEER_PRIVATE_KEY<br />
<br />
[WireGuardPeer]<br />
PublicKey=#VPN_PEER_PUBLIC_KEY<br />
PresharedKey=#GATEWAY_PEER-VPN_PEER-PRESHARED_KEY<br />
AllowedIPs=10.0.0.2/32<br />
AllowedIPs=fd89:abc1:def2:1::2/128</nowiki>}}<br />
<br />
On gateway peer:<br />
{{hc|/etc/systemd/network/99-wg0.network|<nowiki>[Match]<br />
Name=wg0<br />
<br />
[Network]<br />
Address=10.0.0.1/24<br />
Address=fd89:abc1:def2:1::1/64<br />
IPForward=yes<br />
IPMasquerade=ipv4<br />
# or<br />
#IPMasquerade=both</nowiki>}}<br />
<br />
On VPN peer:<br />
{{hc|/etc/systemd/network/99-wg0.netdev|<nowiki>[NetDev]<br />
Name=wg0<br />
Kind=wireguard<br />
Description=WireGuard tunnel wg0<br />
<br />
[WireGuard]<br />
ListenPort=51902<br />
PrivateKey=#VPN_PEER_PRIVATE_KEY<br />
FirewallMark=0x89ab<br />
<br />
[WireGuardPeer]<br />
PublicKey=#GATEWAY_PEER_PUBLIC_KEY<br />
PresharedKey=#GATEWAY_PEER-VPN_PEER-PRESHARED_KEY<br />
AllowedIPs=0.0.0.0/0<br />
AllowedIPs=::/0<br />
Endpoint=198.51.100.49:51871</nowiki>}}<br />
<br />
On VPN peer:<br />
{{hc|/etc/systemd/network/50-wg0.network|<nowiki>[Match]<br />
Name=wg0<br />
<br />
[Network]<br />
Address=10.0.0.2/32<br />
Address=fd89:abc1:def2:1::2/128<br />
DNS=10.0.0.1<br />
DNSDefaultRoute=yes<br />
Domains=~.<br />
<br />
[RoutingPolicyRule]<br />
FirewallMark=0x89ab<br />
InvertRule=yes<br />
Table=1000<br />
Priority=10<br />
<br />
[Route]<br />
Gateway=10.0.0.1<br />
GatewayOnLink=yes<br />
Table=1000</nowiki>}}<br />
<br />
=== Firewall/filtering configuration (using nftables) ===<br />
<br />
On gateway peer:<br />
<br />
* Input filter<br />
{{bc|<nowiki>define upstream-if = ens0<br />
define vpn-if = wg0<br />
define mgmt-host = 203.0.113.51<br />
define ssh-port = 22<br />
define vpn-port = 51871<br />
#<br />
table inet inet-local-table {<br />
chain inet-local-input {<br />
type filter hook input priority filter<br />
policy drop<br />
# Accept localhost traffic<br />
iif lo accept<br />
# Bad TCP --> reject network scanning<br />
iif $upstream-if tcp flags & (fin|syn) == (fin|syn) counter drop<br />
iif $upstream-if tcp flags & (syn|rst) == (syn|rst) counter drop<br />
iif $upstream-if tcp flags & (fin|syn|rst|psh|ack|urg) == 0 counter drop<br />
iif $upstream-if tcp flags & (fin|syn|rst|psh|ack|urg) == (fin|psh|urg) counter drop<br />
# Accept Wireguard inbound UDP traffic from peer to VPN port<br />
iif $upstream-if udp dport $vpn-port accept<br />
# Accept ICMP from Wireguard peers<br />
iifname $vpn-if ip protocol icmp limit rate 5/second accept<br />
iifname $vpn-if meta l4proto ipv6-icmp limit rate 5/second accept<br />
# Allow DNS from Wireguard peers<br />
iifname $vpn-if udp dport 53 accept<br />
iifname $vpn-if tcp dport 53 accept<br />
# Remaining input from VPN interface to VPN server (local) prohibited<br />
# -- default policy drop<br />
iifname $vpn-if counter drop<br />
# Drop invalid (untracked?) packets<br />
iif $upstream-if ct state invalid counter drop<br />
# Accept established and related tracked connections<br />
iif $upstream-if ct state {established, related} accept<br />
# Allow connection from given mgmt host on given ssh port<br />
iif $upstream-if ip saddr $mgmt-host tcp dport $ssh-port accept<br />
# Limit ICMP packets accepted<br />
iif $upstream-if ip protocol icmp limit rate 5/second accept<br />
iif $upstream-if meta l4proto ipv6-icmp limit rate 5/second accept<br />
# Count traffic dropped by default policy<br />
counter drop<br />
}<br />
}</nowiki>}}<br />
<br />
* Forward filter<br />
{{bc|<nowiki>define upstream-if = ens0<br />
define vpn-if = wg0<br />
#<br />
table inet inet-local-table {<br />
chain inet-local-forward {<br />
type filter hook forward priority filter<br />
policy drop<br />
# Drop IP forward for upstream invalid packets<br />
iif $upstream-if ct state invalid counter drop<br />
# Accept IP forward for upstream established and related tracked connections<br />
iif $upstream-if ct state {established, related} accept<br />
# Accept all VPN traffic to be forwarded upstream<br />
iifname $vpn-if oif $upstream-if accept<br />
# Count traffic dropped by default policy<br />
counter drop<br />
}<br />
}</nowiki>}}<br />
<br />
== Operation of WireGuard link for VPN ==<br />
<br />
=== Manual operation via wg-quick ===<br />
<br />
Bring up the wg0 interface:<br />
<br />
$ sudo wg-quick up wg0<br />
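<br />
To bring the interface down again:<br />
<br />
 $ sudo wg-quick down wg0<br />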
<br />
=== systemd operation via wg-quick ===<br />
<br />
Start the wg-quick@wg0 service and enable it to start upon reboot:<br />
<br />
$ sudo systemctl start wg-quick\@wg0<br />
$ sudo systemctl enable wg-quick\@wg0<br />
<br />
=== systemd-networkd operation ===<br />
<br />
* Enable and start systemd-resolved on VPN peer (required by "DNS=10.0.0.1" line under [Network] section)<br />
* Restart systemd-networkd<br />
<br />
On gateway peer:<br />
<br />
$ sudo systemctl restart systemd-networkd<br />
<br />
On VPN peer:<br />
<br />
$ sudo systemctl start systemd-resolved<br />
$ sudo systemctl enable systemd-resolved<br />
$ sudo systemctl restart systemd-networkd<br />
<br />
== Testing of VPN connection and operation ==<br />
<br />
=== Read WireGuard communication status on gateway and VPN peer(s) ===<br />
<br />
$ sudo wg<br />
<br />
=== Ping peer(s) ===<br />
<br />
On VPN peer:<br />
<br />
$ ping -4 -n -c 5 10.0.0.1<br />
<br />
On gateway peer:<br />
<br />
$ ping -4 -n -c 5 10.0.0.2<br />
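<br />
To confirm that the VPN peer's traffic is routed through the gateway, compare its apparent public address with the tunnel up and down (any "what is my IP" service will do; {{ic|ifconfig.me}} is just one example):<br />
<br />
On VPN peer:<br />
<br />
 $ curl https://ifconfig.me<br />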
<br />
=== Optional: Persistent keepalive ===<br />
<br />
If ping from the gateway peer to a VPN peer fails, configure peers located behind, e.g., a NAT router for persistent keepalive:<br />
* wg-quick: Add, e.g., "PersistentKeepalive = 15" to [Peer] section of /etc/wireguard/wg0.conf<br />
* systemd-networkd: Add, e.g., "PersistentKeepalive=15" to [WireGuardPeer] section of /etc/systemd/network/99-wg0.netdev<br />
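<br />
For example, with wg-quick the VPN peer's {{ic|[Peer]}} section from above becomes:<br />
<br />
{{hc|/etc/wireguard/wg0.conf|<nowiki>[Peer]<br />
PublicKey = # GATEWAY_PEER_PUBLIC_KEY<br />
PresharedKey = # GATEWAY_PEER-VPN_PEER-PRESHARED_KEY<br />
AllowedIPs = 0.0.0.0/0, ::/0<br />
Endpoint = 198.51.100.49:51871<br />
PersistentKeepalive = 15</nowiki>}}<br />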
<br />
=== Read packet filter counters ===<br />
<br />
$ sudo nft list ruleset | grep counter<br />
<br />
=== Read packet filter logging ===<br />
<br />
{{note|Only nftables rules containing a {{ic|log}} statement produce log messages; the counters-only rulesets above will not log dropped packets.}}<br />
<br />
$ journalctl</div>Cmsiglerhttps://wiki.archlinux.org/index.php?title=Pacman/Tips_and_tricks&diff=723540Pacman/Tips and tricks2022-03-19T17:25:56Z<p>Cmsigler: Add overlay mount options that have proven helpful to silence warnings, etc.</p>
<hr />
<div>{{Lowercase title}}<br />
[[Category:Package manager]]<br />
[[de:Pacman-Tipps]]<br />
[[es:Pacman (Español)/Tips and tricks]]<br />
[[fa:Pacman tips]]<br />
[[fr:Pacman (Français)/Tips and tricks]]<br />
[[ja:Pacman ヒント]]<br />
[[pt:Pacman (Português)/Tips and tricks]]<br />
[[ru:Pacman (Русский)/Tips and tricks]]<br />
[[zh-hans:Pacman (简体中文)/Tips and tricks]]<br />
{{Related articles start}}<br />
{{Related|Mirrors}}<br />
{{Related|Creating packages}}<br />
{{Related articles end}}<br />
For general methods to improve the flexibility of the provided tips or ''pacman'' itself, see [[Core utilities]] and [[Bash]].<br />
<br />
== Maintenance ==<br />
<br />
{{Expansion|{{ic|1=Usage=}} introduced with ''pacman'' 4.2, see [http://allanmcrae.com/2014/12/pacman-4-2-released/]}}<br />
<br />
{{Note|Instead of using ''comm'' (which requires sorted input with ''sort'') in the sections below, you may also use {{ic|grep -Fxf}} or {{ic|grep -Fxvf}}.}}<br />
<br />
See also [[System maintenance]].<br />
<br />
=== Listing packages ===<br />
<br />
==== With version ====<br />
<br />
You may want to get the list of installed packages with their version, which is useful when reporting bugs or discussing installed packages.<br />
<br />
* List all explicitly installed packages: {{ic|pacman -Qe}}.<br />
* List all packages in the [[package group]] named {{ic|''group''}}: {{ic|pacman -Sg ''group''}}<br />
* List all foreign packages (typically manually downloaded and installed or packages removed from the repositories): {{ic|pacman -Qm}}.<br />
* List all native packages (installed from the sync database(s)): {{ic|pacman -Qn}}.<br />
* List all explicitly installed native packages (i.e. present in the sync database) that are not direct or optional dependencies: {{ic|pacman -Qent}}.<br />
* List packages by regex: {{ic|pacman -Qs ''regex''}}.<br />
* List packages by regex with custom output format (needs {{Pkg|expac}}): {{ic|expac -s "%-30n %v" ''regex''}}.<br />
<br />
==== With size ====<br />
<br />
Figuring out which packages are largest can be useful when trying to free space on your hard drive. There are two options here: get the size of individual packages, or get the size of packages and their dependencies.<br />
<br />
===== Individual packages =====<br />
<br />
The following command will list all installed packages and their individual sizes:<br />
<br />
$ LC_ALL=C pacman -Qi | awk '/^Name/{name=$3} /^Installed Size/{print $4$5, name}' | sort -h<br />
<br />
===== Packages and dependencies =====<br />
<br />
To list package sizes with their dependencies,<br />
<br />
* Install {{Pkg|expac}} and run {{ic|expac -H M '%m\t%n' {{!}} sort -h}}.<br />
* Run {{Pkg|pacgraph}} with the {{ic|-c}} option.<br />
<br />
To list the download size of several packages (leave {{ic|''packages''}} blank to list all packages):<br />
<br />
$ expac -S -H M '%k\t%n' ''packages''<br />
<br />
To list explicitly installed packages not in the [[meta package]] {{Pkg|base}} nor [[package group]] {{Grp|base-devel}} with size and description:<br />
<br />
$ expac -H M "%011m\t%-20n\t%10d" $(comm -23 <(pacman -Qqen | sort) <({ pacman -Qqg base-devel; expac -l '\n' '%E' base; } | sort | uniq)) | sort -n<br />
<br />
To list the packages marked for upgrade with their download size<br />
<br />
$ expac -S -H M '%k\t%n' $(pacman -Qqu) | sort -sh<br />
<br />
==== By date ====<br />
<br />
To list the 20 last installed packages with {{Pkg|expac}}, run:<br />
<br />
$ expac --timefmt='%Y-%m-%d %T' '%l\t%n' | sort | tail -n 20<br />
<br />
or, with seconds since the epoch (1970-01-01 UTC):<br />
<br />
$ expac --timefmt=%s '%l\t%n' | sort -n | tail -n 20<br />
<br />
==== Not in a specified group, repository or meta package ====<br />
<br />
{{Note|To get a list of packages installed as dependencies but no longer required by any installed package, see [[#Removing unused packages (orphans)]].<br />
}}<br />
<br />
List explicitly installed packages not in the {{Pkg|base}} [[meta package]]:<br />
<br />
$ comm -23 <(pacman -Qqe | sort) <(expac -l '\n' '%E' base | sort)<br />
<br />
List explicitly installed packages not in the {{Pkg|base}} meta package or {{Grp|base-devel}} [[package group]]:<br />
<br />
$ comm -23 <(pacman -Qqe | sort) <({ pacman -Qqg base-devel; expac -l '\n' '%E' base; } | sort -u)<br />
<br />
List all installed packages unrequired by other packages, and which are not in the {{Pkg|base}} meta package or {{Grp|base-devel}} package group:<br />
<br />
$ comm -23 <(pacman -Qqt | sort) <({ pacman -Qqg base-devel; expac -l '\n' '%E' base; } | sort -u)<br />
<br />
As above, but with descriptions:<br />
<br />
$ expac -H M '%-20n\t%10d' $(comm -23 <(pacman -Qqt | sort) <({ pacman -Qqg base-devel; expac -l '\n' '%E' base; } | sort -u))<br />
<br />
List all installed packages that are ''not'' in the specified repository ''repo_name''<br />
<br />
$ comm -23 <(pacman -Qq | sort) <(pacman -Sql ''repo_name'' | sort)<br />
<br />
List all installed packages that are in the ''repo_name'' repository:<br />
<br />
$ comm -12 <(pacman -Qq | sort) <(pacman -Sql ''repo_name'' | sort)<br />
<br />
List all packages on the Arch Linux ISO that are not in the {{Pkg|base}} meta package:<br />
<br />
<nowiki>$ comm -23 <(curl https://gitlab.archlinux.org/archlinux/archiso/-/raw/master/configs/releng/packages.x86_64) <(expac -l '\n' '%E' base | sort)</nowiki><br />
<br />
{{Tip|Alternatively, use {{ic|combine}} (instead of {{ic|comm}}) from the {{Pkg|moreutils}} package which has a syntax that is easier to remember. See {{man|1|combine}}.}}<br />
<br />
==== Development packages ====<br />
<br />
To list all development/unstable packages, run:<br />
<br />
$ pacman -Qq | grep -Ee '-(bzr|cvs|darcs|git|hg|svn)$'<br />
<br />
=== Browsing packages ===<br />
<br />
To browse all installed packages with an instant preview of each package:<br />
<br />
$ pacman -Qq | fzf --preview 'pacman -Qil {}' --layout=reverse --bind 'enter:execute(pacman -Qil {} | less)'<br />
<br />
This uses [[fzf]] to present a two-pane view listing all packages with package info shown on the right.<br />
<br />
Enter letters to filter the list of packages; use arrow keys (or {{ic|Ctrl-j}}/{{ic|Ctrl-k}}) to navigate; press {{ic|Enter}} to see package info under ''less''.<br />
<br />
To browse all packages currently known to ''pacman'' (both installed and not yet installed) in a similar way, using fzf, use:<br />
<br />
$ pacman -Slq | fzf --preview 'pacman -Si {}' --layout=reverse<br />
<br />
The navigational keybindings are the same, although {{ic|Enter}} simply selects an entry instead of opening the package info in ''less''.<br />
<br />
=== Listing files owned by a package with size ===<br />
<br />
This one might come in handy if you have found that a specific package uses a huge amount of space and you want to find out which files make up the most of that.<br />
<br />
$ pacman -Qlq ''package'' | grep -v '/$' | xargs -r du -h | sort -h<br />
<br />
=== Identify files not owned by any package ===<br />
<br />
If your system has stray files not owned by any package (a common case if you do not [[Enhance system stability#Use the package manager to install software|use the package manager to install software]]), you may want to find such files in order to clean them up.<br />
<br />
One method is to use {{ic|pacreport --unowned-files}} as the root user from {{Pkg|pacutils}} which will list unowned files among other details.<br />
<br />
Another is to list all files of interest and check them against ''pacman'':<br />
<br />
# find /etc /usr /opt | LC_ALL=C pacman -Qqo - 2>&1 >&- >/dev/null | cut -d ' ' -f 5-<br />
<br />
{{Tip|The {{Pkg|lostfiles}} script performs similar steps, but also includes an extensive blacklist to remove common false positives from the output.}}<br />
<br />
=== Tracking unowned files created by packages ===<br />
<br />
Most systems will slowly collect several [http://ftp.rpm.org/max-rpm/s1-rpm-inside-files-list-directives.html#S3-RPM-INSIDE-FLIST-GHOST-DIRECTIVE ghost] files such as state files, logs, indexes, etc. through the course of usual operation.<br />
<br />
{{ic|pacreport}} from {{Pkg|pacutils}} can be used to track these files and their associations via {{ic|/etc/pacreport.conf}} (see {{man|1|pacreport|FILES}}).<br />
<br />
An example may look something like this (abridged):<br />
<br />
{{hc|/etc/pacreport.conf|2=<br />
[Options]<br />
IgnoreUnowned = usr/share/applications/mimeinfo.cache<br />
<br />
[PkgIgnoreUnowned]<br />
alsa-utils = var/lib/alsa/asound.state<br />
bluez = var/lib/bluetooth<br />
ca-certificates = etc/ca-certificates/trust-source/*<br />
dbus = var/lib/dbus/machine-id<br />
glibc = etc/ld.so.cache<br />
grub = boot/grub/*<br />
linux = boot/initramfs-linux.img<br />
pacman = var/lib/pacman/local<br />
update-mime-database = usr/share/mime/magic<br />
}}<br />
<br />
Then, when using {{ic|pacreport --unowned-files}} as the root user, any unowned files will be listed if the associated package is no longer installed (or if any new files have been created).<br />
<br />
Additionally, [https://github.com/CyberShadow/aconfmgr aconfmgr] ({{AUR|aconfmgr-git}}) allows tracking modified and orphaned files using a configuration script.<br />
<br />
=== Removing unused packages (orphans) ===<br />
<br />
For recursively removing orphans and their configuration files:<br />
<br />
# pacman -Qtdq | pacman -Rns -<br />
<br />
If no orphans were found, the output is {{ic|error: argument '-' specified with empty stdin}}. This is expected as no arguments were passed to {{ic|pacman -Rns}}.<br />
<br />
{{Note|The arguments {{ic|-Qt}} list only true orphans. To include packages which are ''optionally'' required by another package, pass the {{ic|-t}} flag twice (''i.e.'', {{ic|-Qtt}}).}}<br />
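<br />
For example, to also remove orphans that are only ''optionally'' required by another package:<br />
<br />
 # pacman -Qttdq | pacman -Rns -<br />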
<br />
=== Removing everything but essential packages ===<br />
<br />
If it is ever necessary to remove all packages except the essential ones, one method is to set the installation reason of the non-essential packages to "as dependency" and then remove all unneeded dependencies.<br />
<br />
First, change the installation reason of all explicitly installed packages to "as dependency":<br />
<br />
# pacman -D --asdeps $(pacman -Qqe)<br />
<br />
Then, change the installation reason back to "as explicitly" for only the essential packages, those you '''do not''' want to remove, in order to avoid targeting them:<br />
<br />
# pacman -D --asexplicit base linux linux-firmware<br />
<br />
{{Note|<br />
* Additional packages can be added to the above command in order to avoid being removed. See [[Installation guide#Install essential packages]] for more info on other packages that may be necessary for a fully functional base system.<br />
* This will also select the bootloader's package for removal. The system should still be bootable, but the boot parameters might not be changeable without it.<br />
}}<br />
<br />
Finally, follow the instructions in [[#Removing unused packages (orphans)]] to remove all packages that have installation reason "as dependency".<br />
<br />
=== Getting the dependencies list of several packages ===<br />
<br />
Dependencies are alphabetically sorted and duplicates are removed.<br />
<br />
{{Note|To show the dependencies of locally installed packages only, use {{ic|pacman -Qi}}.}}<br />
<br />
$ LC_ALL=C pacman -Si ''packages'' | awk -F'[:<=>]' '/^Depends/ {print $2}' | xargs -n1 | sort -u<br />
<br />
Alternatively, with {{Pkg|expac}}: <br />
<br />
$ expac -l '\n' %E -S ''packages'' | sort -u<br />
<br />
=== Listing changed backup files ===<br />
<br />
{{Accuracy|What is the connection of this section to [[System backup]]? Listing modified "backup files" does not show files which are not tracked by ''pacman''.|section=Warning about listing changed backup files}}<br />
<br />
If you want to back up your system configuration files, you could copy all files in {{ic|/etc/}} but usually you are only interested in the files that you have changed. Modified [[Pacnew and Pacsave files#Package backup files|backup files]] can be viewed with the following command:<br />
<br />
# pacman -Qii | awk '/^MODIFIED/ {print $2}'<br />
<br />
Running this command with root permissions will ensure that files readable only by root (such as {{ic|/etc/sudoers}}) are included in the output.<br />
<br />
{{Tip|See [[#Listing all changed files from packages]] to list all changed files ''pacman'' knows about, not only backup files.}}<br />
<br />
=== Back up the pacman database ===<br />
<br />
The following command can be used to back up the local ''pacman'' database:<br />
<br />
$ tar -cjf pacman_database.tar.bz2 /var/lib/pacman/local<br />
<br />
Store the backup ''pacman'' database file on one or more offline media, such as a USB stick, external hard drive, or CD-R.<br />
<br />
The database can be restored by moving the {{ic|pacman_database.tar.bz2}} file into the {{ic|/}} directory and executing the following command:<br />
<br />
# tar -xjvf pacman_database.tar.bz2<br />
<br />
{{Note|If the ''pacman'' database files are corrupted, and there is no backup file available, there exists some hope of rebuilding the ''pacman'' database. Consult [[#Restore pacman's local database]].}}<br />
<br />
{{Tip|The {{AUR|pakbak-git}} package provides a script and a [[systemd]] service to automate the task. Configuration is possible in {{ic|/etc/pakbak.conf}}.}}<br />
<br />
=== Check changelogs easily ===<br />
<br />
When maintainers update packages, commits are often commented in a useful fashion. Users can quickly check these from the command line by installing {{AUR|pacolog}}. This utility lists recent commit messages for packages from the official repositories or the AUR, by using {{ic|pacolog ''package''}}.<br />
<br />
== Installation and recovery ==<br />
<br />
Alternative ways of getting and restoring packages.<br />
<br />
=== Installing packages from a CD/DVD or USB stick ===<br />
<br />
{{Merge|#Custom local repository|Use as an example and avoid duplication}}<br />
<br />
To download packages, or groups of packages:<br />
<br />
# cd ~/Packages<br />
# pacman -Syw --cachedir . base base-devel grub-bios xorg gimp<br />
# repo-add ./custom.db.tar.gz ./*<br />
<br />
By default, ''pacman'' references the host's installation, so dependencies already present on the host will not be resolved and downloaded. In cases where all packages and dependencies are wanted, it is recommended to create a temporary blank database and reference it with {{ic|--dbpath}}:<br />
<br />
# mkdir /tmp/blankdb<br />
# pacman -Syw --cachedir . --dbpath /tmp/blankdb base base-devel grub-bios xorg gimp<br />
# repo-add ./custom.db.tar.gz ./*<br />
<br />
Then you can burn the "Packages" folder to a CD/DVD or transfer it to a USB stick, external HDD, etc.<br />
<br />
To install:<br />
<br />
'''1.''' Mount the media:<br />
<br />
# mkdir /mnt/repo<br />
# mount /dev/sr0 /mnt/repo #For a CD/DVD.<br />
# mount /dev/sdxY /mnt/repo #For a USB stick.<br />
<br />
'''2.''' Edit {{ic|pacman.conf}} and add this repository ''before'' the other ones (e.g. extra, core, etc.). This is important: do not just uncomment the one at the bottom. Placing it first ensures that the files from the CD/DVD/USB take precedence over those in the standard repositories:<br />
<br />
{{hc|/etc/pacman.conf|2=<br />
[custom]<br />
SigLevel = PackageRequired<br />
Server = file:///mnt/repo/Packages}}<br />
<br />
'''3.''' Finally, synchronize the ''pacman'' database to be able to use the new repository:<br />
<br />
# pacman -Syu<br />
<br />
=== Custom local repository ===<br />
<br />
Use the ''repo-add'' script included with ''pacman'' to generate a database for a personal repository. Use {{ic|repo-add --help}} for more details on its usage. <br />
A package database is a tar file, optionally compressed. Valid extensions are ''.db'' or ''.files'' followed by an archive extension of ''.tar'', ''.tar.gz'', ''.tar.bz2'', ''.tar.xz'', ''.tar.zst'', or ''.tar.Z''. The file does not need to exist, but all parent directories must exist.<br />
<br />
To add a new package to the database, or to replace the old version of an existing package in the database, run:<br />
<br />
$ repo-add ''/path/to/repo.db.tar.gz /path/to/package-1.0-1-x86_64.pkg.tar.xz''<br />
<br />
The database and the packages do not need to be in the same directory when using ''repo-add'', but keep in mind that when using ''pacman'' with that database, they should be together. Storing all the built packages to be included in the repository in one directory also allows to use shell glob expansion to add or update multiple packages at once:<br />
<br />
$ repo-add ''/path/to/repo.db.tar.gz /path/to/*.pkg.tar.xz''<br />
<br />
{{Warning|''repo-add'' adds the entries into the database in the same order as passed on the command line. If multiple versions of the same package are involved, care must be taken to ensure that the correct version is added last. In particular, note that lexical order used by the shell depends on the locale and differs from the {{man|8|vercmp}} ordering used by ''pacman''.}}<br />
<br />
If you are looking to support multiple architectures then precautions should be taken to prevent errors from occurring. Each architecture should have its own directory tree:<br />
<br />
{{hc|$ tree ~/customrepo/ {{!}} sed "s/$(uname -m)/''arch''/g"|<br />
/home/archie/customrepo/<br />
└── ''arch''<br />
├── customrepo.db -> customrepo.db.tar.xz<br />
├── customrepo.db.tar.xz<br />
├── customrepo.files -> customrepo.files.tar.xz<br />
├── customrepo.files.tar.xz<br />
└── personal-website-git-b99cce0-1-''arch''.pkg.tar.xz<br />
<br />
1 directory, 5 files<br />
}}<br />
<br />
The ''repo-add'' executable checks if the package is appropriate. If this is not the case you will be running into error messages similar to this:<br />
<br />
==> ERROR: '/home/archie/customrepo/''arch''/foo-''arch''.pkg.tar.xz' does not have a valid database archive extension.<br />
<br />
''repo-remove'' is used to remove packages from the package database, except that only package names are specified on the command line.<br />
<br />
$ repo-remove ''/path/to/repo.db.tar.gz pkgname''<br />
<br />
Once the local repository database has been created, add the repository to {{ic|pacman.conf}} for each system that is to use the repository. An example of a custom repository is in {{ic|pacman.conf}}. The repository's name is the database filename with the file extension omitted. In the case of the example above, the repository's name would simply be ''repo''. Reference the repository's location using a {{ic|file://}} URL, or via FTP using ftp://localhost/path/to/directory.<br />
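<br />
For the {{ic|repo.db.tar.gz}} example above, stored in a hypothetical {{ic|/path/to}} directory, the entry would look like the following ({{ic|1=SigLevel = Optional TrustAll}} allows unsigned local packages; adjust to your signing setup):<br />
<br />
{{hc|/etc/pacman.conf|2=<br />
[repo]<br />
SigLevel = Optional TrustAll<br />
Server = file:///path/to}}<br />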
<br />
If willing, add the custom repository to the [[Unofficial user repositories|list of unofficial user repositories]], so that the community can benefit from it.<br />
<br />
=== Network shared pacman cache ===<br />
<br />
{{Merge|Package_Proxy_Cache|Same topic}}<br />
If you happen to run several Arch boxes on your LAN, you can share packages to greatly decrease download times. Keep in mind that you should not share between different architectures (e.g. i686 and x86_64) or you will run into problems.<br />
<br />
==== Read-only cache ====<br />
<br />
{{Note|1=If pacman fails to download 3 packages from the server, it will use another mirror instead. See https://bbs.archlinux.org/viewtopic.php?id=268066.}}<br />
<br />
If you are looking for a quick solution, you can simply run a [https://gist.github.com/willurd/5720255 basic temporary webserver] which other computers can use as their first mirror.<br />
<br />
First of all, make the pacman databases available in the folder you will serve:<br />
<br />
# ln -s /var/lib/pacman/sync/*.db /var/cache/pacman/pkg/<br />
<br />
Then start serving this folder. For example, with [[Python]] [https://docs.python.org/3/library/http.server.html#http-server-cli http.server] module:<br />
$ python -m http.server -d /var/cache/pacman/pkg/<br />
<br />
{{Tip|By default, Python {{ic|http.server}} listens on port {{ic|8000}}. To use another port, simply add it as an argument:<br />
<br />
$ python -m http.server -d /var/cache/pacman/pkg/ 8080<br />
}}<br />
<br />
Then [[textedit|edit]] {{ic|/etc/pacman.d/mirrorlist}} on each client machine to add this server as the top entry:<br />
<br />
{{hc|/etc/pacman.d/mirrorlist|2=<br />
Server = http://''server-ip'':''port''<br />
...<br />
}}<br />
<br />
{{Warning|Do '''not''' append {{ic|/repos/$repo/os/$arch}} to this custom server like for other entries, as this hierarchy does not exist and therefore queries will fail.}}<br />
<br />
If looking for a more standalone solution, {{Pkg|darkhttpd}} offers a very minimal webserver. Replace the previous {{ic|python}} command with e.g.:<br />
<br />
$ sudo -u http darkhttpd /var/cache/pacman/pkg --no-server-id<br />
<br />
You could also run darkhttpd as a ''systemd'' service for convenience: see [[Systemd#Writing unit files]].<br />
<br />
If you are already running a web server for some other purpose, you might wish to reuse that as your local repository server instead. For example, if you already serve a site with [[nginx]], you can add an ''nginx'' server block listening on port 8080:<br />
<br />
{{hc|/etc/nginx/nginx.conf|<br />
server {<br />
listen 8080;<br />
root /var/cache/pacman/pkg;<br />
server_name myarchrepo.localdomain;<br />
try_files $uri $uri/;<br />
}<br />
}}<br />
<br />
Remember to [[restart]] {{ic|nginx.service}} after making this change.<br />
<br />
{{Tip|Whichever web server you use, make sure the firewall configuration (if any) allows the configured port to be reached by the desired traffic, and disallows any undesired traffic. See [[Security#Network and firewalls]].}}<br />
<br />
==== Overlay mount of read-only cache ====<br />
<br />
It is possible to use one machine on a local network as a read-only package cache by [[Overlay_filesystem|overlay mounting]] its {{ic|/var/cache/pacman/pkg}} directory. Such a configuration is advantageous if this server has installed on it a reasonably comprehensive selection of up-to-date packages which are also used by other boxes. This is useful for maintaining a number of machines at the end of a low bandwidth upstream connection.<br />
<br />
As an example, to use this method:<br />
<br />
# mkdir /tmp/remote_pkg /mnt/workdir_pkg /tmp/pacman_pkg<br />
# sshfs ''remote_username''@''remote_pkgcache_addr'':/var/cache/pacman/pkg /tmp/remote_pkg -C<br />
# mount -t overlay overlay -o index=off -o metacopy=off -o lowerdir=/tmp/remote_pkg,upperdir=/var/cache/pacman/pkg,workdir=/mnt/workdir_pkg /tmp/pacman_pkg<br />
<br />
{{Note|The working directory must be an empty directory on the same mounted device as the upper directory. See [[Overlay filesystem#Usage]].}}<br />
<br />
After this, run ''pacman'' using the option {{ic|--cachedir /tmp/pacman_pkg}}, e.g.:<br />
<br />
# pacman -Syu --cachedir /tmp/pacman_pkg<br />
<br />
==== Distributed read-only cache ====<br />
<br />
There are Arch-specific tools for automatically discovering other computers on your network offering a package cache. Try {{Pkg|pacredir}}, [[pacserve]], {{AUR|pkgdistcache}}, or {{AUR|paclan}}. ''pkgdistcache'' uses Avahi instead of plain UDP, which may work better in certain home networks that route, rather than bridge, between Wi-Fi and Ethernet.<br />
<br />
Historically, there was [https://bbs.archlinux.org/viewtopic.php?id=64391 PkgD] and [https://github.com/toofishes/multipkg multipkg], but they are no longer maintained.<br />
<br />
==== Read-write cache ====<br />
<br />
In order to share packages between multiple computers, simply share {{ic|/var/cache/pacman/}} using any network-based mount protocol. This section shows how to use [[SSHFS]] to share a package cache plus the related library-directories between multiple computers on the same local network. Keep in mind that a network shared cache can be slow depending on the file-system choice, among other factors.<br />
<br />
First, install any network-supporting filesystem packages: {{Pkg|sshfs}}, {{Pkg|curlftpfs}}, {{Pkg|samba}} or {{Pkg|nfs-utils}}.<br />
<br />
{{Tip|<br />
* To use ''sshfs'', consider reading [[Using SSH Keys]].<br />
* By default, ''smbfs'' does not serve filenames that contain colons, which results in the client downloading the offending package afresh. To prevent this, use the {{ic|mapchars}} mount option on the client.<br />
}}<br />
<br />
Then, to share the actual packages, mount {{ic|/var/cache/pacman/pkg}} from the server to {{ic|/var/cache/pacman/pkg}} on every client machine.<br />
<br />
{{Warning|Do not make {{ic|/var/cache/pacman/pkg}} or any of its ancestors (e.g., {{ic|/var}}) a symlink. Pacman expects these to be directories. When ''pacman'' re-installs or upgrades itself, it will remove the symlinks and create empty directories instead. However during the transaction ''pacman'' relies on some files residing there, hence breaking the update process. Refer to {{Bug|50298}} for further details.}}<br />
<br />
==== Two-way with rsync ====<br />
<br />
Another approach in a local environment is [[rsync]]. Choose a server for caching and run rsync on it [[Rsync#As a daemon|as a daemon]]. Clients then synchronize two-way with this share via the rsync protocol; filenames that contain colons are no problem for it.<br />
<br />
Draft example for a client, using {{ic|uname -m}} within the share name ensures an architecture-dependent sync:<br />
# rsync rsync://server/share_$(uname -m)/ /var/cache/pacman/pkg/ ...<br />
# pacman ...<br />
# paccache ...<br />
# rsync /var/cache/pacman/pkg/ rsync://server/share_$(uname -m)/ ...<br />
<br />
==== Dynamic reverse proxy cache using nginx ====<br />
<br />
[[nginx]] can be used to proxy package requests to official upstream mirrors and cache the results to the local disk. All subsequent requests for that package will be served directly from the local cache, minimizing the amount of internet traffic needed to update a large number of computers. <br />
<br />
In this example, the cache server will run at {{ic|<nowiki>http://cache.domain.example:8080/</nowiki>}} and store the packages in {{ic|/srv/http/pacman-cache/}}. <br />
<br />
Install [[nginx]] on the computer that is going to host the cache. Create the directory for the cache and adjust the permissions so nginx can write files to it:<br />
<br />
# mkdir /srv/http/pacman-cache<br />
# chown http:http /srv/http/pacman-cache<br />
<br />
Use the [https://github.com/nastasie-octavian/nginx_pacman_cache_config/blob/c54eca4776ff162ab492117b80be4df95880d0e2/nginx.conf nginx pacman cache config] as a starting point for {{ic|/etc/nginx/nginx.conf}}. Check that the {{ic|resolver}} directive works for your needs. In the upstream server blocks, configure the {{ic|proxy_pass}} directives with addresses of official mirrors, see examples in the configuration file about the expected format. Once you are satisfied with the configuration file [[Nginx#Running|start and enable nginx]].<br />
<br />
In order to use the cache each Arch Linux computer (including the one hosting the cache) must have the following line at the top of the {{ic|mirrorlist}} file:<br />
<br />
{{hc|/etc/pacman.d/mirrorlist|<nowiki><br />
Server = http://cache.domain.example:8080/$repo/os/$arch<br />
...<br />
</nowiki>}}<br />
<br />
{{Note| You will need to create a method to clear old packages, as the cache directory will continue to grow over time. {{ic|paccache}} (which is provided by {{Pkg|pacman-contrib}}) can be used to automate this using retention criteria of your choosing. For example, {{ic|find /srv/http/pacman-cache/ -type d -exec paccache -v -r -k 2 -c {} \;}} will keep the last 2 versions of packages in your cache directory.}}<br />
<br />
==== Pacoloco proxy cache server ====<br />
<br />
[https://github.com/anatol/pacoloco Pacoloco] is an easy-to-use proxy cache server for ''pacman'' repositories. It also allows [https://github.com/anatol/pacoloco/commit/048b09956b0d8ef71c0ed1f804fd332d9ab5e3c8 automatic prefetching] of the cached packages.<br />
<br />
It can be installed as {{Pkg|pacoloco}}. Open the configuration file and add ''pacman'' mirrors:<br />
<br />
{{hc|/etc/pacoloco.yaml|<nowiki><br />
port: 9129<br />
repos:<br />
mycopy:<br />
urls:<br />
- http://mirror.lty.me/archlinux<br />
- http://mirrors.kernel.org/archlinux<br />
</nowiki>}}<br />
<br />
[[Restart]] {{ic|pacoloco.service}} and the proxy repository will be available at {{ic|http://''myserver'':9129/repo/mycopy}}.<br />
<br />
==== Flexo proxy cache server ====<br />
<br />
[https://github.com/nroi/flexo Flexo] is yet another proxy cache server for ''pacman'' repositories. Flexo is available as {{AUR|flexo-git}}. Once installed, [[start]] the {{ic|flexo.service}} unit.<br />
<br />
Flexo runs on port {{ic|7878}} by default. Enter {{ic|1=Server = http://''myserver'':7878/$repo/os/$arch}} to the top of your {{ic|/etc/pacman.d/mirrorlist}} so that ''pacman'' downloads packages via Flexo.<br />
<br />
==== Synchronize pacman package cache using synchronization programs ====<br />
<br />
Use [[Syncthing]] or [[Resilio Sync]] to synchronize the ''pacman'' cache folders (i.e. {{ic|/var/cache/pacman/pkg}}).<br />
<br />
==== Preventing unwanted cache purges ====<br />
<br />
By default, {{ic|pacman -Sc}} removes package tarballs from the cache that correspond to packages that are not installed on the machine the command was issued on. Because ''pacman'' cannot predict what packages are installed on all machines that share the cache, it will end up deleting files that should be kept.<br />
<br />
To clean up the cache so that only ''outdated'' tarballs are deleted, add this entry in the {{ic|[options]}} section of {{ic|/etc/pacman.conf}}:<br />
<br />
CleanMethod = KeepCurrent<br />
<br />
=== Recreate a package from the file system ===<br />
<br />
To recreate a package from the file system, use {{AUR|fakepkg}}. Files from the system are taken as they are, hence any modifications will be present in the assembled package. Distributing the recreated package is therefore discouraged; see [[ABS]] and [[Arch Linux Archive]] for alternatives.<br />
<br />
=== List of installed packages ===<br />
<br />
Keeping a list of all explicitly installed packages can be useful to back up a system or to quicken the installation of a new one:<br />
<br />
$ pacman -Qqe > pkglist.txt<br />
<br />
{{Note|<br />
* With option {{ic|-t}}, packages already required by other explicitly installed packages are not listed. If reinstalling from this list, they will still be installed, but as dependencies only.<br />
* With option {{ic|-n}}, foreign packages (e.g. from the [[AUR]]) are omitted from the list.<br />
* Use {{ic|comm -13 <(pacman -Qqdt {{!}} sort) <(pacman -Qqdtt {{!}} sort) > optdeplist.txt}} to also create a list of the installed optional dependencies which can be reinstalled with {{ic|--asdeps}}.<br />
* Use {{ic|pacman -Qqem > foreignpkglist.txt}} to create the list of AUR and other foreign packages that have been explicitly installed.}}<br />
<br />
To keep an up-to-date list of explicitly installed packages (e.g. in combination with a versioned {{ic|/etc/}}), you can set up a [[Pacman#Hooks|hook]]. Example:<br />
<br />
[Trigger]<br />
Operation = Install<br />
Operation = Remove<br />
Type = Package<br />
Target = *<br />
<br />
[Action]<br />
When = PostTransaction<br />
Exec = /bin/sh -c '/usr/bin/pacman -Qqe > /etc/pkglist.txt'<br />
<br />
=== Install packages from a list ===<br />
<br />
To install packages from a previously saved list of packages, while not reinstalling previously installed packages that are already up-to-date, run:<br />
<br />
# pacman -S --needed - < pkglist.txt<br />
<br />
However, the list will likely contain foreign packages, such as those from the AUR or ones installed locally. To filter the foreign packages out of the list, the previous command line can be refined as follows:<br />
<br />
# pacman -S --needed $(comm -12 <(pacman -Slq | sort) <(sort pkglist.txt))<br />
<br />
Finally, to make sure the installed packages of your system match the list, remove all the packages that are not mentioned in it:<br />
<br />
# pacman -Rsu $(comm -23 <(pacman -Qq | sort) <(sort pkglist.txt))<br />
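The two {{ic|comm}} invocations above partition package lists by set membership. A self-contained sketch with mock, hypothetical package names shows which packages each command selects:

```shell
cd "$(mktemp -d)"

# Mock inputs (all sorted, as comm requires):
printf 'bash\nlinux\nnano\nvim\n' > available.txt          # stands in for: pacman -Slq | sort
printf 'bash\nforeign-tool\nlinux\nvim\n' > installed.txt  # stands in for: pacman -Qq | sort
printf 'bash\nlinux\nnano\n' > pkglist.txt                 # the saved list, sorted

# comm -12: lines common to both files -> packages in the repos AND in the list
comm -12 available.txt pkglist.txt    # bash, linux, nano

# comm -23: lines only in the first file -> installed packages absent from the list
comm -23 installed.txt pkglist.txt    # foreign-tool, vim
```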
<br />
{{Tip|These tasks can be automated. See {{AUR|bacpac}}, {{AUR|packup}}, {{AUR|pacmanity}}, and {{AUR|pug}} for examples.}}<br />
<br />
=== Listing all changed files from packages ===<br />
<br />
If you suspect file corruption (e.g. by software/hardware failure), but are unsure which files were corrupted, you might want to compare them with the hash sums in the packages. This can be done with {{Pkg|pacutils}}:<br />
<br />
# paccheck --md5sum --quiet<br />
<br />
For recovery of the database see [[#Restore pacman's local database]]. The {{ic|mtree}} files can also be [[#Viewing a single file inside a .pkg file|extracted as {{ic|.MTREE}} from the respective package files]].<br />
<br />
{{Note|This should '''not''' be used as is when suspecting malicious changes! In this case security precautions such as using a live medium and an independent source for the hash sums are advised.}}<br />
<br />
=== Reinstalling all packages ===<br />
<br />
To reinstall all native packages, use:<br />
<br />
# pacman -Qqn | pacman -S -<br />
<br />
Foreign (AUR) packages must be reinstalled separately; you can list them with {{ic|pacman -Qqm}}.<br />
<br />
Pacman preserves the [[installation reason]] by default.<br />
<br />
{{Warning|To force all packages to be overwritten, use {{ic|1=--overwrite=*}}, though this should be an absolute last resort. See [[System maintenance#Avoid certain pacman commands]].}}<br />
<br />
=== Restore pacman's local database ===<br />
<br />
See [[pacman/Restore local database]].<br />
<br />
=== Recovering a USB key from existing install ===<br />
<br />
If you have Arch installed on a USB key and manage to mess it up (e.g. by removing it while it is still being written to), it is possible to re-install all the packages and hopefully get it back up and working again (assuming the USB key is mounted at {{ic|/newarch}}):<br />
<br />
# pacman -S $(pacman -Qq --dbpath /newarch/var/lib/pacman) --root /newarch --dbpath /newarch/var/lib/pacman<br />
<br />
=== Viewing a single file inside a .pkg file ===<br />
<br />
For example, if you want to see the contents of {{ic|/etc/systemd/logind.conf}} supplied within the {{Pkg|systemd}} package:<br />
<br />
$ bsdtar -xOf /var/cache/pacman/pkg/systemd-204-3-x86_64.pkg.tar.xz etc/systemd/logind.conf<br />
<br />
Or you can use {{Pkg|vim}} to browse the archive:<br />
<br />
$ vim /var/cache/pacman/pkg/systemd-204-3-x86_64.pkg.tar.xz<br />
<br />
=== Find applications that use libraries from older packages ===<br />
<br />
Already running processes do not automatically notice changes caused by updates. Instead, they continue using old library versions. That may be undesirable, due to potential issues related to security vulnerabilities or other bugs, and version incompatibility.<br />
<br />
Processes depending on updated libraries may be found using either {{pkg|htop}}, which highlights the names of the affected programs, or with a snippet based on {{pkg|lsof}}, which also prints the names of the libraries:<br />
<br />
# lsof +c 0 | grep -w DEL | awk '1 { print $1 ": " $NF }' | sort -u<br />
<br />
This solution will only detect files that are normally kept open by running processes, which basically limits it to shared libraries ({{ic|.so}} files). It may miss some dependencies, such as those of Java or Python applications.<br />
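To see what the snippet extracts, here is the same {{ic|grep}}/{{ic|awk}} pipeline run over two mock {{ic|lsof}} lines (the process and library names are made up):

```shell
# Two fabricated lsof lines: one process holds a deleted (DEL) library mapping
mock='firefox 1234 user DEL REG 8,1 /usr/lib/libssl.so.1.1
bash 5678 user txt REG 8,1 /usr/bin/bash'

# Keep only DEL entries, print "process: path", then de-duplicate
printf '%s\n' "$mock" | grep -w DEL | awk '{ print $1 ": " $NF }' | sort -u
# prints: firefox: /usr/lib/libssl.so.1.1
```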
<br />
=== Installing only content in required languages ===<br />
<br />
Many packages install documentation and translations in several languages. Some programs, such as {{AUR|localepurge}}, are designed to remove such unnecessary files; ''localepurge'' runs after a package is installed to delete the unneeded locale files. A more direct approach is the {{ic|NoExtract}} directive in {{ic|pacman.conf}}, which prevents these files from ever being installed.<br />
<br />
{{Warning|1=Some users noted that removing locales has resulted in [[Special:Permalink/460285#Dangerous NoExtract example|unintended consequences]], even under [https://bbs.archlinux.org/viewtopic.php?id=250846 Xorg].}}<br />
<br />
The example below installs only the English (US) files, or none at all:<br />
<br />
{{hc|/etc/pacman.conf|2=<br />
NoExtract = usr/share/help/* !usr/share/help/C/*<br />
NoExtract = usr/share/gtk-doc/html/*<br />
NoExtract = usr/share/locale/* usr/share/X11/locale/*/* usr/share/i18n/locales/* opt/google/chrome/locales/* !usr/share/X11/locale/C/*<br />
NoExtract = !*locale*/en*/* !usr/share/*locale*/locale.*<br />
NoExtract = !usr/share/*locales/en_?? !usr/share/*locales/i18n* !usr/share/*locales/iso*<br />
NoExtract = usr/share/i18n/charmaps/* !usr/share/i18n/charmaps/UTF-8.gz<br />
NoExtract = !usr/share/*locales/trans*<br />
NoExtract = usr/share/man/* !usr/share/man/man*<br />
NoExtract = usr/share/vim/vim*/lang/*<br />
NoExtract = usr/lib/libreoffice/help/en-US/*<br />
NoExtract = usr/share/kbd/locale/*<br />
NoExtract = usr/share/*/translations/*.qm usr/share/*/nls/*.qm usr/share/qt/translations/*.pak !*/en-US.pak # Qt apps<br />
NoExtract = usr/share/*/locales/*.pak opt/*/locales/*.pak usr/lib/*/locales/*.pak !*/en-US.pak # Electron apps<br />
NoExtract = opt/onlyoffice/desktopeditors/dictionaries/* !opt/onlyoffice/desktopeditors/dictionaries/en_US/*<br />
NoExtract = opt/onlyoffice/desktopeditors/editors/web-apps/apps/*/main/locale/* !*/en.json<br />
NoExtract = opt/onlyoffice/desktopeditors/editors/web-apps/apps/*/main/resources/help/* !*/help/en/*<br />
NoExtract = opt/onlyoffice/desktopeditors/converter/empty/*/*<br />
NoExtract = usr/share/ibus/dicts/emoji-*.dict !usr/share/ibus/dicts/emoji-en.dict<br />
}}<br />
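{{ic|NoExtract}} patterns are matched in order, and a later matching pattern overrides an earlier one, with a leading {{ic|!}} re-including a file. The following sketch models that behavior in plain shell; it is a simplified illustration, not pacman's actual matcher, and the paths are examples:

```shell
# Simplified model of NoExtract: walk the patterns in order and let the
# last matching one decide; '!' patterns re-include a previously excluded path.
noextract_decision() {
    path=$1; shift
    decision=extract
    for pat in "$@"; do
        case $pat in
            !*) case $path in ${pat#!}) decision=extract ;; esac ;;
            *)  case $path in $pat)     decision=skip    ;; esac ;;
        esac
    done
    echo "$decision"
}

noextract_decision usr/share/locale/de/foo.mo \
    'usr/share/locale/*' '!*locale*/en*/*'      # prints: skip
noextract_decision usr/share/locale/en_US/foo.mo \
    'usr/share/locale/*' '!*locale*/en*/*'      # prints: extract
```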
<br />
== Performance ==<br />
<br />
=== Download speeds ===<br />
<br />
When downloading packages, ''pacman'' uses the mirrors in the order they appear in {{ic|/etc/pacman.d/mirrorlist}}. However, the mirror at the top of the list may not be the fastest for you; to select a faster mirror, see [[Mirrors]].<br />
<br />
Pacman's speed in downloading packages can also be improved by using a different application to download packages, instead of ''pacman''<nowiki/>'s built-in file downloader, or by [[pacman#Enabling parallel downloads|enabling parallel downloads]].<br />
<br />
In all cases, make sure you have the latest ''pacman'' before making any modifications:<br />
<br />
# pacman -Syu<br />
<br />
==== Powerpill ====<br />
<br />
[[Powerpill]] is a ''pacman'' wrapper that uses parallel and segmented downloading to try to speed up downloads for ''pacman''.<br />
<br />
==== wget ====<br />
<br />
''wget'' is also very handy if you need more powerful proxy settings than ''pacman''<nowiki/>'s built-in capabilities.<br />
<br />
To use {{ic|wget}}, first [[install]] the {{Pkg|wget}} package then modify {{ic|/etc/pacman.conf}} by uncommenting the following line in the {{ic|[options]}} section:<br />
<br />
XferCommand = /usr/bin/wget --passive-ftp --show-progress -c -q -N %u<br />
<br />
Instead of uncommenting the {{ic|wget}} parameters in {{ic|/etc/pacman.conf}}, you can also modify the {{ic|wget}} configuration file directly (the system-wide file is {{ic|/etc/wgetrc}}, per user files are {{ic|$HOME/.wgetrc}}).<br />
<br />
==== aria2 ====<br />
<br />
[[aria2]] is a lightweight download utility with support for resumable and segmented HTTP/HTTPS and FTP downloads. aria2 allows for multiple and simultaneous HTTP/HTTPS and FTP connections to an Arch mirror, which should result in an increase in download speeds for both file and package retrieval.<br />
<br />
{{Note|Using aria2c in ''pacman''<nowiki/>'s XferCommand will '''not''' result in parallel downloads of multiple packages. Pacman invokes the XferCommand with a single package at a time and waits for it to complete before invoking the next. To download multiple packages in parallel, see [[Powerpill]].}}<br />
<br />
Install {{Pkg|aria2}}, then edit {{ic|/etc/pacman.conf}} by adding the following line to the {{ic|[options]}} section:<br />
<br />
XferCommand = /usr/bin/aria2c --allow-overwrite=true --continue=true --file-allocation=none --log-level=error --max-tries=2 --max-connection-per-server=2 --max-file-not-found=5 --min-split-size=5M --no-conf --remote-time=true --summary-interval=60 --timeout=5 --dir=/ --out %o %u<br />
<br />
{{Tip|1=[https://bbs.archlinux.org/viewtopic.php?pid=1491879#p1491879 This alternative configuration for using pacman with aria2] tries to simplify configuration and adds more configuration options.}}<br />
<br />
See {{man|1|aria2c|OPTIONS}} for used aria2c options.<br />
<br />
* {{ic|-d, --dir}}: The directory to store the downloaded file(s) as specified by ''pacman''.<br />
* {{ic|-o, --out}}: The output file name(s) of the downloaded file(s). <br />
* {{ic|%o}}: Variable which represents the local filename(s) as specified by ''pacman''.<br />
* {{ic|%u}}: Variable which represents the download URL as specified by ''pacman''.<br />
<br />
==== Other applications ====<br />
<br />
There are other download applications that you can use with ''pacman''. Here they are, with their associated {{ic|XferCommand}} settings:<br />
<br />
* {{ic|snarf}}: {{ic|1=XferCommand = /usr/bin/snarf -N %u}}<br />
* {{ic|lftp}}: {{ic|1=XferCommand = /usr/bin/lftp -c pget %u}}<br />
* {{ic|axel}}: {{ic|1=XferCommand = /usr/bin/axel -n 2 -v -a -o %o %u}}<br />
* {{ic|hget}}: {{ic|1=XferCommand = /usr/bin/hget %u -n 2 -skip-tls false}} (please read the [https://github.com/huydx/hget documentation on the GitHub project page] for more info)<br />
* {{ic|saldl}}: {{ic|1=XferCommand = /usr/bin/saldl -c6 -l4 -s2m -o %o %u}} (please read the [https://saldl.github.io documentation on the project page] for more info)<br />
<br />
== Utilities ==<br />
<br />
* {{App|Lostfiles|Script that identifies files not owned by any package.|https://github.com/graysky2/lostfiles|{{Pkg|lostfiles}}}}<br />
* {{App|pacutils|Helper library for libalpm based programs.|https://github.com/andrewgregory/pacutils|{{Pkg|pacutils}}}}<br />
* {{App|[[pkgfile]]|Tool that finds what package owns a file.|https://github.com/falconindy/pkgfile|{{Pkg|pkgfile}}}}<br />
* {{App|pkgtools|Collection of scripts for Arch Linux packages.|https://github.com/Daenyth/pkgtools|{{AUR|pkgtools}}}}<br />
* {{App|pkgtop|Interactive package manager and resource monitor designed for GNU/Linux.|https://github.com/orhun/pkgtop|{{AUR|pkgtop-git}}}}<br />
* {{App|[[Powerpill]]|Uses parallel and segmented downloading through [[aria2]] and [[Reflector]] to try to speed up downloads for ''pacman''.|https://xyne.dev/projects/powerpill/|{{AUR|powerpill}}}}<br />
* {{App|repoctl|Tool to help manage local repositories.|https://github.com/cassava/repoctl|{{AUR|repoctl}}}}<br />
* {{App|repose|An Arch Linux repository building tool.|https://github.com/vodik/repose|{{Pkg|repose}}}}<br />
* {{App|[[Snapper#Wrapping_pacman_transactions_in_snapshots|snap-pac]]|Make ''pacman'' automatically use snapper to create pre/post snapshots like openSUSE's YaST.|https://github.com/wesbarnett/snap-pac|{{Pkg|snap-pac}}}}<br />
* {{App|vrms-arch|A virtual Richard M. Stallman to tell you which non-free packages are installed.|https://github.com/orospakr/vrms-arch|{{AUR|vrms-arch-git}}}}<br />
<br />
=== Graphical ===<br />
<br />
{{Warning|PackageKit opens up system permissions by default, and is otherwise not recommended for general usage. See {{Bug|50459}} and {{Bug|57943}}.}}<br />
<br />
* {{App|Apper|Qt 5 application and package manager using PackageKit written in C++. Supports [https://www.freedesktop.org/wiki/Distributions/AppStream/ AppStream metadata].|https://userbase.kde.org/Apper|{{Pkg|apper}}}}<br />
* {{App|Deepin App Store|Third party app store for DDE built with DTK, using PackageKit. Supports [https://www.freedesktop.org/wiki/Distributions/AppStream/ AppStream metadata].|https://github.com/dekzi/dde-store|{{Pkg|deepin-store}}}}<br />
* {{App|Discover|Qt 5 application manager using PackageKit written in C++/QML. Supports [https://www.freedesktop.org/wiki/Distributions/AppStream/ AppStream metadata], [[Flatpak]] and [[fwupd|firmware updates]]. |https://userbase.kde.org/Discover|{{Pkg|discover}}}}<br />
* {{App|GNOME PackageKit|GTK 3 package manager using PackageKit written in C.|https://freedesktop.org/software/PackageKit/|{{Pkg|gnome-packagekit}}}}<br />
* {{App|GNOME Software|GTK 3 application manager using PackageKit written in C. Supports [https://www.freedesktop.org/wiki/Distributions/AppStream/ AppStream metadata], [[Flatpak]] and [[fwupd|firmware updates]]. |https://wiki.gnome.org/Apps/Software|{{Pkg|gnome-software}}}}<br />
* {{App|pcurses|Curses TUI ''pacman'' wrapper written in C++.|https://github.com/schuay/pcurses|{{Pkg|pcurses}}}}<br />
* {{App|tkPacman|Tk pacman wrapper written in Tcl.|https://sourceforge.net/projects/tkpacman|{{AUR|tkpacman}}}}</div>Cmsiglerhttps://wiki.archlinux.org/index.php?title=User:Cmsigler/Wireguard_Configuration_Guide&diff=723533User:Cmsigler/Wireguard Configuration Guide2022-03-19T12:58:25Z<p>Cmsigler: Fix typo in IPv6 allow-all address specification</p>
<hr />
<div>My Personal Step-by-step Guide to Wireguard Setup, Configuration and Operation<br />
<br />
CMS, 2022/03/14<br />
<br />
<u>Note</u>:<br />
* These procedures have been developed and deployed on an Arch Linux installation. Other distributions and environments will require modifications to the steps below. YMMV<br />
* For information on WireGuard under Arch, see the [[WireGuard|Arch Linux WireGuard page]].<br />
<br />
== Nomenclature ==<br />
<br />
* Gateway peer: Wireguard "server" peer connected to public Internet<br />
* VPN peer: Wireguard "client" peer; may be located behind, e.g., a NAT router<br />
<br />
== Initial Setup ==<br />
<br />
=== Requirements ===<br />
<br />
* Install and use a kernel with CONFIG_WIREGUARD enabled<br />
* Install wireguard-tools<br />
<br />
=== Pre-configuration ===<br />
<br />
* Generate keys for each peer [gateway = Gateway peer; vpn = VPN peer]<br />
<br />
$ cd ~/wireguard_config<br />
$ (umask 0077; wg genkey > gateway.key)<br />
$ wg pubkey < gateway.key > gateway.pub<br />
$ (umask 0077; wg genkey > vpn.key)<br />
$ wg pubkey < vpn.key > vpn.pub<br />
<br />
* Optional: Generate pre-shared keys for each peer-to-peer link pair<br />
<br />
$ (umask 0077; wg genpsk > gateway-vpn.psk)<br />
<br />
* Optional: On the gateway peer, set up a DNS server for the WireGuard peers, using dnsmasq as the server<br />
** Install dnsmasq<br />
** Edit /etc/dnsmasq.conf<br />
*** Uncomment domain-needed, bogus-priv, bind-interfaces<br />
*** Set "interface=wg0"<br />
*** Set "listen-address=::1,127.0.0.1,10.0.0.1,fd89:abc1:def2:1::1"<br />
*** Optional: Set "cache-size=1000"<br />
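The edits above correspond to a {{ic|/etc/dnsmasq.conf}} fragment along these lines (the listen addresses match the example VPN network used below):

```
domain-needed
bogus-priv
bind-interfaces
interface=wg0
listen-address=::1,127.0.0.1,10.0.0.1,fd89:abc1:def2:1::1
cache-size=1000
```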
<br />
== Configuration for operation via wg-quick ==<br />
<br />
Example -- Wireguard VPN gateway:<br />
<br />
=== Wireguard configuration ===<br />
<br />
On gateway peer:<br />
{{hc|/etc/wireguard/wg0.conf|<nowiki>[Interface]<br />
Address = 10.0.0.1/24, fd89:abc1:def2:1::1/64<br />
ListenPort = 51871<br />
PrivateKey = # GATEWAY_PEER_PRIVATE_KEY<br />
<br />
[Peer]<br />
PublicKey = # VPN_PEER_PUBLIC_KEY<br />
PresharedKey = # GATEWAY_PEER-VPN_PEER-PRESHARED_KEY<br />
AllowedIPs = 10.0.0.2/32, fd89:abc1:def2:1::2/128</nowiki>}}<br />
<br />
On VPN peer:<br />
{{hc|/etc/wireguard/wg0.conf|<nowiki>[Interface]<br />
Address = 10.0.0.2/32, fd89:abc1:def2:1::2/128<br />
ListenPort = 51902<br />
PrivateKey = # VPN_PEER_PRIVATE_KEY<br />
DNS = 10.0.0.1<br />
<br />
[Peer]<br />
PublicKey = # GATEWAY_PEER_PUBLIC_KEY<br />
PresharedKey = # GATEWAY_PEER-VPN_PEER-PRESHARED_KEY<br />
AllowedIPs = 0.0.0.0/0, ::/0<br />
Endpoint = 198.51.100.49:51871</nowiki>}}<br />
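With both configuration files in place, the tunnel can be brought up on each peer with ''wg-quick'', and started persistently via the corresponding systemd unit:

```
# wg-quick up wg0
# wg show
# systemctl enable --now wg-quick@wg0.service
```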
<br />
=== Firewall/filtering configuration (using nftables) ===<br />
<br />
On Gateway peer:<br />
<br />
* Input filter<br />
{{bc|<nowiki>define upstream-if = ens0<br />
define vpn-if = wg0<br />
define mgmt-host = 203.0.113.51<br />
define ssh-port = 22<br />
define vpn-port = 51871<br />
#<br />
table inet inet-local-table {<br />
chain inet-local-input {<br />
type filter hook input priority filter<br />
policy drop<br />
# Accept localhost traffic<br />
iif lo accept<br />
# Bad TCP --> reject network scanning<br />
iif $upstream-if tcp flags & (fin|syn) == (fin|syn) counter drop<br />
iif $upstream-if tcp flags & (syn|rst) == (syn|rst) counter drop<br />
iif $upstream-if tcp flags & (fin|syn|rst|psh|ack|urg) == 0 counter drop<br />
iif $upstream-if tcp flags & (fin|syn|rst|psh|ack|urg) == (fin|psh|urg) counter drop<br />
# Accept Wireguard inbound UDP traffic from peer to VPN port<br />
iif $upstream-if udp dport $vpn-port accept<br />
# Accept ICMP from Wireguard peers<br />
iifname $vpn-if ip protocol icmp limit rate 5/second accept<br />
iifname $vpn-if meta l4proto ipv6-icmp limit rate 5/second accept<br />
# Allow DNS from Wireguard peers<br />
iifname $vpn-if udp dport 53 accept<br />
iifname $vpn-if tcp dport 53 accept<br />
# Remaining input from VPN interface to VPN server (local) prohibited<br />
# -- default policy drop<br />
iifname $vpn-if counter drop<br />
# Drop invalid (untracked?) packets<br />
iif $upstream-if ct state invalid counter drop<br />
# Accept established and related tracked connections<br />
iif $upstream-if ct state {established, related} accept<br />
# Allow connection from given mgmt host on given ssh port<br />
iif $upstream-if ip saddr $mgmt-host tcp dport $ssh-port accept<br />
# Limit ICMP packets accepted<br />
iif $upstream-if ip protocol icmp limit rate 5/second accept<br />
iif $upstream-if meta l4proto ipv6-icmp limit rate 5/second accept<br />
# Count traffic dropped by default policy<br />
counter drop<br />
}<br />
}</nowiki>}}<br />
<br />
* Forward filter<br />
{{bc|<nowiki>define upstream-if = ens0<br />
define vpn-if = wg0<br />
#<br />
table inet inet-local-table {<br />
chain inet-local-forward {<br />
type filter hook forward priority filter<br />
policy drop<br />
# Drop IP forward for upstream invalid packets<br />
iif $upstream-if ct state invalid counter drop<br />
# Accept IP forward for upstream established and related tracked connections<br />
iif $upstream-if ct state {established, related} accept<br />
# Accept all VPN traffic to be forwarded upstream<br />
iifname $vpn-if oif $upstream-if accept<br />
# Count traffic dropped by default policy<br />
counter drop<br />
}<br />
}</nowiki>}}<br />
<br />
* Address translation (NAT) filter<br />
{{bc|<nowiki>define upstream-if = ens0<br />
define vpn-if = wg0<br />
#<br />
table inet inet-local-table {<br />
chain inet-local-nat {<br />
type nat hook postrouting priority srcnat<br />
policy accept<br />
# NAT/masquerade all traffic coming from VPN interface, and count<br />
iifname $vpn-if oif $upstream-if meta protocol ip counter masquerade<br />
}<br />
}</nowiki>}}<br />
<br />
=== Packet forwarding configuration ===<br />
<br />
On Gateway peer:<br />
<br />
* sysctl configuration<br />
{{hc|/etc/sysctl.d/30-ipv4_forward.conf|<nowiki>net.ipv4.ip_forward=1<br />
net.ipv4.conf.default.forwarding=1<br />
net.ipv4.conf.all.forwarding=1<br />
net.ipv4.conf.ens0.forwarding=1<br />
net.ipv4.conf.wg0.forwarding=1</nowiki>}}<br />
<br />
{{hc|/etc/sysctl.d/30-ipv6_forward.conf|<nowiki>net.ipv6.conf.default.accept_ra = 2<br />
net.ipv6.conf.all.accept_ra = 2<br />
net.ipv6.conf.ens0.accept_ra = 2<br />
net.ipv6.conf.wg0.accept_ra = 2<br />
net.ipv6.conf.default.forwarding=1<br />
net.ipv6.conf.all.forwarding=1<br />
net.ipv6.conf.ens0.forwarding=1<br />
net.ipv6.conf.wg0.forwarding=1</nowiki>}}<br />
<br />
== Configuration for operation via systemd-networkd ==<br />
<br />
Example -- WireGuard VPN gateway:<br />
<br />
=== /etc/systemd/network configuration ===<br />
<br />
On gateway peer:<br />
{{hc|/etc/systemd/network/99-wg0.netdev|<nowiki>[NetDev]<br />
Name=wg0<br />
Kind=wireguard<br />
Description=WireGuard tunnel wg0<br />
<br />
[WireGuard]<br />
ListenPort=51871<br />
PrivateKey=#GATEWAY_PEER_PRIVATE_KEY<br />
<br />
[WireGuardPeer]<br />
PublicKey=#VPN_PEER_PUBLIC_KEY<br />
PresharedKey=#GATEWAY_PEER-VPN_PEER-PRESHARED_KEY<br />
AllowedIPs=10.0.0.2/32<br />
AllowedIPs=fd89:abc1:def2:1::2/128</nowiki>}}<br />
<br />
On gateway peer:<br />
{{hc|/etc/systemd/network/99-wg0.network|<nowiki>[Match]<br />
Name=wg0<br />
<br />
[Network]<br />
Address=10.0.0.1/24<br />
Address=fd89:abc1:def2:1::1/64<br />
IPForward=yes<br />
IPMasquerade=ipv4<br />
# or<br />
#IPMasquerade=both</nowiki>}}<br />
<br />
On VPN peer:<br />
{{hc|/etc/systemd/network/99-wg0.netdev|<nowiki>[NetDev]<br />
Name=wg0<br />
Kind=wireguard<br />
Description=WireGuard tunnel wg0<br />
<br />
[WireGuard]<br />
ListenPort=51902<br />
PrivateKey=#VPN_PEER_PRIVATE_KEY<br />
FirewallMark=0x89ab<br />
<br />
[WireGuardPeer]<br />
PublicKey=#GATEWAY_PEER_PUBLIC_KEY<br />
PresharedKey=#GATEWAY_PEER-VPN_PEER-PRESHARED_KEY<br />
AllowedIPs=0.0.0.0/0<br />
AllowedIPs=::/0<br />
Endpoint=198.51.100.49:51871</nowiki>}}<br />
<br />
On VPN peer:<br />
{{hc|/etc/systemd/network/50-wg0.network|<nowiki>[Match]<br />
Name=wg0<br />
<br />
[Network]<br />
Address=10.0.0.2/32<br />
Address=fd89:abc1:def2:1::2/128<br />
DNS=10.0.0.1<br />
DNSDefaultRoute=yes<br />
Domains=~.<br />
<br />
[RoutingPolicyRule]<br />
FirewallMark=0x89ab<br />
InvertRule=yes<br />
Table=1000<br />
Priority=10<br />
<br />
[Route]<br />
Gateway=10.0.0.1<br />
GatewayOnLink=yes<br />
Table=1000</nowiki>}}<br />
<br />
=== Firewall/filtering configuration (using nftables) ===<br />
<br />
On gateway peer:<br />
<br />
* Input filter<br />
{{bc|<nowiki>define upstream-if = ens0<br />
define vpn-if = wg0<br />
define mgmt-host = 203.0.113.51<br />
define ssh-port = 22<br />
define vpn-port = 51871<br />
#<br />
table inet inet-local-table {<br />
chain inet-local-input {<br />
type filter hook input priority filter<br />
policy drop<br />
# Accept localhost traffic<br />
iif lo accept<br />
# Bad TCP --> reject network scanning<br />
iif $upstream-if tcp flags & (fin|syn) == (fin|syn) counter drop<br />
iif $upstream-if tcp flags & (syn|rst) == (syn|rst) counter drop<br />
iif $upstream-if tcp flags & (fin|syn|rst|psh|ack|urg) == 0 counter drop<br />
iif $upstream-if tcp flags & (fin|syn|rst|psh|ack|urg) == (fin|psh|urg) counter drop<br />
# Accept Wireguard inbound UDP traffic from peer to VPN port<br />
iif $upstream-if udp dport $vpn-port accept<br />
# Accept ICMP from Wireguard peers<br />
iifname $vpn-if ip protocol icmp limit rate 5/second accept<br />
iifname $vpn-if meta l4proto ipv6-icmp limit rate 5/second accept<br />
# Allow DNS from Wireguard peers<br />
iifname $vpn-if udp dport 53 accept<br />
iifname $vpn-if tcp dport 53 accept<br />
# Remaining input from VPN interface to VPN server (local) prohibited<br />
# -- default policy drop<br />
iifname $vpn-if counter drop<br />
# Drop invalid (untracked?) packets<br />
iif $upstream-if ct state invalid counter drop<br />
# Accept established and related tracked connections<br />
iif $upstream-if ct state {established, related} accept<br />
# Allow connection from given mgmt host on given ssh port<br />
iif $upstream-if ip saddr $mgmt-host tcp dport $ssh-port accept<br />
# Limit ICMP packets accepted<br />
iif $upstream-if ip protocol icmp limit rate 5/second accept<br />
iif $upstream-if meta l4proto ipv6-icmp limit rate 5/second accept<br />
# Count traffic dropped by default policy<br />
counter drop<br />
}<br />
}</nowiki>}}<br />
<br />
* Forward filter<br />
{{bc|<nowiki>define upstream-if = ens0<br />
define vpn-if = wg0<br />
#<br />
table inet inet-local-table {<br />
chain inet-local-forward {<br />
type filter hook forward priority filter<br />
policy drop<br />
# Drop IP forward for upstream invalid packets<br />
iif $upstream-if ct state invalid counter drop<br />
# Accept IP forward for upstream established and related tracked connections<br />
iif $upstream-if ct state {established, related} accept<br />
# Accept all VPN traffic to be forwarded upstream<br />
iifname $vpn-if oif $upstream-if accept<br />
# Count traffic dropped by default policy<br />
counter drop<br />
}<br />
}</nowiki>}}<br />
<br />
== Operation of WireGuard link for VPN ==<br />
<br />
=== Manual operation via wg-quick ===<br />
<br />
Bring up the wg0 interface:<br />
<br />
$ sudo wg-quick up wg0<br />
<br />
=== systemd operation via wg-quick ===<br />
<br />
Start the wg-quick@wg0 service; enable it to start at boot:<br />
<br />
$ sudo systemctl start wg-quick\@wg0<br />
$ sudo systemctl enable wg-quick\@wg0<br />
<br />
=== systemd-networkd operation ===<br />
<br />
* Enable and start systemd-resolved on the VPN peer (required by the "DNS=10.0.0.1" line in the [Network] section)<br />
* Restart systemd-networkd<br />
<br />
On gateway peer:<br />
<br />
$ sudo systemctl restart systemd-networkd<br />
<br />
On VPN peer:<br />
<br />
$ sudo systemctl start systemd-resolved<br />
$ sudo systemctl enable systemd-resolved<br />
$ sudo systemctl restart systemd-networkd<br />
<br />
== Testing of VPN connection and operation ==<br />
<br />
=== Read WireGuard connection status on gateway and VPN peer(s) ===<br />
<br />
$ sudo wg<br />
<br />
=== Ping peer(s) ===<br />
<br />
On VPN peer:<br />
<br />
$ ping -4 -n -c 5 10.0.0.1<br />
<br />
On gateway peer:<br />
<br />
$ ping -4 -n -c 5 10.0.0.2<br />
<br />
=== Optional: Persistent keepalive ===<br />
<br />
If pinging the VPN peer from the gateway peer fails, configure persistent keepalive on peers located behind, e.g., a NAT router:<br />
* wg-quick: Add, e.g., "PersistentKeepalive = 15" to [Peer] section of /etc/wireguard/wg0.conf<br />
* systemd-networkd: Add, e.g., "PersistentKeepalive=15" to [WireGuardPeer] section of /etc/systemd/network/99-wg0.netdev<br />
<br />
=== Read packet filter counters ===<br />
<br />
$ sudo nft list ruleset | grep counter<br />
<br />
=== Read packet filter logging ===<br />
<br />
$ journalctl</div>Cmsigler
https://wiki.archlinux.org/index.php?title=User:Cmsigler/Wireguard_Configuration_Guide&diff=723109 User:Cmsigler/Wireguard Configuration Guide 2022-03-16T11:21:55Z<p>Cmsigler: Change boolean value =true to =yes in .network files</p>
<hr />
<div>My Personal Step-by-step Guide to WireGuard Setup, Configuration and Operation<br />
<br />
CMS, 2022/03/14<br />
<br />
<u>Note</u>:<br />
* These procedures have been developed and deployed on an Arch Linux installation. Other distributions and environments will require modifications to the steps below. YMMV<br />
* For information on WireGuard under Arch, see the [[WireGuard|Arch Linux WireGuard page]].<br />
<br />
== Nomenclature ==<br />
<br />
* Gateway peer: WireGuard "server" peer connected to the public Internet<br />
* VPN peer: WireGuard "client" peer; may be located behind, e.g., a NAT router<br />
<br />
== Initial Setup ==<br />
<br />
=== Requirements ===<br />
<br />
* Install and use a kernel built with CONFIG_WIREGUARD enabled<br />
* Install the wireguard-tools package<br />
<br />
=== Pre-configuration ===<br />
<br />
* Generate keys for each peer [gateway = Gateway peer; vpn = VPN peer]<br />
<br />
$ cd ~/wireguard_config<br />
$ (umask 0077; wg genkey > gateway.key)<br />
$ wg pubkey < gateway.key > gateway.pub<br />
$ (umask 0077; wg genkey > vpn.key)<br />
$ wg pubkey < vpn.key > vpn.pub<br />
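<br />
A WireGuard key is 32 random bytes, base64-encoded to 44 characters, so each file above holds a single 45-byte line (44 characters plus a newline). A quick sanity check on the generated files:<br />
<br />
$ wc -c gateway.key gateway.pub vpn.key vpn.pub<br />
<br />
Each file should report 45 bytes.<br />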
<br />
* Optional: Generate pre-shared keys for each peer-to-peer link pair<br />
<br />
$ (umask 0077; wg genpsk > gateway-vpn.psk)<br />
<br />
* Optional: On gateway peer, set up a DNS server for WireGuard peers using dnsmasq<br />
** Install dnsmasq<br />
** Edit /etc/dnsmasq.conf<br />
*** Uncomment domain-needed, bogus-priv, bind-interfaces<br />
*** Set "interface=wg0"<br />
*** Set "listen-address=::1,127.0.0.1,10.0.0.1,fd89:abc1:def2:1::1"<br />
*** Optional: Set "cache-size=1000"<br />
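<br />
Collected into one place, the dnsmasq edits above amount to the following fragment (interface name and listen addresses are this guide's example values):<br />
{{hc|/etc/dnsmasq.conf|<nowiki>domain-needed<br />
bogus-priv<br />
bind-interfaces<br />
interface=wg0<br />
listen-address=::1,127.0.0.1,10.0.0.1,fd89:abc1:def2:1::1<br />
cache-size=1000</nowiki>}}<br />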
<br />
== Configuration for operation via wg-quick ==<br />
<br />
Example -- WireGuard VPN gateway:<br />
<br />
=== WireGuard configuration ===<br />
<br />
On gateway peer:<br />
{{hc|/etc/wireguard/wg0.conf|<nowiki>[Interface]<br />
Address = 10.0.0.1/24, fd89:abc1:def2:1::1/64<br />
ListenPort = 51871<br />
PrivateKey = # GATEWAY_PEER_PRIVATE_KEY<br />
<br />
[Peer]<br />
PublicKey = # VPN_PEER_PUBLIC_KEY<br />
PresharedKey = # GATEWAY_PEER-VPN_PEER-PRESHARED_KEY<br />
AllowedIPs = 10.0.0.2/32, fd89:abc1:def2:1::2/128</nowiki>}}<br />
<br />
On VPN peer:<br />
{{hc|/etc/wireguard/wg0.conf|<nowiki>[Interface]<br />
Address = 10.0.0.2/32, fd89:abc1:def2:1::2/128<br />
ListenPort = 51902<br />
PrivateKey = # VPN_PEER_PRIVATE_KEY<br />
DNS = 10.0.0.1<br />
<br />
[Peer]<br />
PublicKey = # GATEWAY_PEER_PUBLIC_KEY<br />
PresharedKey = # GATEWAY_PEER-VPN_PEER-PRESHARED_KEY<br />
AllowedIPs = 0.0.0.0/0, ::/0<br />
Endpoint = 198.51.100.49:51871</nowiki>}}<br />
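<br />
To splice the generated keys into these placeholders, a small sed substitution per line works; a sketch for the gateway peer, assuming the key files from the pre-configuration step are still in ~/wireguard_config:<br />
<br />
$ cd ~/wireguard_config<br />
$ sudo sed -i "s|^PrivateKey =.*|PrivateKey = $(cat gateway.key)|" /etc/wireguard/wg0.conf<br />
$ sudo sed -i "s|^PublicKey =.*|PublicKey = $(cat vpn.pub)|" /etc/wireguard/wg0.conf<br />
<br />
Using "|" as the sed delimiter avoids clashes with "/" characters in the base64 keys.<br />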
<br />
=== Firewall/filtering configuration (using nftables) ===<br />
<br />
On Gateway peer:<br />
<br />
* Input filter<br />
{{bc|<nowiki>define upstream-if = ens0<br />
define vpn-if = wg0<br />
define mgmt-host = 203.0.113.51<br />
define ssh-port = 22<br />
define vpn-port = 51871<br />
#<br />
table inet inet-local-table {<br />
chain inet-local-input {<br />
type filter hook input priority filter<br />
policy drop<br />
# Accept localhost traffic<br />
iif lo accept<br />
# Bad TCP --> reject network scanning<br />
iif $upstream-if tcp flags & (fin|syn) == (fin|syn) counter drop<br />
iif $upstream-if tcp flags & (syn|rst) == (syn|rst) counter drop<br />
iif $upstream-if tcp flags & (fin|syn|rst|psh|ack|urg) == 0 counter drop<br />
iif $upstream-if tcp flags & (fin|syn|rst|psh|ack|urg) == (fin|psh|urg) counter drop<br />
# Accept Wireguard inbound UDP traffic from peer to VPN port<br />
iif $upstream-if udp dport $vpn-port accept<br />
# Accept ICMP from Wireguard peers<br />
iifname $vpn-if ip protocol icmp limit rate 5/second accept<br />
iifname $vpn-if meta l4proto ipv6-icmp limit rate 5/second accept<br />
# Allow DNS from Wireguard peers<br />
iifname $vpn-if udp dport 53 accept<br />
iifname $vpn-if tcp dport 53 accept<br />
# Remaining input from VPN interface to VPN server (local) prohibited<br />
# -- default policy drop<br />
iifname $vpn-if counter drop<br />
# Drop invalid (untracked?) packets<br />
iif $upstream-if ct state invalid counter drop<br />
# Accept established and related tracked connections<br />
iif $upstream-if ct state {established, related} accept<br />
# Allow connection from given mgmt host on given ssh port<br />
iif $upstream-if ip saddr $mgmt-host tcp dport $ssh-port accept<br />
# Limit ICMP packets accepted<br />
iif $upstream-if ip protocol icmp limit rate 5/second accept<br />
iif $upstream-if meta l4proto ipv6-icmp limit rate 5/second accept<br />
# Count traffic dropped by default policy<br />
counter drop<br />
}<br />
}</nowiki>}}<br />
<br />
* Forward filter<br />
{{bc|<nowiki>define upstream-if = ens0<br />
define vpn-if = wg0<br />
#<br />
table inet inet-local-table {<br />
chain inet-local-forward {<br />
type filter hook forward priority filter<br />
policy drop<br />
# Drop IP forward for upstream invalid packets<br />
iif $upstream-if ct state invalid counter drop<br />
# Accept IP forward for upstream established and related tracked connections<br />
iif $upstream-if ct state {established, related} accept<br />
# Accept all VPN traffic to be forwarded upstream<br />
iifname $vpn-if oif $upstream-if accept<br />
# Count traffic dropped by default policy<br />
counter drop<br />
}<br />
}</nowiki>}}<br />
<br />
* Address translation (NAT) filter<br />
{{bc|<nowiki>define upstream-if = ens0<br />
define vpn-if = wg0<br />
#<br />
table inet inet-local-table {<br />
chain inet-local-nat {<br />
type nat hook postrouting priority srcnat<br />
policy accept<br />
# NAT/masquerade all traffic coming from VPN interface, and count<br />
iifname $vpn-if oif $upstream-if meta protocol ip counter masquerade<br />
}<br />
}</nowiki>}}<br />
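<br />
These three snippets can be combined into a single ruleset file and loaded with nft (the path /etc/nftables.conf is an assumption; any file works). A syntax-only check first catches typos without touching the live ruleset:<br />
<br />
$ sudo nft -c -f /etc/nftables.conf<br />
$ sudo nft -f /etc/nftables.conf<br />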
<br />
=== Packet forwarding configuration ===<br />
<br />
On Gateway peer:<br />
<br />
* sysctl configuration<br />
{{hc|/etc/sysctl.d/30-ipv4_forward.conf|<nowiki>net.ipv4.ip_forward=1<br />
net.ipv4.conf.default.forwarding=1<br />
net.ipv4.conf.all.forwarding=1<br />
net.ipv4.conf.ens0.forwarding=1<br />
net.ipv4.conf.wg0.forwarding=1</nowiki>}}<br />
<br />
{{hc|/etc/sysctl.d/30-ipv6_forward.conf|<nowiki>net.ipv6.conf.default.accept_ra = 2<br />
net.ipv6.conf.all.accept_ra = 2<br />
net.ipv6.conf.ens0.accept_ra = 2<br />
net.ipv6.conf.wg0.accept_ra = 2<br />
net.ipv6.conf.default.forwarding=1<br />
net.ipv6.conf.all.forwarding=1<br />
net.ipv6.conf.ens0.forwarding=1<br />
net.ipv6.conf.wg0.forwarding=1</nowiki>}}<br />
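<br />
These drop-ins take effect at boot; to apply them immediately without rebooting, reload all sysctl configuration files:<br />
<br />
$ sudo sysctl --system<br />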
<br />
== Configuration for operation via systemd-networkd ==<br />
<br />
Example -- WireGuard VPN gateway:<br />
<br />
=== /etc/systemd/network configuration ===<br />
<br />
On gateway peer:<br />
{{hc|/etc/systemd/network/99-wg0.netdev|<nowiki>[NetDev]<br />
Name=wg0<br />
Kind=wireguard<br />
Description=WireGuard tunnel wg0<br />
<br />
[WireGuard]<br />
ListenPort=51871<br />
PrivateKey=#GATEWAY_PEER_PRIVATE_KEY<br />
<br />
[WireGuardPeer]<br />
PublicKey=#VPN_PEER_PUBLIC_KEY<br />
PresharedKey=#GATEWAY_PEER-VPN_PEER-PRESHARED_KEY<br />
AllowedIPs=10.0.0.2/32<br />
AllowedIPs=fd89:abc1:def2:1::2/128</nowiki>}}<br />
<br />
On gateway peer:<br />
{{hc|/etc/systemd/network/99-wg0.network|<nowiki>[Match]<br />
Name=wg0<br />
<br />
[Network]<br />
Address=10.0.0.1/24<br />
Address=fd89:abc1:def2:1::1/64<br />
IPForward=yes<br />
IPMasquerade=ipv4<br />
# or<br />
#IPMasquerade=both</nowiki>}}<br />
<br />
On VPN peer:<br />
{{hc|/etc/systemd/network/99-wg0.netdev|<nowiki>[NetDev]<br />
Name=wg0<br />
Kind=wireguard<br />
Description=WireGuard tunnel wg0<br />
<br />
[WireGuard]<br />
ListenPort=51902<br />
PrivateKey=#VPN_PEER_PRIVATE_KEY<br />
FirewallMark=0x89ab<br />
<br />
[WireGuardPeer]<br />
PublicKey=#GATEWAY_PEER_PUBLIC_KEY<br />
PresharedKey=#GATEWAY_PEER-VPN_PEER-PRESHARED_KEY<br />
AllowedIPs=0.0.0.0/0<br />
AllowedIPs=::/0<br />
Endpoint=198.51.100.49:51871</nowiki>}}<br />
<br />
On VPN peer:<br />
{{hc|/etc/systemd/network/50-wg0.network|<nowiki>[Match]<br />
Name=wg0<br />
<br />
[Network]<br />
Address=10.0.0.2/32<br />
Address=fd89:abc1:def2:1::2/128<br />
DNS=10.0.0.1<br />
DNSDefaultRoute=yes<br />
Domains=~.<br />
<br />
[RoutingPolicyRule]<br />
FirewallMark=0x89ab<br />
InvertRule=yes<br />
Table=1000<br />
Priority=10<br />
<br />
[Route]<br />
Gateway=10.0.0.1<br />
GatewayOnLink=yes<br />
Table=1000</nowiki>}}<br />
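<br />
The [RoutingPolicyRule] and [Route] sections above reproduce wg-quick's policy-routing trick: packets not carrying the 0x89ab firewall mark (i.e., everything except WireGuard's own encrypted UDP traffic) are looked up in table 1000, whose only route points into the tunnel, while the marked tunnel packets fall through to the main table and reach the real endpoint. Once the link is up, both pieces can be inspected:<br />
<br />
$ ip rule show<br />
$ ip route show table 1000<br />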
<br />
=== Firewall/filtering configuration (using nftables) ===<br />
<br />
On gateway peer:<br />
<br />
* Input filter<br />
{{bc|<nowiki>define upstream-if = ens0<br />
define vpn-if = wg0<br />
define mgmt-host = 203.0.113.51<br />
define ssh-port = 22<br />
define vpn-port = 51871<br />
#<br />
table inet inet-local-table {<br />
chain inet-local-input {<br />
type filter hook input priority filter<br />
policy drop<br />
# Accept localhost traffic<br />
iif lo accept<br />
# Bad TCP --> reject network scanning<br />
iif $upstream-if tcp flags & (fin|syn) == (fin|syn) counter drop<br />
iif $upstream-if tcp flags & (syn|rst) == (syn|rst) counter drop<br />
iif $upstream-if tcp flags & (fin|syn|rst|psh|ack|urg) == 0 counter drop<br />
iif $upstream-if tcp flags & (fin|syn|rst|psh|ack|urg) == (fin|psh|urg) counter drop<br />
# Accept Wireguard inbound UDP traffic from peer to VPN port<br />
iif $upstream-if udp dport $vpn-port accept<br />
# Accept ICMP from Wireguard peers<br />
iifname $vpn-if ip protocol icmp limit rate 5/second accept<br />
iifname $vpn-if meta l4proto ipv6-icmp limit rate 5/second accept<br />
# Allow DNS from Wireguard peers<br />
iifname $vpn-if udp dport 53 accept<br />
iifname $vpn-if tcp dport 53 accept<br />
# Remaining input from VPN interface to VPN server (local) prohibited<br />
# -- default policy drop<br />
iifname $vpn-if counter drop<br />
# Drop invalid (untracked?) packets<br />
iif $upstream-if ct state invalid counter drop<br />
# Accept established and related tracked connections<br />
iif $upstream-if ct state {established, related} accept<br />
# Allow connection from given mgmt host on given ssh port<br />
iif $upstream-if ip saddr $mgmt-host tcp dport $ssh-port accept<br />
# Limit ICMP packets accepted<br />
iif $upstream-if ip protocol icmp limit rate 5/second accept<br />
iif $upstream-if meta l4proto ipv6-icmp limit rate 5/second accept<br />
# Count traffic dropped by default policy<br />
counter drop<br />
}<br />
}</nowiki>}}<br />
<br />
* Forward filter<br />
{{bc|<nowiki>define upstream-if = ens0<br />
define vpn-if = wg0<br />
#<br />
table inet inet-local-table {<br />
chain inet-local-forward {<br />
type filter hook forward priority filter<br />
policy drop<br />
# Drop IP forward for upstream invalid packets<br />
iif $upstream-if ct state invalid counter drop<br />
# Accept IP forward for upstream established and related tracked connections<br />
iif $upstream-if ct state {established, related} accept<br />
# Accept all VPN traffic to be forwarded upstream<br />
iifname $vpn-if oif $upstream-if accept<br />
# Count traffic dropped by default policy<br />
counter drop<br />
}<br />
}</nowiki>}}<br />
<br />
== Operation of WireGuard link for VPN ==<br />
<br />
=== Manual operation via wg-quick ===<br />
<br />
Bring up the wg0 interface:<br />
<br />
$ sudo wg-quick up wg0<br />
<br />
=== systemd operation via wg-quick ===<br />
<br />
Start the wg-quick@wg0 service; enable it to start at boot:<br />
<br />
$ sudo systemctl start wg-quick\@wg0<br />
$ sudo systemctl enable wg-quick\@wg0<br />
<br />
=== systemd-networkd operation ===<br />
<br />
* Enable and start systemd-resolved on the VPN peer (required by the "DNS=10.0.0.1" line in the [Network] section)<br />
* Restart systemd-networkd<br />
<br />
On gateway peer:<br />
<br />
$ sudo systemctl restart systemd-networkd<br />
<br />
On VPN peer:<br />
<br />
$ sudo systemctl start systemd-resolved<br />
$ sudo systemctl enable systemd-resolved<br />
$ sudo systemctl restart systemd-networkd<br />
<br />
== Testing of VPN connection and operation ==<br />
<br />
=== Read WireGuard connection status on gateway and VPN peer(s) ===<br />
<br />
$ sudo wg<br />
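<br />
A healthy link shows a recent handshake and nonzero transfer counters for each peer. Abridged, representative output on the gateway (values illustrative, keys elided):<br />
<br />
interface: wg0<br />
  public key: (GATEWAY_PEER_PUBLIC_KEY)<br />
  listening port: 51871<br />
<br />
peer: (VPN_PEER_PUBLIC_KEY)<br />
  latest handshake: 32 seconds ago<br />
  transfer: 1.15 MiB received, 4.20 MiB sent<br />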
<br />
=== Ping peer(s) ===<br />
<br />
On VPN peer:<br />
<br />
$ ping -4 -n -c 5 10.0.0.1<br />
<br />
On gateway peer:<br />
<br />
$ ping -4 -n -c 5 10.0.0.2<br />
<br />
=== Optional: Persistent keepalive ===<br />
<br />
If pinging the VPN peer from the gateway peer fails, configure persistent keepalive on peers located behind, e.g., a NAT router:<br />
* wg-quick: Add, e.g., "PersistentKeepalive = 15" to [Peer] section of /etc/wireguard/wg0.conf<br />
* systemd-networkd: Add, e.g., "PersistentKeepalive=15" to [WireGuardPeer] section of /etc/systemd/network/99-wg0.netdev<br />
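<br />
For the wg-quick case, the resulting peer section would look like the following (existing settings unchanged; 15 seconds is this guide's example interval):<br />
{{bc|<nowiki>[Peer]<br />
# ...existing PublicKey/PresharedKey/AllowedIPs/Endpoint settings...<br />
PersistentKeepalive = 15</nowiki>}}<br />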
<br />
=== Read packet filter counters ===<br />
<br />
$ sudo nft list ruleset | grep counter<br />
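<br />
Each matching rule reports cumulative totals in the form "counter packets N bytes M"; for example, the NAT rule above would list as something like the following (numbers illustrative; nft expands the defines to literal interface names):<br />
<br />
iifname "wg0" oifname "ens0" meta protocol ip counter packets 104 bytes 23564 masquerade<br />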
<br />
=== Read packet filter logging ===<br />
<br />
$ journalctl</div>Cmsigler
https://wiki.archlinux.org/index.php?title=Arch_Linux_on_a_VPS&diff=723077 Arch Linux on a VPS 2022-03-15T14:12:55Z<p>Cmsigler: Add link to Servercheap OS page which lists Arch</p>
<hr />
<div>[[Category:Installation process]]<br />
[[Category:Virtualization]]<br />
[[ja:Arch Linux VPS]]<br />
{{Related articles start}}<br />
{{Related|Server}}<br />
{{Related articles end}}<br />
From [[Wikipedia:Virtual private server]]:<br />
<br />
:Virtual private server (VPS) is a term used by Internet hosting services to refer to a virtual machine. The term is used for emphasizing that the virtual machine, although running in software on the same physical computer as other customers' virtual machines, is in many respects functionally equivalent to a separate physical computer, is dedicated to the individual customer's needs, has the privacy of a separate physical computer, and can be configured to run server software.<br />
<br />
This article discusses the use of Arch Linux on Virtual Private Servers, and includes some fixes and installation instructions specific to VPSes.<br />
<br />
== Official Arch Linux cloud image ==<br />
<br />
Arch Linux provides an official cloud image as part of the [https://gitlab.archlinux.org/archlinux/arch-boxes arch-boxes project]. The image comes with [[Cloud-init]] preinstalled and should work with most cloud providers.<br />
<br />
The image can be downloaded from the mirrors under the {{ic|images}} directory. Instructions for tested providers are listed below:<br />
<br />
{| class="wikitable"<br />
! Provider !! Locations !! Note<br />
|-<br />
| [https://digitalocean.com Digital Ocean] || Global ||<br />
# Find the cloud image on a mirror, ex: <nowiki>https://mirror.pkgbuild.com/images/latest/Arch-Linux-x86_64-cloudimg-</nowiki>''yyyymmdd.12345''<nowiki>.qcow2</nowiki> (check here for the latest link: https://mirror.pkgbuild.com/images/latest/)<br />
# Add the image as a custom image by [https://www.digitalocean.com/docs/images/custom-images/quickstart/#upload-images importing it]<br />
# [https://www.digitalocean.com/docs/images/custom-images/quickstart/#create-droplets-from-custom-images Create a new VM from the custom image]<br />
# SSH to the VM: {{ic|ssh root@<ip>}}<br />
|-<br />
| [https://www.hetzner.com/cloud Hetzner Cloud] || Nuremberg, Falkenstein (Germany), Helsinki (Finland) ||<br />
# Create a new VM with this user data:{{bc|#cloud-config<br><nowiki>vendor_data: {'enabled': false}</nowiki>}}<sup>The {{ic|vendor_data}} from Hetzner overrides the {{ic|distro}} and sets the default user to {{ic|root}} without setting {{ic|disable_root: false}}, meaning you cannot log in</sup><br />
# Boot the VM in rescue mode<br />
# SSH to the VM and download the cloud image from a mirror, ex: {{ic|curl -O <nowiki>https://mirror.pkgbuild.com/images/v</nowiki>''yyyymmdd.12345''<nowiki>/Arch-Linux-x86_64-cloudimg-</nowiki>''yyyymmdd.12345''<nowiki>.qcow2</nowiki>}}<br />
# Write the image to the disk: {{ic|qemu-img convert -f qcow2 -O raw Arch-Linux-x86_64-cloudimg-''yyyymmdd.12345''.qcow2 /dev/sda}}<br />
# Reboot the VM<br />
# SSH to the VM: {{ic|ssh arch@<ip>}}<br />
|-<br />
| [https://www.linode.com Linode] || [https://www.linode.com/global-infrastructure/ Multiple international locations] ||<br />
# Create a new VM and select Arch as the distribution (to use the Linode-provided image, stop here; otherwise proceed with the rest of the steps)<br />
# Boot the VM in rescue mode<br />
# Connect to the VM via the Lish console and download the basic image from a mirror, ex: {{ic|curl -O <nowiki>https://mirror.pkgbuild.com/images/latest/Arch-Linux-x86_64-basic-</nowiki>''yyyymmdd.0''<nowiki>.qcow2</nowiki>}}<br />
# Install the qemu-utils package: {{ic|apt update && apt install qemu-utils}}<br />
# Write the image to the disk: {{ic|qemu-img convert -f qcow2 -O raw Arch-Linux-x86_64-basic-''yyyymmdd.0''.qcow2 /dev/sda}}<br />
# In the Linode manager, go to the VM's configurations menu and edit the configuration to change the kernel option to "Direct Disk"<br />
# Reboot the VM<br />
# SSH to the VM: {{ic|ssh arch@<ip>}}<br />
|-<br />
| [https://www.proxmox.com/ Proxmox] || N/A ||<br />
# Create a new VM<br />
# Select "Do not use any media" in OS section.<br />
# Remove created hard disk from your VM after VM creation completes.<br />
# Add the downloaded image to your VM using {{ic|qm importdisk}}, ex:<br> {{ic|qm importdisk 100 Arch-Linux-x86_64-cloudimg-20210315.17387.qcow2 local}}<br />
# Add a cloudinit drive and make your configurations in Cloud-Init section.<br />
# Start the VM!<br />
|-<br />
|}<br />
<br />
== Providers that offer Arch Linux ==<br />
<br />
{{Style|Inconsistency, some language issues}}<br />
{{Warning|We cannot vouch for the honesty or quality of any provider. Please conduct due diligence before ordering.}}<br />
{{Note|This list is for providers where Arch Linux can be installed in a supported way. This excludes any container-based hosting such as LXC or Docker as well as OpenVZ.}}<br />
<br />
{| class="wikitable"<br />
! Provider !! Archiso release !! Virtualization !! Locations !! Notes<br />
|-<br />
| [https://hetzner.com/cloud Hetzner] || 2020.06.01 || KVM || Nuremberg, DE; Falkenstein, DE; Helsinki, FI || You cannot choose Arch Linux directly on the order form. Order any other OS first, then go to ISO Images, mount the Arch Linux ISO, reboot the server, and log in to the web console to complete the installation.<br />
|-<br />
| [https://www.linode.com Linode] || [https://www.linode.com/distributions Latest] || KVM || [https://www.linode.com/global-infrastructure/ Multiple international locations] || Linode instances are configured to run Arch's kernel by default. Linode provides custom kernels which can be selected in the manager settings. There are also community-supported kernels in the AUR, such as {{AUR|linux-linode}}.<br />
|-<br />
| [https://www.netcup.eu/ Netcup] || 2020.09.01 || KVM || Germany (DE) || German-language site: [https://www.netcup.de/ Netcup]<br />
|-<br />
| [https://monovm.com MonoVM] || Latest || VMware || USA; Canada; Netherlands; Germany; UK; France; Denmark || VMware-based VPS provider.<br />
|-<br />
| [https://www.ramnode.com/ RamNode] || [https://clientarea.ramnode.com/knowledgebase.php?action=displayarticle&id=48 2016.01.01] || [https://clientarea.ramnode.com/knowledgebase.php?action=displayarticle&id=39 SSD and SSD Cached:] [https://clientarea.ramnode.com/knowledgebase.php?action=displayarticle&id=52 KVM] || [https://clientarea.ramnode.com/knowledgebase.php?action=displayarticle&id=50 Alblasserdam, NL; Atlanta, GA-US; Los Angeles, CA-US; New York, NY-US; Seattle, WA-US] || You can request Host/CPU passthrough with KVM service.[https://clientarea.ramnode.com/knowledgebase.php?action=displayarticle&id=66] Frequent use of discount promotions.[https://twitter.com/search?q=ramnode%20code&src=typd] Arch must be installed manually from an ISO using a VNC viewer.<br />
|-<br />
| [https://www.servercheap.net Server Cheap] || Latest || KVM || Chicago, Illinois, USA || Arch Linux is available on request.[https://servercheap.net/operating-systems.php] Also offers Windows, BSD, and many other Linux distributions.<br />
|-<br />
| [https://www.transip.eu/ TransIP] || Latest || [https://www.transip.eu/vps/vps-technology/ KVM] || Amsterdam, NL || For the latest image, submit a ticket. Also a domain registrar.<br />
|-<br />
| [https://www.vultr.com/ Vultr] || Latest || KVM || [https://www.vultr.com/locations/ Multiple International locations] || When deploying a new server, select the Arch install ISO from the Vultr ISO Library, then run through the standard [[Installation guide|Arch installation guide]].<br />
|-<br />
| [https://www.misaka.io/ Misaka.io / zeptoVM] || Latest || KVM || [https://www.misaka.io/services/mc2 Multiple International locations] || Images are built every 24 hours.<br />
|-<br />
|}<br />
<br />
== Providers with Community provided Arch Linux support ==<br />
<br />
{{Warning|We cannot vouch for the honesty or quality of any provider. Please conduct due diligence before ordering.}}<br />
{{Note|Arch Linux is not officially supported by these providers. The images and scripts listed here are created by the community.}}<br />
<br />
{| class="wikitable"<br />
! Provider !! Installation Type !! Locations !! Notes<br />
|- <br />
| [https://aws.amazon.com/ Amazon Web Services] || [[Arch Linux AMIs for Amazon Web Services|Custom Images]] || Global ||<br />
|-<br />
| [https://digitalocean.com Digital Ocean] || [https://gitlab.archlinux.org/archlinux/arch-boxes#cloud-image Official Arch cloud image], [https://github.com/gh2o/digitalocean-debian-to-arch Conversion Script] or [https://github.com/robsonde/digitalocean_builder Custom Image] || Global || IPv6 does not work with custom images, but works with conversion script<br />
|-<br />
| [https://cloud.google.com/ Google Cloud Platform] || [https://github.com/GoogleCloudPlatform/compute-archlinux-image-builder Custom Image] || Global || <br />
|-<br />
|}<br />
<br />
== Installation ==<br />
<br />
See [[QEMU#Preparing an Arch Linux guest]] for KVM.<br />
<br />
Xen HVM might also work the same way.<br />
<br />
=== OpenVZ ===<br />
<br />
==== Installing the latest Arch Linux on any OpenVZ container provider ====<br />
<br />
{{Warning|Please refer to the warning about the older kernel version and systemd at the top of the page, and note the [[#Preparing the Arch build for use on an OpenVZ 7 container|workaround for OpenVZ 7 below]].}}<br />
<br />
It is possible to directly copy an installation of Arch Linux over the top of a working OpenVZ VPS. This tutorial explains how to create a basic installation of Arch Linux with {{ic|pacstrap}} (as used in a standard install) and then replace the contents of a target VPS with it using [[rsync]].<br />
<br />
This process (with minor modification) also works to migrate existing Arch installations between various environments and has been confirmed to work in migrating from OpenVZ to Xen and from Xen to OpenVZ. For an install to Xen, other hardware-virtualized platforms, or even to physical hardware, extra steps (basically running {{ic|mkinitcpio}} and installing a [[boot loader]]) are needed.<br />
<br />
===== Prerequisites =====<br />
<br />
* A working Arch Linux installation<br />
** To build from other distributions, [[Archbootstrap|arch-bootstrap.sh]] can be used in place of {{ic|pacstrap}}.<br />
* The {{Pkg|arch-install-scripts}}, {{Pkg|rsync}}, and {{Pkg|openssh}} packages from the [[official repositories]]<br />
** SSH is not strictly required, but rsync over SSH is the method used here.<br />
* A VPS running any distribution, with {{ic|rsync}} and a working SSH server<br />
* OpenVZ's serial console feature (usually accessible via your provider's control panel)<br />
** Without this, any network configuration for the target VPS will have to be done immediately after the "Build" step below.<br />
<br />
===== Building a clean Arch Linux installation =====<br />
<br />
As root, build the installation (optionally replacing {{ic|build}} with your preferred target directory):<br />
<br />
# mkdir build<br />
# pacstrap -cd build<br />
<br />
Other tweaks for the {{ic|pacstrap}} command:<br />
<br />
*{{ic|-C custom-pacman-config.conf}} - Use a custom pacman configuration file. By default, {{ic|pacstrap}} builds according to your local pacman.conf.<br />
*{{ic|-G}} - Prevent {{ic|pacstrap}} from copying your system's pacman keyring to the new build. If you use this option, you will need to run {{ic|pacman-key --init}} and {{ic|pacman-key --populate archlinux}} in the [[#Configuration|Configuration]] step to set up the keyring.<br />
*{{ic|-M}} - Prevent {{ic|pacstrap}} from copying your system's pacman mirror list to the new build.<br />
*You can pass a list of packages to {{ic|pacstrap}} to add them to your install, instead of the default {{ic|base}} group. For example: {{ic|pacstrap -cd build base openssh dnsutils gnu-netcat traceroute vim}}<br />
<br />
====== Preparing the Arch build for use on an OpenVZ 7 container ======<br />
<br />
OpenVZ 7 will fail to start a container if some expected network configuration files do not exist. The easiest way to get around this is as follows:<br />
<br />
# Create the OpenVZ 7 container as Debian 8 (Debian 9 would probably work as well).<br />
# Create the required blank network configuration files inside the Arch build, as follows:<br />
# mkdir build/etc/network<br />
# touch build/etc/network/interfaces<br />
# mkdir -p build/etc/resolvconf/resolv.conf.d<br />
# touch build/etc/resolvconf/resolv.conf.d/base<br />
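The file-creation steps above can be rehearsed in a throwaway directory first, to confirm the layout before touching the real build (the temporary directory here stands in for {{ic|build}}):<br />

```shell
# Rehearse the OpenVZ 7 placeholder-file steps in a scratch directory.
build=$(mktemp -d)

# Blank network configuration files that OpenVZ 7 expects to find:
mkdir -p "$build/etc/network"
touch "$build/etc/network/interfaces"
mkdir -p "$build/etc/resolvconf/resolv.conf.d"
touch "$build/etc/resolvconf/resolv.conf.d/base"

# Both files should now exist and be empty.
ls -l "$build/etc/network/interfaces" "$build/etc/resolvconf/resolv.conf.d/base"
```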
<br />
===== Replacing everything on the VPS with the Arch build =====<br />
<br />
Replace all files, directories, etc. on your target VPS with the contents of your {{ic|build}} directory (replacing "YOUR.VPS.IP.ADDRESS" below):<br />
<br />
{{Warning|Be careful with the following command. By design, {{ic|rsync}} is very destructive, especially with any of the {{ic|--delete}} options.}}<br />
<br />
# rsync -axH --numeric-ids --delete-delay -e ssh --stats -P build/ YOUR.VPS.IP.ADDRESS:/<br />
<br />
Explanation of options:<br />
<br />
* {{ic|-a}} - Required. Preserves timestamps, permissions, etc.<br />
* {{ic|--delete}} - Required. Deletes anything in the target that does not exist in the source (supplied here in its {{ic|--delete-delay}} form).<br />
* {{ic|-x}} - Important. Prevents the crossing of filesystem boundaries (other partitions, /dev, etc.) during the copy.<br />
* {{ic|-H}} - Important. Preserves hard links.<br />
* {{ic|--numeric-ids}} - Important. Uses numeric user and group IDs directly instead of matching user and group names, ensuring proper file ownership on the target system.<br />
* {{ic|--delete-delay}} - Recommended. Delays all deletions until the transfer is otherwise complete, which may reduce the risk of a slow transfer locking up the target VPS.<br />
* {{ic|-e ssh}} - Recommended. Runs {{ic|rsync}} over SSH (simpler than setting up an {{ic|rsync}} server).<br />
* {{ic|-P}} - Recommended. Keeps partially transferred files and shows progress during the transfer.<br />
* {{ic|--stats}} - Recommended. Shows transfer statistics at the end.<br />
<br />
===== Configuration =====<br />
<br />
# Reboot the VPS externally (using your provider's control panel, for example).<br />
# Using OpenVZ's serial console feature, configure the [[network]] and [[Installation_guide#Configure_the_system|basic system settings]] (ignoring fstab generation and arch-chroot steps).<br />
#* If you do not have access to the serial console feature, you will need to preconfigure your network settings before synchronizing Arch to the VPS.<br />
#* On some VPS configurations you will not have a gateway to connect to; the following example [[netctl]] configuration handles this setup. It configures static IP addresses and default routes on venet0 and uses Google Public DNS.<br />
{{hc|/etc/netctl/venet|2=<br />
Description='VPS venet connection'<br />
Interface=venet0<br />
Connection=ethernet<br />
<br />
IP=static<br />
Address=('192.0.2.42/32')<br />
Routes=('default')<br />
<br />
IP6=static<br />
Address6=('2001:db8::1234:5678/128')<br />
Routes6=('default')<br />
<br />
DNS=('2001:4860:4860::8888' '2001:4860:4860::8844' '8.8.8.8' '8.8.4.4')<br />
}}</div>Cmsiglerhttps://wiki.archlinux.org/index.php?title=User:Cmsigler/Wireguard_Configuration_Guide&diff=723008User:Cmsigler/Wireguard Configuration Guide2022-03-15T00:27:08Z<p>Cmsigler: Add link to Arch Wiki WireGuard page and reformat at head of page</p>
<hr />
<div>My Personal Step-by-step Guide to Wireguard Setup, Configuration and Operation<br />
<br />
CMS, 2022/03/14<br />
<br />
<u>Note</u>:<br />
* These procedures have been developed and deployed on an Arch Linux installation. Other distributions and environments will require modifications to the steps below. YMMV<br />
* For information on WireGuard under Arch, see the [[WireGuard|Arch Linux WireGuard page]].<br />
<br />
== Nomenclature ==<br />
<br />
* Gateway peer: Wireguard "server" peer connected to public Internet<br />
* VPN peer: Wireguard "client" peer; may be located behind, e.g., a NAT router<br />
<br />
== Initial Setup ==<br />
<br />
=== Requirements ===<br />
<br />
* Install and use a kernel built with CONFIG_WIREGUARD enabled<br />
* Install wireguard-tools<br />
<br />
=== Pre-configuration ===<br />
<br />
* Generate keys for each peer [gateway = Gateway peer; vpn = VPN peer]<br />
<br />
$ cd ~/wireguard_config<br />
$ (umask 0077; wg genkey > gateway.key)<br />
$ wg pubkey < gateway.key > gateway.pub<br />
$ (umask 0077; wg genkey > vpn.key)<br />
$ wg pubkey < vpn.key > vpn.pub<br />
<br />
* Optional: Generate pre-shared keys for each peer-to-peer link pair<br />
<br />
$ (umask 0077; wg genpsk > gateway-vpn.psk)<br />
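The subshell {{ic|(umask 0077; ...)}} pattern used above restricts the key files to owner-only access without changing the invoking shell's umask; the effect can be checked with any plain file:<br />

```shell
# The umask change applies only inside the subshell, so the "key" file
# is created mode 600 (owner read/write) while files created by the
# outer shell keep its normal umask.
dir=$(mktemp -d)
(umask 0077; touch "$dir/secret.key")
touch "$dir/normal.txt"

stat -c %a "$dir/secret.key"   # 600
```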
<br />
* Optional: On gateway peer, set up DNS server for wireguard peers using dnsmasq as server<br />
** Install dnsmasq<br />
** Edit /etc/dnsmasq.conf<br />
*** Uncomment domain-needed, bogus-priv, bind-interfaces<br />
*** Set "interface=wg0"<br />
*** Set "listen-address=::1,127.0.0.1,10.0.0.1,fd89:abc1:def2:1::1"<br />
*** Optional: Set "cache-size=1000"<br />
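Assuming the wg0 addresses used throughout this guide (10.0.0.1 and fd89:abc1:def2:1::1), the edited /etc/dnsmasq.conf would then contain lines like the following sketch:<br />
{{hc|/etc/dnsmasq.conf|<nowiki>domain-needed<br />
bogus-priv<br />
bind-interfaces<br />
interface=wg0<br />
listen-address=::1,127.0.0.1,10.0.0.1,fd89:abc1:def2:1::1<br />
cache-size=1000</nowiki>}}<br />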
<br />
== Configuration for operation via wg-quick ==<br />
<br />
Example -- Wireguard VPN gateway:<br />
<br />
=== Wireguard configuration ===<br />
<br />
On gateway peer:<br />
{{hc|/etc/wireguard/wg0.conf|<nowiki>[Interface]<br />
Address = 10.0.0.1/24, fd89:abc1:def2:1::1/64<br />
ListenPort = 51871<br />
PrivateKey = # GATEWAY_PEER_PRIVATE_KEY<br />
<br />
[Peer]<br />
PublicKey = # VPN_PEER_PUBLIC_KEY<br />
PresharedKey = # GATEWAY_PEER-VPN_PEER-PRESHARED_KEY<br />
AllowedIPs = 10.0.0.2/32, fd89:abc1:def2:1::2/128</nowiki>}}<br />
<br />
On VPN peer:<br />
{{hc|/etc/wireguard/wg0.conf|<nowiki>[Interface]<br />
Address = 10.0.0.2/32, fd89:abc1:def2:1::2/128<br />
ListenPort = 51902<br />
PrivateKey = # VPN_PEER_PRIVATE_KEY<br />
DNS = 10.0.0.1<br />
<br />
[Peer]<br />
PublicKey = # GATEWAY_PEER_PUBLIC_KEY<br />
PresharedKey = # GATEWAY_PEER-VPN_PEER-PRESHARED_KEY<br />
AllowedIPs = 0.0.0.0/0, ::/0<br />
Endpoint = 198.51.100.49:51871</nowiki>}}<br />
<br />
=== Firewall/filtering configuration (using nftables) ===<br />
<br />
On Gateway peer:<br />
<br />
* Input filter<br />
{{bc|<nowiki>define upstream-if = ens0<br />
define vpn-if = wg0<br />
define mgmt-host = 203.0.113.51<br />
define ssh-port = 22<br />
define vpn-port = 51871<br />
#<br />
table inet inet-local-table {<br />
chain inet-local-input {<br />
type filter hook input priority filter<br />
policy drop<br />
# Accept localhost traffic<br />
iif lo accept<br />
# Bad TCP --> reject network scanning<br />
iif $upstream-if tcp flags & (fin|syn) == (fin|syn) counter drop<br />
iif $upstream-if tcp flags & (syn|rst) == (syn|rst) counter drop<br />
iif $upstream-if tcp flags & (fin|syn|rst|psh|ack|urg) == 0 counter drop<br />
iif $upstream-if tcp flags & (fin|syn|rst|psh|ack|urg) == (fin|psh|urg) counter drop<br />
# Accept Wireguard inbound UDP traffic from peer to VPN port<br />
iif $upstream-if udp dport $vpn-port accept<br />
# Accept ICMP from Wireguard peers<br />
iifname $vpn-if ip protocol icmp limit rate 5/second accept<br />
iifname $vpn-if meta l4proto ipv6-icmp limit rate 5/second accept<br />
# Allow DNS from Wireguard peers<br />
iifname $vpn-if udp dport 53 accept<br />
iifname $vpn-if tcp dport 53 accept<br />
# Remaining input from VPN interface to VPN server (local) prohibited<br />
# -- default policy drop<br />
iifname $vpn-if counter drop<br />
# Drop invalid (untracked?) packets<br />
iif $upstream-if ct state invalid counter drop<br />
# Accept established and related tracked connections<br />
iif $upstream-if ct state {established, related} accept<br />
# Allow connection from given mgmt host on given ssh port<br />
iif $upstream-if ip saddr $mgmt-host tcp dport $ssh-port accept<br />
# Limit ICMP packets accepted<br />
iif $upstream-if ip protocol icmp limit rate 5/second accept<br />
iif $upstream-if meta l4proto ipv6-icmp limit rate 5/second accept<br />
# Count traffic dropped by default policy<br />
counter drop<br />
}<br />
}</nowiki>}}<br />
<br />
* Forward filter<br />
{{bc|<nowiki>define upstream-if = ens0<br />
define vpn-if = wg0<br />
#<br />
table inet inet-local-table {<br />
chain inet-local-forward {<br />
type filter hook forward priority filter<br />
policy drop<br />
# Drop IP forward for upstream invalid packets<br />
iif $upstream-if ct state invalid counter drop<br />
# Accept IP forward for upstream established and related tracked connections<br />
iif $upstream-if ct state {established, related} accept<br />
# Accept all VPN traffic to be forwarded upstream<br />
iifname $vpn-if oif $upstream-if accept<br />
# Count traffic dropped by default policy<br />
counter drop<br />
}<br />
}</nowiki>}}<br />
<br />
* Address translation (NAT) filter<br />
{{bc|<nowiki>define upstream-if = ens0<br />
define vpn-if = wg0<br />
#<br />
table inet inet-local-table {<br />
chain inet-local-nat {<br />
type nat hook postrouting priority srcnat<br />
policy accept<br />
# NAT/masquerade IPv4 traffic coming from VPN interface, and count<br />
iifname $vpn-if oif $upstream-if meta protocol ip counter masquerade<br />
}<br />
}</nowiki>}}<br />
<br />
=== Packet forwarding configuration ===<br />
<br />
On Gateway peer:<br />
<br />
* sysctl configuration<br />
{{hc|/etc/sysctl.d/30-ipv4_forward.conf|<nowiki>net.ipv4.ip_forward=1<br />
net.ipv4.conf.default.forwarding=1<br />
net.ipv4.conf.all.forwarding=1<br />
net.ipv4.conf.ens3.forwarding=1<br />
net.ipv4.conf.wg0.forwarding=1</nowiki>}}<br />
<br />
{{hc|/etc/sysctl.d/30-ipv6_forward.conf|<nowiki>net.ipv6.conf.default.accept_ra = 2<br />
net.ipv6.conf.all.accept_ra = 2<br />
net.ipv6.conf.ens3.accept_ra = 2<br />
net.ipv6.conf.wg0.accept_ra = 2<br />
net.ipv6.conf.default.forwarding=1<br />
net.ipv6.conf.all.forwarding=1<br />
net.ipv6.conf.ens3.forwarding=1<br />
net.ipv6.conf.wg0.forwarding=1</nowiki>}}<br />
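Once the sysctl files above have been applied (via {{ic|sysctl --system}} or a reboot), the forwarding switches can be read back directly from /proc to confirm they took effect:<br />

```shell
# Read back the forwarding switches; each prints 1 once the sysctl
# files above have been applied.
cat /proc/sys/net/ipv4/ip_forward
# The IPv6 entry exists only when IPv6 is enabled on the host:
[ -r /proc/sys/net/ipv6/conf/all/forwarding ] \
    && cat /proc/sys/net/ipv6/conf/all/forwarding || true
```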
<br />
== Configuration for operation via systemd-networkd ==<br />
<br />
Example -- Wireguard VPN gateway:<br />
<br />
=== /etc/systemd/network configuration ===<br />
<br />
On gateway peer:<br />
{{hc|/etc/systemd/network/99-wg0.netdev|<nowiki>[NetDev]<br />
Name=wg0<br />
Kind=wireguard<br />
Description=WireGuard tunnel wg0<br />
<br />
[WireGuard]<br />
ListenPort=51871<br />
PrivateKey=#GATEWAY_PEER_PRIVATE_KEY<br />
<br />
[WireGuardPeer]<br />
PublicKey=#VPN_PEER_PUBLIC_KEY<br />
PresharedKey=#GATEWAY_PEER-VPN_PEER-PRESHARED_KEY<br />
AllowedIPs=10.0.0.2/32<br />
AllowedIPs=fd89:abc1:def2:1::2/128</nowiki>}}<br />
<br />
On gateway peer:<br />
{{hc|/etc/systemd/network/99-wg0.network|<nowiki>[Match]<br />
Name=wg0<br />
<br />
[Network]<br />
Address=10.0.0.1/24<br />
Address=fd89:abc1:def2:1::1/64<br />
IPForward=true<br />
IPMasquerade=ipv4<br />
# or<br />
#IPMasquerade=both</nowiki>}}<br />
<br />
On VPN peer:<br />
{{hc|/etc/systemd/network/99-wg0.netdev|<nowiki>[NetDev]<br />
Name=wg0<br />
Kind=wireguard<br />
Description=WireGuard tunnel wg0<br />
<br />
[WireGuard]<br />
ListenPort=51902<br />
PrivateKey=#VPN_PEER_PRIVATE_KEY<br />
FirewallMark=0x89ab<br />
<br />
[WireGuardPeer]<br />
PublicKey=#GATEWAY_PEER_PUBLIC_KEY<br />
PresharedKey=#GATEWAY_PEER-VPN_PEER-PRESHARED_KEY<br />
AllowedIPs=0.0.0.0/0<br />
AllowedIPs=::/0<br />
Endpoint=198.51.100.49:51871</nowiki>}}<br />
<br />
On VPN peer:<br />
{{hc|/etc/systemd/network/50-wg0.network|<nowiki>[Match]<br />
Name=wg0<br />
<br />
[Network]<br />
Address=10.0.0.2/32<br />
Address=fd89:abc1:def2:1::2/128<br />
DNS=10.0.0.1<br />
DNSDefaultRoute=true<br />
Domains=~.<br />
<br />
[RoutingPolicyRule]<br />
FirewallMark=0x89ab<br />
InvertRule=true<br />
Table=1000<br />
Priority=10<br />
<br />
[Route]<br />
Gateway=10.0.0.1<br />
GatewayOnLink=true<br />
Table=1000</nowiki>}}<br />
<br />
=== Firewall/filtering configuration (using nftables) ===<br />
<br />
On gateway peer:<br />
<br />
* Input filter<br />
{{bc|<nowiki>define upstream-if = ens0<br />
define vpn-if = wg0<br />
define mgmt-host = 203.0.113.51<br />
define ssh-port = 22<br />
define vpn-port = 51871<br />
#<br />
table inet inet-local-table {<br />
chain inet-local-input {<br />
type filter hook input priority filter<br />
policy drop<br />
# Accept localhost traffic<br />
iif lo accept<br />
# Bad TCP --> reject network scanning<br />
iif $upstream-if tcp flags & (fin|syn) == (fin|syn) counter drop<br />
iif $upstream-if tcp flags & (syn|rst) == (syn|rst) counter drop<br />
iif $upstream-if tcp flags & (fin|syn|rst|psh|ack|urg) == 0 counter drop<br />
iif $upstream-if tcp flags & (fin|syn|rst|psh|ack|urg) == (fin|psh|urg) counter drop<br />
# Accept Wireguard inbound UDP traffic from peer to VPN port<br />
iif $upstream-if udp dport $vpn-port accept<br />
# Accept ICMP from Wireguard peers<br />
iifname $vpn-if ip protocol icmp limit rate 5/second accept<br />
iifname $vpn-if meta l4proto ipv6-icmp limit rate 5/second accept<br />
# Allow DNS from Wireguard peers<br />
iifname $vpn-if udp dport 53 accept<br />
iifname $vpn-if tcp dport 53 accept<br />
# Remaining input from VPN interface to VPN server (local) prohibited<br />
# -- default policy drop<br />
iifname $vpn-if counter drop<br />
# Drop invalid (untracked?) packets<br />
iif $upstream-if ct state invalid counter drop<br />
# Accept established and related tracked connections<br />
iif $upstream-if ct state {established, related} accept<br />
# Allow connection from given mgmt host on given ssh port<br />
iif $upstream-if ip saddr $mgmt-host tcp dport $ssh-port accept<br />
# Limit ICMP packets accepted<br />
iif $upstream-if ip protocol icmp limit rate 5/second accept<br />
iif $upstream-if meta l4proto ipv6-icmp limit rate 5/second accept<br />
# Count traffic dropped by default policy<br />
counter drop<br />
}<br />
}</nowiki>}}<br />
<br />
* Forward filter<br />
{{bc|<nowiki>define upstream-if = ens0<br />
define vpn-if = wg0<br />
#<br />
table inet inet-local-table {<br />
chain inet-local-forward {<br />
type filter hook forward priority filter<br />
policy drop<br />
# Drop IP forward for upstream invalid packets<br />
iif $upstream-if ct state invalid counter drop<br />
# Accept IP forward for upstream established and related tracked connections<br />
iif $upstream-if ct state {established, related} accept<br />
# Accept all VPN traffic to be forwarded upstream<br />
iifname $vpn-if oif $upstream-if accept<br />
# Count traffic dropped by default policy<br />
counter drop<br />
}<br />
}</nowiki>}}<br />
<br />
== Operation of Wireguard link for VPN ==<br />
<br />
=== Manual operation via wg-quick ===<br />
<br />
Bring up the wg0 interface:<br />
<br />
$ sudo wg-quick up wg0<br />
<br />
=== systemd operation via wg-quick ===<br />
<br />
Start the wg-quick@wg0 service and enable it to start at boot:<br />
<br />
$ sudo systemctl start wg-quick@wg0<br />
$ sudo systemctl enable wg-quick@wg0<br />
<br />
=== systemd-networkd operation ===<br />
<br />
* Enable and start systemd-resolved on VPN peer (required by the "DNS=10.0.0.1" line in the [Network] section)<br />
* Restart systemd-networkd<br />
<br />
On gateway peer:<br />
<br />
$ sudo systemctl restart systemd-networkd<br />
<br />
On VPN peer:<br />
<br />
$ sudo systemctl start systemd-resolved<br />
$ sudo systemctl enable systemd-resolved<br />
$ sudo systemctl restart systemd-networkd<br />
<br />
== Testing of VPN connection and operation ==<br />
<br />
=== Read wireguard comm status on gateway and VPN peer(s) ===<br />
<br />
$ sudo wg<br />
<br />
=== Ping peer(s) ===<br />
<br />
On VPN peer:<br />
<br />
$ ping -4 -n -c 5 10.0.0.1<br />
<br />
On gateway peer:<br />
<br />
$ ping -4 -n -c 5 10.0.0.2<br />
<br />
=== Optional: Persistent keepalive ===<br />
<br />
If pinging the VPN peer from the gateway peer fails, configure peers located behind NAT (e.g., behind a NAT router) to send persistent keepalives:<br />
* wg-quick: Add, e.g., "PersistentKeepalive = 15" to [Peer] section of /etc/wireguard/wg0.conf<br />
* systemd-networkd: Add, e.g., "PersistentKeepalive=15" to [WireGuardPeer] section of /etc/systemd/network/99-wg0.netdev<br />
<br />
=== Read packet filter counters ===<br />
<br />
$ sudo nft list ruleset | grep counter<br />
<br />
=== Read packet filter logging ===<br />
<br />
$ journalctl</div>Cmsiglerhttps://wiki.archlinux.org/index.php?title=User:Cmsigler/Wireguard_Configuration_Guide&diff=722985User:Cmsigler/Wireguard Configuration Guide2022-03-14T17:26:52Z<p>Cmsigler: Edit header text</p>
<hr />
<div>My Personal Step-by-step Guide to Wireguard Setup, Configuration and Operation<br />
<br />
CMS, 2022/03/14<br />
<br />
Note: These procedures have been developed and deployed on an Arch Linux installation. Other distributions and environments will require modifications to the steps below. YMMV<br />
<br />
== Nomenclature ==<br />
<br />
* Gateway peer: Wireguard "server" peer connected to public Internet<br />
* VPN peer: Wireguard "client" peer; may be located behind, e.g., a NAT router<br />
<br />
== Initial Setup ==<br />
<br />
=== Requirements ===<br />
<br />
* Install and use kernel with CONFIG_WIREGUARD<br />
* Install wireguard-tools<br />
<br />
=== Pre-configuration ===<br />
<br />
* Generate keys for each peer [gateway = Gateway peer; vpn = VPN peer]<br />
<br />
$ cd ~/wireguard_config<br />
$ (umask 0077; wg genkey > gateway.key)<br />
$ wg pubkey < gateway.key > gateway.pub<br />
$ (umask 0077; wg genkey > vpn.key)<br />
$ wg pubkey < vpn.key > vpn.pub<br />
<br />
* Optional: Generate pre-shared keys for each peer-to-peer link pair<br />
<br />
$ (umask 0077; wg genpsk > gateway-vpn.psk)<br />
<br />
* Optional: On gateway peer, set up DNS server for wireguard peers using dnsmasq as server<br />
** Install dnsmasq<br />
** Edit /etc/dnsmasq.conf<br />
*** Uncomment domain-needed, bogus-priv, bind-interfaces<br />
*** Set "interface=wg0"<br />
*** Set "listen-address=::1,127.0.0.1,10.0.0.1,fd89:abc1:def2:1::1"<br />
*** Optional: Set "cache-size=1000"<br />
<br />
== Configuration for operation via wg-quick ==<br />
<br />
Example -- Wireguard VPN gateway:<br />
<br />
=== Wireguard configuration ===<br />
<br />
On gateway peer:<br />
{{hc|/etc/wireguard/wg0.conf|<nowiki>[Interface]<br />
Address = 10.0.0.1/24, fd89:abc1:def2:1::1/64<br />
ListenPort = 51871<br />
PrivateKey = # GATEWAY_PEER_PRIVATE_KEY<br />
<br />
[Peer]<br />
PublicKey = # VPN_PEER_PUBLIC_KEY<br />
PresharedKey = # GATEWAY_PEER-VPN_PEER-PRESHARED_KEY<br />
AllowedIPs = 10.0.0.2/32, fd89:abc1:def2:1::2/128</nowiki>}}<br />
<br />
On VPN peer:<br />
{{hc|/etc/wireguard/wg0.conf|<nowiki>[Interface]<br />
Address = 10.0.0.2/32, fd89:abc1:def2:1::2/128<br />
ListenPort = 51902<br />
PrivateKey = # VPN_PEER_PRIVATE_KEY<br />
DNS = 10.0.0.1<br />
<br />
[Peer]<br />
PublicKey = # GATEWAY_PEER_PUBLIC_KEY<br />
PresharedKey = # GATEWAY_PEER-VPN_PEER-PRESHARED_KEY<br />
AllowedIPs = 0.0.0.0/0, ::/0<br />
Endpoint = 198.51.100.49:51871</nowiki>}}<br />
<br />
=== Firewall/filtering configuration (using nftables) ===<br />
<br />
On Gateway peer:<br />
<br />
* Input filter<br />
{{bc|<nowiki>define upstream-if = ens0<br />
define vpn-if = wg0<br />
define mgmt-host = 203.0.113.51<br />
define ssh-port = 22<br />
define vpn-port = 51871<br />
#<br />
table inet inet-local-table {<br />
chain inet-local-input {<br />
type filter hook input priority filter<br />
policy drop<br />
# Accept localhost traffic<br />
iif lo accept<br />
# Bad TCP --> reject network scanning<br />
iif $upstream-if tcp flags & (fin|syn) == (fin|syn) counter drop<br />
iif $upstream-if tcp flags & (syn|rst) == (syn|rst) counter drop<br />
iif $upstream-if tcp flags & (fin|syn|rst|psh|ack|urg) == 0 counter drop<br />
iif $upstream-if tcp flags & (fin|syn|rst|psh|ack|urg) == (fin|psh|urg) counter drop<br />
# Accept Wireguard inbound UDP traffic from peer to VPN port<br />
iif $upstream-if udp dport $vpn-port accept<br />
# Accept ICMP from Wireguard peers<br />
iifname $vpn-if ip protocol icmp limit rate 5/second accept<br />
iifname $vpn-if meta l4proto ipv6-icmp limit rate 5/second accept<br />
# Allow DNS from Wireguard peers<br />
iifname $vpn-if udp dport 53 accept<br />
iifname $vpn-if tcp dport 53 accept<br />
# Remaining input from VPN interface to VPN server (local) prohibited<br />
# -- default policy drop<br />
iifname $vpn-if counter drop<br />
# Drop invalid (untracked?) packets<br />
iif $upstream-if ct state invalid counter drop<br />
# Accept established and related tracked connections<br />
iif $upstream-if ct state {established, related} accept<br />
# Allow connection from given mgmt host on given ssh port<br />
iif $upstream-if ip saddr $mgmt-host tcp dport $ssh-port accept<br />
# Limit ICMP packets accepted<br />
iif $upstream-if ip protocol icmp limit rate 5/second accept<br />
iif $upstream-if meta l4proto ipv6-icmp limit rate 5/second accept<br />
# Count traffic dropped by default policy<br />
counter drop<br />
}<br />
}</nowiki>}}<br />
<br />
* Forward filter<br />
{{bc|<nowiki>define upstream-if = ens0<br />
define vpn-if = wg0<br />
#<br />
table inet inet-local-table {<br />
chain inet-local-forward {<br />
type filter hook forward priority filter<br />
policy drop<br />
# Drop IP forward for upstream invalid packets<br />
iif $upstream-if ct state invalid counter drop<br />
# Accept IP forward for upstream established and related tracked connections<br />
iif $upstream-if ct state {established, related} accept<br />
# Accept all VPN traffic to be forwarded upstream<br />
iifname $vpn-if oif $upstream-if accept<br />
# Count traffic dropped by default policy<br />
counter drop<br />
}<br />
}</nowiki>}}<br />
<br />
* Address translation (NAT) filter<br />
{{bc|<nowiki>define upstream-if = ens0<br />
define vpn-if = wg0<br />
#<br />
table inet inet-local-table {<br />
chain inet-local-nat {<br />
type nat hook postrouting priority srcnat<br />
policy accept<br />
# NAT/masquerade all traffic coming from VPN interface, and count<br />
iifname $vpn-if oif $upstream-if meta protocol ip counter masquerade<br />
}<br />
}</nowiki>}}<br />
<br />
=== Packet forwarding configuration ===<br />
<br />
On Gateway peer:<br />
<br />
* sysctl configuration<br />
{{hc|/etc/sysctl.d/30-ipv4_forward.conf|<nowiki>net.ipv4.ip_forward=1<br />
net.ipv4.conf.default.forwarding=1<br />
net.ipv4.conf.all.forwarding=1<br />
net.ipv4.conf.ens3.forwarding=1<br />
net.ipv4.conf.wg0.forwarding=1</nowiki>}}<br />
<br />
{{hc|/etc/sysctl.d/30-ipv6_forward.conf|<nowiki>net.ipv6.conf.default.accept_ra = 2<br />
net.ipv6.conf.all.accept_ra = 2<br />
net.ipv6.conf.ens3.accept_ra = 2<br />
net.ipv6.conf.wg0.accept_ra = 2<br />
net.ipv6.conf.default.forwarding=1<br />
net.ipv6.conf.all.forwarding=1<br />
net.ipv6.conf.ens3.forwarding=1<br />
net.ipv6.conf.wg0.forwarding=1</nowiki>}}<br />
<br />
== Configuration for operation via systemd-networkd ==<br />
<br />
Example -- Wireguard VPN gateway:<br />
<br />
=== /etc/systemd/network configuration ===<br />
<br />
On gateway peer:<br />
{{hc|/etc/systemd/network/99-wg0.netdev|<nowiki>[NetDev]<br />
Name=wg0<br />
Kind=wireguard<br />
Description=WireGuard tunnel wg0<br />
<br />
[WireGuard]<br />
ListenPort=51871<br />
PrivateKey=#GATEWAY_PEER_PRIVATE_KEY<br />
<br />
[WireGuardPeer]<br />
PublicKey=#VPN_PEER_PUBLIC_KEY<br />
PresharedKey=#GATEWAY_PEER-VPN_PEER-PRESHARED_KEY<br />
AllowedIPs=10.0.0.2/32<br />
AllowedIPs=fd89:abc1:def2:1::2/128</nowiki>}}<br />
<br />
On gateway peer:<br />
{{hc|/etc/systemd/network/99-wg0.network|<nowiki>[Match]<br />
Name=wg0<br />
<br />
[Network]<br />
Address=10.0.0.1/24<br />
Address=fd89:abc1:def2:1::1/64<br />
IPForward=true<br />
IPMasquerade=ipv4<br />
# or<br />
#IPMasquerade=both</nowiki>}}<br />
<br />
On VPN peer:<br />
{{hc|/etc/systemd/network/99-wg0.netdev|<nowiki>[NetDev]<br />
Name=wg0<br />
Kind=wireguard<br />
Description=WireGuard tunnel wg0<br />
<br />
[WireGuard]<br />
ListenPort=51902<br />
PrivateKey=#VPN_PEER_PRIVATE_KEY<br />
FirewallMark=0x89ab<br />
<br />
[WireGuardPeer]<br />
PublicKey=#GATEWAY_PEER_PUBLIC_KEY<br />
PresharedKey=#GATEWAY_PEER-VPN_PEER-PRESHARED_KEY<br />
AllowedIPs=0.0.0.0/0<br />
AllowedIPs=::/0<br />
Endpoint=198.51.100.49:51871</nowiki>}}<br />
<br />
On VPN peer:<br />
{{hc|/etc/systemd/network/50-wg0.network|<nowiki>[Match]<br />
Name=wg0<br />
<br />
[Network]<br />
Address=10.0.0.2/32<br />
Address=fd89:abc1:def2:1::2/128<br />
DNS=10.0.0.1<br />
DNSDefaultRoute=true<br />
Domains=~.<br />
<br />
[RoutingPolicyRule]<br />
FirewallMark=0x89ab<br />
InvertRule=true<br />
Table=1000<br />
Priority=10<br />
<br />
[Route]<br />
Gateway=10.0.0.1<br />
GatewayOnLink=true<br />
Table=1000</nowiki>}}<br />
<br />
=== Firewall/filtering configuration (using nftables) ===<br />
<br />
On gateway peer:<br />
<br />
* Input filter<br />
{{bc|<nowiki>define upstream-if = ens0<br />
define vpn-if = wg0<br />
define mgmt-host = 203.0.113.51<br />
define ssh-port = 22<br />
define vpn-port = 51871<br />
#<br />
table inet inet-local-table {<br />
chain inet-local-input {<br />
type filter hook input priority filter<br />
policy drop<br />
# Accept localhost traffic<br />
iif lo accept<br />
# Bad TCP --> reject network scanning<br />
iif $upstream-if tcp flags & (fin|syn) == (fin|syn) counter drop<br />
iif $upstream-if tcp flags & (syn|rst) == (syn|rst) counter drop<br />
iif $upstream-if tcp flags & (fin|syn|rst|psh|ack|urg) == 0 counter drop<br />
iif $upstream-if tcp flags & (fin|syn|rst|psh|ack|urg) == (fin|psh|urg) counter drop<br />
# Accept Wireguard inbound UDP traffic from peer to VPN port<br />
iif $upstream-if udp dport $vpn-port accept<br />
# Accept ICMP from Wireguard peers<br />
iifname $vpn-if ip protocol icmp limit rate 5/second accept<br />
iifname $vpn-if meta l4proto ipv6-icmp limit rate 5/second accept<br />
# Allow DNS from Wireguard peers<br />
iifname $vpn-if udp dport 53 accept<br />
iifname $vpn-if tcp dport 53 accept<br />
# Remaining input from VPN interface to VPN server (local) prohibited<br />
# -- default policy drop<br />
iifname $vpn-if counter drop<br />
# Drop invalid (untracked?) packets<br />
iif $upstream-if ct state invalid counter drop<br />
# Accept established and related tracked connections<br />
iif $upstream-if ct state {established, related} accept<br />
# Allow connection from given mgmt host on given ssh port<br />
iif $upstream-if ip saddr $mgmt-host tcp dport $ssh-port accept<br />
# Limit ICMP packets accepted<br />
iif $upstream-if ip protocol icmp limit rate 5/second accept<br />
iif $upstream-if meta l4proto ipv6-icmp limit rate 5/second accept<br />
# Count traffic dropped by default policy<br />
counter drop<br />
}<br />
}</nowiki>}}<br />
<br />
* Forward filter<br />
{{bc|<nowiki>define upstream-if = ens0<br />
define vpn-if = wg0<br />
#<br />
table inet inet-local-table {<br />
chain inet-local-forward {<br />
type filter hook forward priority filter<br />
policy drop<br />
# Drop IP forward for upstream invalid packets<br />
iif $upstream-if ct state invalid counter drop<br />
# Accept IP forward for upstream established and related tracked connections<br />
iif $upstream-if ct state {established, related} accept<br />
# Accept all VPN traffic to be forwarded upstream<br />
iifname $vpn-if oif $upstream-if accept<br />
# Count traffic dropped by default policy<br />
counter drop<br />
}<br />
}</nowiki>}}<br />
<br />
== Operation of Wireguard link for VPN ==<br />
<br />
=== Manual operation via wg-quick ===<br />
<br />
Bring up wg0 interface<br />
<br />
$ sudo wg-quick up wg0<br />
<br />
=== systemd operation via wg-quick ===<br />
<br />
Start wg-quick@wg0 service; enable for operation upon reboot<br />
<br />
$ sudo systemctl start wg-quick\@wg0<br />
$ sudo systemctl enable wg-quick\@wg0<br />
<br />
=== systemd-networkd operation ===<br />
<br />
* Enable and start systemd-resolved on VPN peer (required by "DNS=10.0.0.1" line under [Network] section)<br />
* Restart systemd-networkd<br />
<br />
On gateway peer:<br />
<br />
$ sudo systemctl restart systemd-networkd<br />
<br />
On VPN peer:<br />
<br />
$ sudo systemctl start systemd-resolved<br />
$ sudo systemctl enable systemd-resolved<br />
$ sudo systemctl restart systemd-networkd<br />
<br />
== Testing of VPN connection and operation ==<br />
<br />
=== Read wireguard comm status on gateway and VPN peer(s) ===<br />
<br />
$ sudo wg<br />
<br />
=== Ping peer(s) ===<br />
<br />
On VPN peer:<br />
<br />
$ ping -4 -n -c 5 10.0.0.1<br />
<br />
On gateway peer:<br />
<br />
$ ping -4 -n -c 5 10.0.0.2<br />
<br />
=== Optional: Persistent keepalive ===<br />
<br />
If ping on gateway peer to VPN peer fails, configure devices located behind, e.g., a NAT router for persistent keepalive:<br />
* wg-quick: Add, e.g., "PersistentKeepalive = 15" to [Peer] section of /etc/wireguard/wg0.conf<br />
* systemd-networkd: Add, e.g., "PersistentKeepalive=15" to [WireGuardPeer] section of /etc/systemd/network/99-wg0.netdev<br />
<br />
=== Read packet filter counters ===<br />
<br />
$ sudo nft list ruleset | grep counter<br />
<br />
=== Read packet filter logging ===<br />
<br />
$ journalctl</div>Cmsiglerhttps://wiki.archlinux.org/index.php?title=User:Cmsigler/Wireguard_Configuration_Guide&diff=722984User:Cmsigler/Wireguard Configuration Guide2022-03-14T17:24:20Z<p>Cmsigler: Initial commit</p>
<hr />
<div>My Personal Step-by-step Guide to Wireguard Setup, Configuration and Operation<br />
<br />
CMS, 2022/03/14<br />
<br />
Note: These procedures have been developed and deployed on an Arch Linux installation. Other distributions and environments will require modifications to the steps below. YMMV<br />
<br />
== Nomenclature ==<br />
<br />
* Gateway peer: Wireguard "server" peer connected to public Internet<br />
* VPN peer: Wireguard "client" peer; may be located behind, e.g., a NAT router<br />
<br />
== Initial Setup ==<br />
<br />
=== Requirements ===<br />
<br />
* Install and use kernel with CONFIG_WIREGUARD<br />
* Install wireguard-tools<br />
<br />
=== Pre-configuration ===<br />
<br />
* Generate keys for each peer [gateway = Gateway peer; vpn = VPN peer]<br />
<br />
$ cd ~/wireguard_config<br />
$ (umask 0077; wg genkey > gateway.key)<br />
$ wg pubkey < gateway.key > gateway.pub<br />
$ (umask 0077; wg genkey > vpn.key)<br />
$ wg pubkey < vpn.key > vpn.pub<br />
<br />
* Optional: Generate pre-shared keys for each peer-to-peer link pair<br />
<br />
$ (umask 0077; wg genpsk > gateway-vpn.psk)<br />
<br />
* Optional: On gateway peer, set up DNS server for wireguard peers using dnsmasq as server<br />
** Install dnsmasq<br />
** Edit /etc/dnsmasq.conf<br />
*** Uncomment domain-needed, bogus-priv, bind-interfaces<br />
*** Set "interface=wg0"<br />
*** Set "listen-address=::1,127.0.0.1,10.0.0.1,fd89:abc1:def2:1::1"<br />
*** Optional: Set "cache-size=1000"<br />
<br />
== Configuration for operation via wg-quick ==<br />
<br />
Example -- Wireguard VPN gateway:<br />
<br />
=== Wireguard configuration ===<br />
<br />
On gateway peer:<br />
{{hc|/etc/wireguard/wg0.conf|<nowiki>[Interface]<br />
Address = 10.0.0.1/24, fd89:abc1:def2:1::1/64<br />
ListenPort = 51871<br />
PrivateKey = # GATEWAY_PEER_PRIVATE_KEY<br />
<br />
[Peer]<br />
PublicKey = # VPN_PEER_PUBLIC_KEY<br />
PresharedKey = # GATEWAY_PEER-VPN_PEER-PRESHARED_KEY<br />
AllowedIPs = 10.0.0.2/32, fd89:abc1:def2:1::2/128</nowiki>}}<br />
<br />
On VPN peer:<br />
{{hc|/etc/wireguard/wg0.conf|<nowiki>[Interface]<br />
Address = 10.0.0.2/32, fd89:abc1:def2:1::2/128<br />
ListenPort = 51902<br />
PrivateKey = # VPN_PEER_PRIVATE_KEY<br />
DNS = 10.0.0.1<br />
<br />
[Peer]<br />
PublicKey = # GATEWAY_PEER_PUBLIC_KEY<br />
PresharedKey = # GATEWAY_PEER-VPN_PEER-PRESHARED_KEY<br />
AllowedIPs = 0.0.0.0/0, ::/0<br />
Endpoint = 198.51.100.49:51871</nowiki>}}<br />
<br />
=== Firewall/filtering configuration (using nftables) ===<br />
<br />
On Gateway peer:<br />
<br />
* Input filter<br />
{{bc|<nowiki>define upstream-if = ens0<br />
define vpn-if = wg0<br />
define mgmt-host = 203.0.113.51<br />
define ssh-port = 22<br />
define vpn-port = 51871<br />
#<br />
table inet inet-local-table {<br />
chain inet-local-input {<br />
type filter hook input priority filter<br />
policy drop<br />
# Accept localhost traffic<br />
iif lo accept<br />
# Bad TCP --> reject network scanning<br />
iif $upstream-if tcp flags & (fin|syn) == (fin|syn) counter drop<br />
iif $upstream-if tcp flags & (syn|rst) == (syn|rst) counter drop<br />
iif $upstream-if tcp flags & (fin|syn|rst|psh|ack|urg) == 0 counter drop<br />
iif $upstream-if tcp flags & (fin|syn|rst|psh|ack|urg) == (fin|psh|urg) counter drop<br />
# Accept inbound WireGuard UDP traffic from peers to the VPN port<br />
iif $upstream-if udp dport $vpn-port accept<br />
# Accept ICMP from WireGuard peers<br />
iifname $vpn-if ip protocol icmp limit rate 5/second accept<br />
iifname $vpn-if meta l4proto ipv6-icmp limit rate 5/second accept<br />
# Allow DNS from WireGuard peers<br />
iifname $vpn-if udp dport 53 accept<br />
iifname $vpn-if tcp dport 53 accept<br />
# Remaining input from VPN interface to VPN server (local) prohibited<br />
# -- default policy drop<br />
iifname $vpn-if counter drop<br />
# Drop packets in an invalid conntrack state<br />
iif $upstream-if ct state invalid counter drop<br />
# Accept established and related tracked connections<br />
iif $upstream-if ct state {established, related} accept<br />
# Allow connection from given mgmt host on given ssh port<br />
iif $upstream-if ip saddr $mgmt-host tcp dport $ssh-port accept<br />
# Limit ICMP packets accepted<br />
iif $upstream-if ip protocol icmp limit rate 5/second accept<br />
iif $upstream-if meta l4proto ipv6-icmp limit rate 5/second accept<br />
# Count traffic dropped by default policy<br />
counter drop<br />
}<br />
}</nowiki>}}<br />
<br />
* Forward filter<br />
{{bc|<nowiki>define upstream-if = ens0<br />
define vpn-if = wg0<br />
#<br />
table inet inet-local-table {<br />
chain inet-local-forward {<br />
type filter hook forward priority filter<br />
policy drop<br />
# Drop IP forward for upstream invalid packets<br />
iif $upstream-if ct state invalid counter drop<br />
# Accept IP forward for upstream established and related tracked connections<br />
iif $upstream-if ct state {established, related} accept<br />
# Accept all VPN traffic to be forwarded upstream<br />
iifname $vpn-if oif $upstream-if accept<br />
# Count traffic dropped by default policy<br />
counter drop<br />
}<br />
}</nowiki>}}<br />
<br />
* Address translation (NAT) filter<br />
{{bc|<nowiki>define upstream-if = ens0<br />
define vpn-if = wg0<br />
#<br />
table inet inet-local-table {<br />
chain inet-local-nat {<br />
type nat hook postrouting priority srcnat<br />
policy accept<br />
# NAT/masquerade all IPv4 traffic coming from the VPN interface, and count<br />
iifname $vpn-if oif $upstream-if meta protocol ip counter masquerade<br />
}<br />
}</nowiki>}}<br />
<br />
=== Packet forwarding configuration ===<br />
<br />
On Gateway peer:<br />
<br />
* sysctl configuration<br />
{{hc|/etc/sysctl.d/30-ipv4_forward.conf|<nowiki>net.ipv4.ip_forward=1<br />
net.ipv4.conf.default.forwarding=1<br />
net.ipv4.conf.all.forwarding=1<br />
net.ipv4.conf.ens0.forwarding=1<br />
net.ipv4.conf.wg0.forwarding=1</nowiki>}}<br />
<br />
{{hc|/etc/sysctl.d/30-ipv6_forward.conf|<nowiki>net.ipv6.conf.default.accept_ra = 2<br />
net.ipv6.conf.all.accept_ra = 2<br />
net.ipv6.conf.ens0.accept_ra = 2<br />
net.ipv6.conf.wg0.accept_ra = 2<br />
net.ipv6.conf.default.forwarding=1<br />
net.ipv6.conf.all.forwarding=1<br />
net.ipv6.conf.ens0.forwarding=1<br />
net.ipv6.conf.wg0.forwarding=1</nowiki>}}<br />
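The drop-ins above only take effect after a reboot or {{ic|sysctl --system}}. As a self-contained sketch, the same "everything set to 1" check can be run against the file contents (the conf string below mirrors the IPv4 forwarding keys; real files would be read from /etc/sysctl.d/):<br />

```shell
# Illustrative check (not from the guide): assert that every key in the
# sysctl drop-in is set to 1, mirroring the IPv4 forwarding file above.
conf='net.ipv4.ip_forward=1
net.ipv4.conf.default.forwarding=1
net.ipv4.conf.all.forwarding=1'
result=$(printf '%s\n' "$conf" | awk -F= '$2 != 1 { bad = 1 } END { print (bad ? "misconfigured" : "ok") }')
echo "$result"
```

On a live system the equivalent runtime check is {{ic|sysctl net.ipv4.ip_forward}}, which should report the value 1 once forwarding is active.<br />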
<br />
== Configuration for operation via systemd-networkd ==<br />
<br />
Example -- WireGuard VPN gateway:<br />
<br />
=== /etc/systemd/network configuration ===<br />
<br />
On gateway peer:<br />
{{hc|/etc/systemd/network/99-wg0.netdev|<nowiki>[NetDev]<br />
Name=wg0<br />
Kind=wireguard<br />
Description=WireGuard tunnel wg0<br />
<br />
[WireGuard]<br />
ListenPort=51871<br />
PrivateKey=#GATEWAY_PEER_PRIVATE_KEY<br />
<br />
[WireGuardPeer]<br />
PublicKey=#VPN_PEER_PUBLIC_KEY<br />
PresharedKey=#GATEWAY_PEER-VPN_PEER-PRESHARED_KEY<br />
AllowedIPs=10.0.0.2/32<br />
AllowedIPs=fd89:abc1:def2:1::2/128</nowiki>}}<br />
<br />
On gateway peer:<br />
{{hc|/etc/systemd/network/99-wg0.network|<nowiki>[Match]<br />
Name=wg0<br />
<br />
[Network]<br />
Address=10.0.0.1/24<br />
Address=fd89:abc1:def2:1::1/64<br />
IPForward=true<br />
IPMasquerade=ipv4<br />
# or<br />
#IPMasquerade=both</nowiki>}}<br />
<br />
On VPN peer:<br />
{{hc|/etc/systemd/network/99-wg0.netdev|<nowiki>[NetDev]<br />
Name=wg0<br />
Kind=wireguard<br />
Description=WireGuard tunnel wg0<br />
<br />
[WireGuard]<br />
ListenPort=51902<br />
PrivateKey=#VPN_PEER_PRIVATE_KEY<br />
FirewallMark=0x89ab<br />
<br />
[WireGuardPeer]<br />
PublicKey=#GATEWAY_PEER_PUBLIC_KEY<br />
PresharedKey=#GATEWAY_PEER-VPN_PEER-PRESHARED_KEY<br />
AllowedIPs=0.0.0.0/0<br />
AllowedIPs=::/0<br />
Endpoint=198.51.100.49:51871</nowiki>}}<br />
<br />
On VPN peer:<br />
{{hc|/etc/systemd/network/50-wg0.network|<nowiki>[Match]<br />
Name=wg0<br />
<br />
[Network]<br />
Address=10.0.0.2/32<br />
Address=fd89:abc1:def2:1::2/128<br />
DNS=10.0.0.1<br />
DNSDefaultRoute=true<br />
Domains=~.<br />
<br />
[RoutingPolicyRule]<br />
FirewallMark=0x89ab<br />
InvertRule=true<br />
Table=1000<br />
Priority=10<br />
<br />
[Route]<br />
Gateway=10.0.0.1<br />
GatewayOnLink=true<br />
Table=1000</nowiki>}}<br />
<br />
=== Firewall/filtering configuration (using nftables) ===<br />
<br />
On gateway peer:<br />
<br />
* Input filter<br />
{{bc|<nowiki>define upstream-if = ens0<br />
define vpn-if = wg0<br />
define mgmt-host = 203.0.113.51<br />
define ssh-port = 22<br />
define vpn-port = 51871<br />
#<br />
table inet inet-local-table {<br />
chain inet-local-input {<br />
type filter hook input priority filter<br />
policy drop<br />
# Accept localhost traffic<br />
iif lo accept<br />
# Bad TCP --> reject network scanning<br />
iif $upstream-if tcp flags & (fin|syn) == (fin|syn) counter drop<br />
iif $upstream-if tcp flags & (syn|rst) == (syn|rst) counter drop<br />
iif $upstream-if tcp flags & (fin|syn|rst|psh|ack|urg) == 0 counter drop<br />
iif $upstream-if tcp flags & (fin|syn|rst|psh|ack|urg) == (fin|psh|urg) counter drop<br />
# Accept inbound WireGuard UDP traffic from peers to the VPN port<br />
iif $upstream-if udp dport $vpn-port accept<br />
# Accept ICMP from WireGuard peers<br />
iifname $vpn-if ip protocol icmp limit rate 5/second accept<br />
iifname $vpn-if meta l4proto ipv6-icmp limit rate 5/second accept<br />
# Allow DNS from WireGuard peers<br />
iifname $vpn-if udp dport 53 accept<br />
iifname $vpn-if tcp dport 53 accept<br />
# Remaining input from VPN interface to VPN server (local) prohibited<br />
# -- default policy drop<br />
iifname $vpn-if counter drop<br />
# Drop packets in an invalid conntrack state<br />
iif $upstream-if ct state invalid counter drop<br />
# Accept established and related tracked connections<br />
iif $upstream-if ct state {established, related} accept<br />
# Allow connection from given mgmt host on given ssh port<br />
iif $upstream-if ip saddr $mgmt-host tcp dport $ssh-port accept<br />
# Limit ICMP packets accepted<br />
iif $upstream-if ip protocol icmp limit rate 5/second accept<br />
iif $upstream-if meta l4proto ipv6-icmp limit rate 5/second accept<br />
# Count traffic dropped by default policy<br />
counter drop<br />
}<br />
}</nowiki>}}<br />
<br />
* Forward filter<br />
{{bc|<nowiki>define upstream-if = ens0<br />
define vpn-if = wg0<br />
#<br />
table inet inet-local-table {<br />
chain inet-local-forward {<br />
type filter hook forward priority filter<br />
policy drop<br />
# Drop IP forward for upstream invalid packets<br />
iif $upstream-if ct state invalid counter drop<br />
# Accept IP forward for upstream established and related tracked connections<br />
iif $upstream-if ct state {established, related} accept<br />
# Accept all VPN traffic to be forwarded upstream<br />
iifname $vpn-if oif $upstream-if accept<br />
# Count traffic dropped by default policy<br />
counter drop<br />
}<br />
}</nowiki>}}<br />
<br />
== Operation of WireGuard link for VPN ==<br />
<br />
=== Manual operation via wg-quick ===<br />
<br />
Bring up the wg0 interface<br />
<br />
$ sudo wg-quick up wg0<br />
<br />
=== systemd operation via wg-quick ===<br />
<br />
Start the wg-quick@wg0 service, and enable it to start at boot<br />
<br />
$ sudo systemctl start wg-quick@wg0<br />
$ sudo systemctl enable wg-quick@wg0<br />
<br />
=== systemd-networkd operation ===<br />
<br />
* Enable and start systemd-resolved on the VPN peer (required by the "DNS=10.0.0.1" line in the [Network] section)<br />
* Restart systemd-networkd<br />
<br />
On gateway peer:<br />
<br />
$ sudo systemctl restart systemd-networkd<br />
<br />
On VPN peer:<br />
<br />
$ sudo systemctl start systemd-resolved<br />
$ sudo systemctl enable systemd-resolved<br />
$ sudo systemctl restart systemd-networkd<br />
<br />
== Testing of connection and operation ==<br />
<br />
=== Read WireGuard link status on gateway and VPN peer(s) ===<br />
<br />
$ sudo wg<br />
<br />
=== Ping peer(s) ===<br />
<br />
On VPN peer:<br />
<br />
$ ping -4 -n -c 5 10.0.0.1<br />
<br />
On gateway peer:<br />
<br />
$ ping -4 -n -c 5 10.0.0.2<br />
<br />
=== Optional: Persistent keepalive ===<br />
<br />
If a ping from the gateway peer to the VPN peer fails, the VPN peer is likely behind NAT (e.g., a home router); configure persistent keepalive on such peers:<br />
* wg-quick: Add, e.g., "PersistentKeepalive = 15" to [Peer] section of /etc/wireguard/wg0.conf<br />
* systemd-networkd: Add, e.g., "PersistentKeepalive=15" to [WireGuardPeer] section of /etc/systemd/network/99-wg0.netdev<br />
<br />
=== Read packet filter counters ===<br />
<br />
$ sudo nft list ruleset | grep counter<br />
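The counter output can be aggregated to spot how much traffic the drop rules are catching. A sketch (the sample lines imitate typical output of the command above; the exact field layout may vary between nftables versions):<br />

```shell
# Illustrative (sample data, not live output): sum packet counts from lines
# resembling `nft list ruleset | grep counter` output.
sample='iifname "wg0" counter packets 12 bytes 960 drop
counter packets 3 bytes 180 drop'
total=$(printf '%s\n' "$sample" | awk '{ for (i = 1; i <= NF; i++) if ($i == "packets") sum += $(i + 1) } END { print sum + 0 }')
echo "dropped: $total packets"
```

A steadily climbing drop counter on the VPN interface usually points at a missing accept rule rather than a WireGuard problem.<br />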
<br />
=== Read packet filter logging ===<br />
<br />
$ journalctl</div>
<p>User:Cmsigler, 2022-03-14T15:47:12Z, Cmsigler: Add links to new sub-articles</p>
<hr />
<div>View [[User:Cmsigler/RISC-V]]<br />
<br />
View [[User:Cmsigler/Wireguard Configuration Guide]]<br />
<br />
View [[User:Cmsigler/Nspawn Configuration Guide]]<br />
<br />
View [[User:Cmsigler/personal editing sandbox]]</div>
<p>Systemd-nspawn, 2022-03-04T13:39:13Z, Cmsigler: systemd-nspawn(1) man page no longer documents --private-users-chown as usage has changed; update to agree with man page</p>
<hr />
<div>{{Lowercase title}}<br />
[[Category:Virtualization]]<br />
[[Category:Sandboxing]]<br />
[[es:Systemd-nspawn]]<br />
[[ja:Systemd-nspawn]]<br />
[[zh-hans:Systemd-nspawn]]<br />
{{Related articles start}}<br />
{{Related|systemd}}<br />
{{Related|Linux Containers}}<br />
{{Related|systemd-networkd}}<br />
{{Related|Docker}}<br />
{{Related articles end}}<br />
<br />
''systemd-nspawn'' is like the [[chroot]] command, but it is a ''chroot on steroids''.<br />
<br />
''systemd-nspawn'' may be used to run a command or OS in a light-weight namespace container. It is more powerful than [[chroot]] since it fully virtualizes the file system hierarchy, as well as the process tree, the various IPC subsystems and the host and domain name.<br />
<br />
''systemd-nspawn'' limits access to various kernel interfaces in the container to read-only, such as {{ic|/sys}}, {{ic|/proc/sys}} or {{ic|/sys/fs/selinux}}. Network interfaces and the system clock may not be changed from within the container. Device nodes may not be created. The host system cannot be rebooted and kernel modules may not be loaded from within the container.<br />
<br />
''systemd-nspawn'' is a simpler tool to configure than [[LXC]] or [[Libvirt]].<br />
<br />
== Installation ==<br />
<br />
''systemd-nspawn'' is part of and packaged with {{Pkg|systemd}}.<br />
<br />
== Examples ==<br />
<br />
=== Create and boot a minimal Arch Linux container ===<br />
<br />
First install {{Pkg|arch-install-scripts}}.<br />
<br />
Next, create a directory to hold the container. In this example we will use {{ic|~/MyContainer}}. <br />
<br />
Next, we use ''pacstrap'' to install a basic Arch system into the container. At minimum we need to install the {{Pkg|base}} package. <br />
<br />
# pacstrap -c ~/MyContainer base ''[additional packages/groups]''<br />
<br />
{{Tip|The {{Pkg|base}} package does not depend on the {{Pkg|linux}} kernel package and is container-ready.}}<br />
<br />
Once your installation is finished, chroot into the container, and set a root password:<br />
<br />
# systemd-nspawn -D ~/MyContainer<br />
# passwd<br />
# logout<br />
<br />
Finally, boot into the container:<br />
<br />
# systemd-nspawn -b -D ~/MyContainer<br />
<br />
The {{ic|-b}} option will boot the container (i.e. run {{ic|systemd}} as PID=1), instead of just running a shell, and {{ic|-D}} specifies the directory that becomes the container's root directory.<br />
<br />
After the container starts, log in as "root" with your password.<br />
<br />
{{Note|If the login fails with "Login incorrect", the problem is likely the {{ic|securetty}} TTY device whitelist. See [[#Root login fails]].}}<br />
<br />
The container can be powered off by running {{ic|poweroff}} from within the container. From the host, containers can be controlled by the [[#machinectl|machinectl]] tool.<br />
<br />
{{Note|To terminate the ''session'' from within the container, hold {{ic|Ctrl}} and rapidly press {{ic|]}} three times. Non-US keyboard users should use {{ic|%}} instead of {{ic|]}}.}}<br />
<br />
=== Create a Debian or Ubuntu environment ===<br />
<br />
Install {{Pkg|debootstrap}}, and one or both of {{Pkg|debian-archive-keyring}} and {{Pkg|ubuntu-keyring}} depending on which distribution you want.<br />
<br />
{{Note|''systemd-nspawn'' requires that the operating system in the container uses ''systemd'' init (has it running as PID 1) and ''systemd-nspawn'' is installed in the container. Make sure that the ''systemd-container'' package is installed on the container system.}}<br />
<br />
From there it is rather easy to set up Debian or Ubuntu environments:<br />
<br />
# cd /var/lib/machines<br />
# debootstrap --include=systemd-container --components=main,universe ''codename'' ''container-name'' ''repository-url''<br />
<br />
For Debian, valid code names are either rolling names like "stable" and "testing" or release names like "stretch" and "sid"; for Ubuntu, use a code name like "xenial" or "zesty". A complete list of code names is in {{ic|/usr/share/debootstrap/scripts}} and the official table of code names to version numbers can be found in [https://wiki.ubuntu.com/DevelopmentCodeNames#Release_Naming_Scheme]. For a Debian image, the "repository-url" can be https://deb.debian.org/debian/. For an Ubuntu image, the "repository-url" can be http://archive.ubuntu.com/ubuntu/. "repository-url" should ''not'' contain a trailing slash.<br />
<br />
Just like Arch, Debian and Ubuntu will not let you log in without a password. To set the root password, run ''systemd-nspawn'' without the {{ic|-b}} option:<br />
<br />
# cd /var/lib/machines<br />
# systemd-nspawn -D ./''container-name''<br />
# passwd<br />
# logout<br />
<br />
=== Build and test packages ===<br />
<br />
See [[Creating packages for other distributions]] for example uses.<br />
<br />
== Management ==<br />
<br />
Containers located in {{ic|/var/lib/machines/}} can be controlled by the ''machinectl'' command, which internally controls instances of the {{ic|systemd-nspawn@.service}} unit. The subdirectories in {{ic|/var/lib/machines/}} correspond to the container names, i.e. {{ic|/var/lib/machines/''container-name''/}}.<br />
<br />
{{Note|If the container cannot be moved into {{ic|/var/lib/machines/}} for some reason, it can be symlinked. See {{man|1|machinectl|FILES AND DIRECTORIES}} for details.}}<br />
<br />
=== Default systemd-nspawn options ===<br />
<br />
It is important to realize that containers started via ''machinectl'' or {{ic|systemd-nspawn@.service}} use different default options than containers started manually by the ''systemd-nspawn'' command. The extra options used by the service are:<br />
<br />
* {{ic|-b}}/{{ic|--boot}} – Managed containers automatically search for an init program and invoke it as PID 1.<br />
* {{ic|--network-veth}} which implies {{ic|--private-network}} – Managed containers get a virtual network interface and are disconnected from the host network. See [[#Networking]] for details.<br />
* {{ic|-U}} – Managed containers use the {{man|7|user_namespaces}} feature by default if supported by the kernel. See [[#Unprivileged containers]] for implications.<br />
* {{ic|1=--link-journal=try-guest}}<br />
<br />
The behaviour can be overridden in per-container configuration files, see [[#Configuration]] for details.<br />
<br />
=== machinectl ===<br />
<br />
{{Note|The ''machinectl'' tool requires [[systemd]] and {{Pkg|dbus}} to be installed in the container. See [https://github.com/systemd/systemd/issues/685] for detailed discussion.}}<br />
<br />
Containers can be managed by the {{ic|machinectl ''subcommand'' ''container-name''}} command. For example, to start a container:<br />
<br />
$ machinectl start ''container-name''<br />
<br />
Similarly, there are subcommands such as {{ic|poweroff}}, {{ic|reboot}}, {{ic|status}} and {{ic|show}}. See {{man|1|machinectl|Machine Commands}} for detailed explanations.<br />
<br />
{{Tip|Poweroff and reboot operations can be performed from within the container using the {{ic|poweroff}} and {{ic|reboot}} commands.}}<br />
<br />
Other common commands are:<br />
<br />
* {{ic|machinectl list}} – show a list of currently running containers<br />
* {{ic|machinectl login ''container-name''}} – open an interactive login session in a container<br />
* {{ic|machinectl shell ''[username@]container-name''}} – open an interactive shell session in a container (this immediately invokes a user process without going through the login process in the container)<br />
* {{ic|machinectl enable ''container-name''}} and {{ic|machinectl disable ''container-name''}} – enable or disable a container to start at boot, see [[#Enable container to start at boot]] for details<br />
<br />
''machinectl'' also has subcommands for managing container (or virtual machine) images and image transfers. See {{man|1|machinectl|Image Commands}} and {{man|1|machinectl|Image Transfer Commands}} for details.<br />
<br />
{{Expansion|Add some explicit examples how to use the image transfer commands. Most importantly, where to find suitable images.}}<br />
<br />
=== systemd toolchain ===<br />
<br />
Much of the core systemd toolchain has been updated to work with containers. Tools that do so usually provide a {{ic|1=-M, --machine=}} option that takes a container name as argument.<br />
<br />
Examples:<br />
<br />
See journal logs for a particular machine:<br />
<br />
# journalctl -M ''container-name''<br />
<br />
Show control group contents:<br />
<br />
$ systemd-cgls -M ''container-name''<br />
<br />
See startup time of container:<br />
<br />
$ systemd-analyze -M ''container-name''<br />
<br />
For an overview of resource usage:<br />
<br />
$ systemd-cgtop<br />
<br />
== Configuration ==<br />
<br />
=== Per-container settings ===<br />
<br />
To specify per-container settings and not global overrides, the ''.nspawn'' files can be used. See {{man|5|systemd.nspawn}} for details.<br />
<br />
{{Note|<br />
* ''.nspawn'' files may be removed unexpectedly from {{ic|/etc/systemd/nspawn/}} when you run {{ic|machinectl remove}}. [https://github.com/systemd/systemd/issues/15900]<br />
* The interaction of network options specified in the ''.nspawn'' file and on the command line does not work correctly when there is {{ic|1=--settings=override}} (which is specified in the {{ic|systemd-nspawn@.service}} file). [https://github.com/systemd/systemd/issues/12313#issuecomment-681116926] As a workaround, you need to include the option {{ic|1=VirtualEthernet=on}}, even though the service specifies {{ic|1=--network-veth}}.<br />
}}<br />
<br />
=== Enable container to start at boot ===<br />
<br />
When using a container frequently, you may want to start it at boot.<br />
<br />
First make sure that the {{ic|machines.target}} is [[enabled]].<br />
<br />
Containers discoverable by [[#machinectl|machinectl]] can be enabled or disabled:<br />
<br />
$ machinectl enable ''container-name''<br />
<br />
{{Note|<br />
* This has the effect of enabling the {{ic|systemd-nspawn@''container-name''.service}} systemd unit.<br />
* As mentioned in [[#Default systemd-nspawn options]], containers started by ''machinectl'' get a virtual Ethernet interface. To disable private networking, see [[#Use host networking]].<br />
}}<br />
<br />
=== Resource control ===<br />
<br />
You can take advantage of control groups to implement limits and resource management of your containers with {{ic|systemctl set-property}}, see {{man|5|systemd.resource-control}}. For example, you may want to limit the memory amount or CPU usage. To limit the memory consumption of your container to 2 GiB:<br />
<br />
# systemctl set-property systemd-nspawn@''container-name''.service MemoryMax=2G<br />
<br />
Or to limit the CPU time usage to roughly the equivalent of 2 cores:<br />
<br />
# systemctl set-property systemd-nspawn@''container-name''.service CPUQuota=200%<br />
<br />
This will create permanent files in {{ic|/etc/systemd/system.control/systemd-nspawn@''container-name''.service.d/}}.<br />
<br />
According to the documentation, {{ic|MemoryHigh}} is the preferred method to keep memory consumption in check, but it will not be hard-limited as is the case with {{ic|MemoryMax}}. You can use both options, leaving {{ic|MemoryMax}} as the last line of defense. Also take into consideration that you will not limit the number of CPUs the container can see, but you will achieve similar results by limiting how much CPU time the container will get at maximum, relative to the total CPU time.<br />
<br />
{{Tip|If you want these changes to be only temporary, you can pass the option {{ic|--runtime}}. You can check their results with ''systemd-cgtop''.}}<br />
<br />
=== Networking ===<br />
<br />
''systemd-nspawn'' containers can use either ''host networking'' or ''private networking'':<br />
<br />
* In the host networking mode, the container has full access to the host network. This means that the container will be able to access all network services on the host and packets coming from the container will appear to the outside network as coming from the host (i.e. sharing the same IP address).<br />
* In the private networking mode, the container is disconnected from the host's network. This makes all network interfaces unavailable to the container, with the exception of the loopback device and those explicitly assigned to the container. There are a number of different ways to set up network interfaces for the container:<br />
** an existing interface can be assigned to the container (e.g. if you have multiple Ethernet devices),<br />
** a virtual network interface associated with an existing interface (i.e. [[VLAN]] interface) can be created and assigned to the container,<br />
** a virtual Ethernet link between the host and the container can be created.<br />
: In the latter case the container's network is fully isolated (from the outside network as well as other containers) and it is up to the administrator to configure networking between the host and the containers. This typically involves creating a [[network bridge]] to connect multiple (physical or virtual) interfaces or setting up a [[Wikipedia:Network Address Translation|Network Address Translation]] between multiple interfaces.<br />
<br />
The host networking mode is suitable for ''application containers'' which do not run any networking software that would configure the interface assigned to the container. Host networking is the default mode when you run ''systemd-nspawn'' from the shell.<br />
<br />
On the other hand, the private networking mode is suitable for ''system containers'' that should be isolated from the host system. The creation of virtual Ethernet links is a very flexible tool that allows building complex virtual networks. This is the default mode for containers started by ''machinectl'' or {{ic|systemd-nspawn@.service}}.<br />
<br />
The following subsections describe common scenarios. See {{man|1|systemd-nspawn|Networking Options}} for details about the available ''systemd-nspawn'' options.<br />
<br />
==== Use host networking ====<br />
<br />
To disable private networking and the creation of a virtual Ethernet link used by containers started with ''machinectl'', add a ''.nspawn'' file with the following option:<br />
<br />
{{hc|/etc/systemd/nspawn/''container-name''.nspawn|2=<br />
[Network]<br />
VirtualEthernet=no<br />
}}<br />
<br />
This will override the {{ic|-n}}/{{ic|--network-veth}} option used in {{ic|systemd-nspawn@.service}} and the newly started containers will use the host networking mode.<br />
<br />
==== Use a virtual Ethernet link ====<br />
<br />
If a container is started with the {{ic|-n}}/{{ic|--network-veth}} option, ''systemd-nspawn'' will create a virtual Ethernet link between the host and the container. The host side of the link will be available as a network interface named {{ic|ve-''container-name''}}. The container side of the link will be named {{ic|host0}}. Note that this option implies {{ic|--private-network}}.<br />
<br />
{{Note|<br />
* If the container name is too long, the interface name will be shortened (e.g. {{ic|ve-long-conKQGh}} instead of {{ic|ve-long-container-name}}) to fit into the [https://stackoverflow.com/a/29398765 15-characters limit]. The full name will be set as the {{ic|altname}} property of the interface (see {{man|8|ip-link}}) and can be still used to reference the interface.<br />
* When examining the interfaces with {{ic|ip link}}, interface names will be shown with a suffix, such as {{ic|ve-''container-name''@if2}} and {{ic|host0@if9}}. The {{ic|@if''N''}} is not actually part of the interface name; instead, {{ic|ip link}} appends this information to indicate which "slot" the virtual Ethernet cable connects to on the other end.<br />
: For example, a host virtual Ethernet interface shown as {{ic|ve-''foo''@if2}} is connected to the container {{ic|''foo''}}, and inside the container to the second network interface – the one shown with index 2 when running {{ic|ip link}} inside the container. Similarly, the interface named {{ic|host0@if9}} in the container is connected to the 9th network interface on the host.<br />
}}<br />
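Scripts that parse {{ic|ip link}} output should therefore strip the {{ic|@if''N''}} suffix before using the name. A minimal sketch (the interface name is a made-up example):<br />

```shell
# Illustrative: `ip link` prints names like "ve-foo@if2"; the "@if2" part is
# the peer interface's index, not part of the name, so strip it.
shown='ve-foo@if2'
name=${shown%%@*}
echo "$name"
```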
<br />
When you start the container, an IP address has to be assigned to both interfaces (on the host and in the container). If you use [[systemd-networkd]] on the host as well as in the container, this is done out-of-the-box:<br />
<br />
* the {{ic|/usr/lib/systemd/network/80-container-ve.network}} file on the host matches the {{ic|ve-''container-name''}} interface and starts a DHCP server, which assigns IP addresses to the host interface as well as the container,<br />
* the {{ic|/usr/lib/systemd/network/80-container-host0.network}} file in the container matches the {{ic|host0}} interface and starts a DHCP client, which receives an IP address from the host.<br />
<br />
If you do not use [[systemd-networkd]], you can configure static IP addresses or start a DHCP server on the host interface and a DHCP client in the container. See [[Network configuration]] for details.<br />
<br />
To give the container access to the outside network, you can configure NAT as described in [[Internet sharing#Enable NAT]]. If you use [[systemd-networkd]], this is done (partially) automatically via the {{ic|1=IPMasquerade=both}} option in {{ic|/usr/lib/systemd/network/80-container-ve.network}}. However, this issues just one [[iptables]] (or [[nftables]]) rule such as<br />
<br />
-t nat -A POSTROUTING -s 192.168.163.192/28 -j MASQUERADE<br />
<br />
The {{ic|filter}} table has to be configured manually as shown in [[Internet sharing#Enable NAT]]. You can use a wildcard to match all interfaces starting with {{ic|ve-}}:<br />
<br />
# iptables -A FORWARD -i ve-+ -o ''internet0'' -j ACCEPT<br />
<br />
{{Note|''systemd-networkd'' and ''systemd-nspawn'' can interface with [[iptables]] (using the [https://tldp.org/HOWTO/Querying-libiptc-HOWTO/whatis.html libiptc] library) as well as with [[nftables]] [https://github.com/systemd/systemd/issues/13307][https://github.com/systemd/systemd/blob/9ca34cf5a4a20d48f829b2a36824255aac29078c/NEWS#L295-L304]. In both cases IPv4 and IPv6 NAT is supported.}}<br />
<br />
Additionally, you need to open the UDP port 67 on the {{ic|ve-+}} interfaces for incoming connections to the DHCP server (operated by ''systemd-networkd''):<br />
<br />
# iptables -A INPUT -i ve-+ -p udp -m udp --dport 67 -j ACCEPT<br />
<br />
==== Use a network bridge ====<br />
<br />
If you have configured a [[network bridge]] on the host system, you can create a virtual Ethernet link for the container and add its host side to the network bridge. This is done with the {{ic|1=--network-bridge=''bridge-name''}} option. Note that {{ic|--network-bridge}} implies {{ic|--network-veth}}, i.e. the virtual Ethernet link is created automatically. However, the host side of the link will use the {{ic|vb-}} prefix instead of {{ic|ve-}}, so the [[systemd-networkd]] options for starting the DHCP server and IP masquerading will not be applied.<br />
<br />
The bridge management is left to the administrator. For example, the bridge can connect virtual interfaces with a physical interface, or it can connect only virtual interfaces of several containers. See [[systemd-networkd#Network bridge with DHCP]] and [[systemd-networkd#Network bridge with static IP addresses]] for example configurations using [[systemd-networkd]].<br />
<br />
There is also a {{ic|1=--network-zone=''zone-name''}} option which is similar to {{ic|--network-bridge}} but the network bridge is managed automatically by ''systemd-nspawn'' and ''systemd-networkd''. The bridge interface named {{ic|vz-''zone-name''}} is automatically created when the first container configured with {{ic|1=--network-zone=''zone-name''}} is started, and is automatically removed when the last container configured with {{ic|1=--network-zone=''zone-name''}} exits. Hence, this option makes it easy to place multiple related containers on a common virtual network. Note that {{ic|vz-*}} interfaces are managed by [[systemd-networkd]] same way as {{ic|ve-*}} interfaces using the options from the {{ic|/usr/lib/systemd/network/80-container-vz.network}} file.<br />
<br />
==== Use a "macvlan" or "ipvlan" interface ====<br />
<br />
Instead of creating a virtual Ethernet link (whose host side may or may not be added to a bridge), you can create a virtual interface on an existing physical interface (i.e. [[VLAN]] interface) and add it to the container. The virtual interface will be bridged with the underlying host interface and thus the container will be exposed to the outside network, which allows it to obtain a distinct IP address via DHCP from the same LAN as the host is connected to.<br />
<br />
''systemd-nspawn'' offers 2 options:<br />
<br />
* {{ic|1=--network-macvlan=''interface''}} – the virtual interface will have a different MAC address than the underlying physical {{ic|''interface''}} and will be named {{ic|mv-''interface''}}.<br />
* {{ic|1=--network-ipvlan=''interface''}} – the virtual interface will have the same MAC address as the underlying physical {{ic|''interface''}} and will be named {{ic|iv-''interface''}}.<br />
<br />
Both options imply {{ic|--private-network}}.<br />
<br />
==== Use an existing interface ====<br />
<br />
If the host system has multiple physical network interfaces, you can use the {{ic|1=--network-interface=''interface''}} to assign {{ic|''interface''}} to the container (and make it unavailable to the host while the container is started). Note that {{ic|--network-interface}} implies {{ic|--private-network}}.<br />
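<br />
For example, to dedicate a second network interface named {{ic|enp7s0}} (a placeholder name) to the container:<br />
<br />
 # systemd-nspawn -bD ~/MyContainer --network-interface=enp7s0<br />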
<br />
{{Note|Passing wireless network interfaces to ''systemd-nspawn'' containers is currently not supported. [https://github.com/systemd/systemd/issues/7873]}}<br />
<br />
=== Port mapping ===<br />
<br />
When private networking is enabled, individual ports on the host can be mapped to ports on the container using the {{ic|-p}}/{{ic|--port}} option or by using the {{ic|Port}} setting in an ''.nspawn'' file. This is done by issuing [[iptables]] rules to the {{ic|nat}} table, but the {{ic|FORWARD}} chain in the {{ic|filter}} table needs to be configured manually as shown in [[#Use a virtual Ethernet link]].<br />
<br />
For example, to map TCP port 8000 on the host to TCP port 80 in the container:<br />
<br />
{{hc|/etc/systemd/nspawn/''container-name''.nspawn|2=<br />
[Network]<br />
Port=tcp:8000:80<br />
}}<br />
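<br />
The same mapping can also be requested on the command line (the container path is a placeholder):<br />
<br />
 # systemd-nspawn -bD ~/MyContainer --network-veth -p tcp:8000:80<br />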
<br />
{{Note|''systemd-nspawn'' explicitly excludes the {{ic|loopback}} interface when mapping ports. Hence, for the example above, {{ic|localhost:8000}} connects to the host and not to the container. Only connections to other interfaces are subjected to port mapping. See [https://github.com/systemd/systemd/issues/6106] for details.}}<br />
<br />
=== Domain name resolution ===<br />
<br />
[[Domain name resolution]] in the container can be configured the same way as on the host system. Additionally, ''systemd-nspawn'' provides options to manage the {{ic|/etc/resolv.conf}} file inside the container:<br />
<br />
* {{ic|--resolv-conf}} can be used on command-line<br />
* {{ic|1=ResolvConf=}} can be used in ''.nspawn'' files<br />
<br />
These options have several possible values, which are described in {{man|1|systemd-nspawn|Integration Options}}. The default value is {{ic|auto}}, which means that:<br />
<br />
* If {{ic|--private-network}} is enabled, the {{ic|/etc/resolv.conf}} is left as it is in the container.<br />
* Otherwise, if [[systemd-resolved]] is running on the host, its stub {{ic|resolv.conf}} file is copied or bind-mounted into the container.<br />
* Otherwise, the {{ic|/etc/resolv.conf}} file is copied or bind-mounted from the host to the container.<br />
<br />
In the last two cases, the file is copied if the container root is writable, and bind-mounted if it is read-only.<br />
<br />
For the second case where [[systemd-resolved]] runs on the host, ''systemd-nspawn'' expects it to also run in the container, so that the container can use the stub symlink file {{ic|/etc/resolv.conf}} from the host. If not, the default value {{ic|auto}} no longer works, and you should replace the symlink by using one of the {{ic|replace-*}} options.<br />
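<br />
For example, to have ''systemd-nspawn'' replace the container's {{ic|/etc/resolv.conf}} with a static copy of the host's ''systemd-resolved'' stub file, one of the {{ic|replace-*}} values can be set in the ''.nspawn'' file:<br />
<br />
{{hc|/etc/systemd/nspawn/''container-name''.nspawn|2=<br />
[Exec]<br />
ResolvConf=replace-stub<br />
}}<br />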
<br />
== Tips and tricks ==<br />
<br />
=== Running non-shell/init commands ===<br />
<br />
From {{man|1|systemd-nspawn|Execution_Options}}:<br />
:''"[The option] {{ic|--as-pid2}} [invokes] the shell or specified program as process ID (PID) 2 instead of PID 1 (init). [...] It is recommended to use this mode to invoke arbitrary commands in containers, unless they have been modified to run correctly as PID 1. '''Or in other words: this switch should be used for pretty much all commands''', except when the command refers to an init or shell implementation."''<br />
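<br />
For example, to run a single command in the container and exit (the container path is a placeholder):<br />
<br />
 # systemd-nspawn -D ~/MyContainer --as-pid2 /usr/bin/id<br />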
<br />
=== Unprivileged containers ===<br />
<br />
''systemd-nspawn'' supports unprivileged containers, though the containers need to be booted as root.<br />
<br />
{{Style|Very little of [[Linux Containers#Enable support to run unprivileged containers (optional)]] applies to systemd-nspawn.}}<br />
<br />
{{Note|This feature requires {{man|7|user_namespaces}}, for further info see [[Linux Containers#Enable support to run unprivileged containers (optional)]].}}<br />
<br />
The easiest way to do this is to let ''systemd-nspawn'' automatically choose an unused range of UIDs/GIDs by using the {{ic|-U}} option:<br />
<br />
# systemd-nspawn -bUD ~/MyContainer<br />
<br />
If the kernel supports user namespaces, the {{ic|-U}} option is equivalent to {{ic|1=--private-users=pick --private-users-ownership=auto}}. See {{man|1|systemd-nspawn|User Namespacing Options}} for details.<br />
<br />
{{Note|You can also specify the UID/GID range of the container manually, however, this is rarely useful.}}<br />
<br />
If a container has been started with a private UID/GID range using the {{ic|1=--private-users-ownership=chown}} option (or on a filesystem where {{ic|-U}} requires {{ic|1=--private-users-ownership=chown}}), you need to keep using it that way to avoid permission errors. Alternatively, it is possible to undo the effect of {{ic|1=--private-users-ownership=chown}} on the container's file system by specifying a range of IDs starting at 0:<br />
<br />
# systemd-nspawn -D ~/MyContainer --private-users=0 --private-users-ownership=chown<br />
<br />
=== Use an X environment ===<br />
<br />
{{Accuracy|The note about the systemd version at the end of this section seems to be obsolete. For me (systemd version 239) X applications also work if {{ic|/tmp/.X11-unix}} is bound rw.|section=/tmp/.X11-unix contents have to be bind-mounted as read-only - still relevant?}}<br />
<br />
See [[Xhost]] and [[Change root#Run graphical applications from chroot]].<br />
<br />
You will need to set the {{ic|DISPLAY}} environment variable inside your container session to connect to the external X server.<br />
<br />
X stores some required files in the {{ic|/tmp}} directory. In order for your container to display anything, it needs access to those files. To do so, append the {{ic|--bind-ro<nowiki>=</nowiki>/tmp/.X11-unix}} option when starting the container.<br />
<br />
{{Note|Since systemd version 235, {{ic|/tmp/.X11-unix}} contents [https://github.com/systemd/systemd/issues/7093 have to be bind-mounted as read-only], otherwise they will disappear from the filesystem. The read-only mount flag does not prevent using the {{ic|connect()}} syscall on the socket. If you also bind {{ic|/run/user/1000}}, then you might want to explicitly bind {{ic|/run/user/1000/bus}} as read-only to protect the D-Bus socket from being deleted.}}<br />
<br />
==== Avoiding xhost ====<br />
<br />
{{ic|xhost}} only provides rather coarse access rights to the X server. More fine-grained access control is possible via the {{ic|$XAUTHORITY}} file. Unfortunately, just making the {{ic|$XAUTHORITY}} file accessible in the container will not do the job:<br />
your {{ic|$XAUTHORITY}} file is specific to your host, but the container is a different host.<br />
The following trick adapted from [https://stackoverflow.com/a/25280523 stackoverflow] can be used to make your X server accept the {{ic|$XAUTHORITY}} file from an X application run inside the container:<br />
<br />
$ XAUTH=/tmp/container_xauth<br />
$ xauth nextract - "$DISPLAY" | sed -e 's/^..../ffff/' | xauth -f "$XAUTH" nmerge -<br />
# systemd-nspawn -D myContainer --bind=/tmp/.X11-unix --bind="$XAUTH" -E DISPLAY="$DISPLAY" -E XAUTHORITY="$XAUTH" --as-pid2 /usr/bin/xeyes<br />
<br />
The second line above sets the connection family to "FamilyWild", value {{ic|65535}}, which causes the entry to match every display. See {{man|7|Xsecurity}} for more information.<br />
<br />
==== Using X nesting/Xephyr ====<br />
<br />
Another simple way to run X applications and avoid the risks of a shared X desktop is using X nesting.<br />
The advantages here are avoiding interaction between in-container applications and non-container applications entirely and being able to run a different [[desktop environment]] or [[window manager]], the downsides are less performance and the lack of hardware acceleration when using [[Xephyr]].<br />
<br />
Start Xephyr outside of the container using:<br />
<br />
# Xephyr :1 -resizeable<br />
<br />
Then start the container with the following options:<br />
<br />
--setenv=DISPLAY=:1 --bind-ro<nowiki>=</nowiki>/tmp/.X11-unix/X1<br />
<br />
No other binds are necessary.<br />
<br />
You might still need to manually set {{ic|1=DISPLAY=:1}} in the container under some circumstances (mostly if used with {{ic|-b}}).<br />
<br />
==== Run Firefox ====<br />
<br />
# systemd-nspawn --setenv=DISPLAY=:0 \<br />
--setenv=XAUTHORITY=~/.Xauthority \<br />
--bind-ro=$HOME/.Xauthority:/root/.Xauthority \<br />
--bind=/tmp/.X11-unix \<br />
-D ~/containers/firefox \<br />
--as-pid2 \<br />
firefox<br />
<br />
{{Note|As such, ''firefox'' is run as the root user, which comes with its own risks if not using [[#Unprivileged containers]]. In that case, you may first opt to [[Users_and_groups#Example_adding_a_user|add a user]] inside the container, and then add the {{ic|--user <username>}} option to the ''systemd-nspawn'' invocation.}}<br />
<br />
Alternatively you can boot the container and let e.g. [[systemd-networkd]] set up the virtual network interface:<br />
<br />
# systemd-nspawn --bind-ro=$HOME/.Xauthority:/root/.Xauthority \<br />
--bind=/tmp/.X11-unix \<br />
-D ~/containers/firefox \<br />
--network-veth -b<br />
<br />
Once your container is booted, run the X application like so:<br />
<br />
# systemd-run -M firefox --setenv=DISPLAY=:0 firefox<br />
<br />
==== 3D graphics acceleration ====<br />
<br />
To enable accelerated 3D graphics, it may be necessary to bind mount {{ic|/dev/dri}} to the container by adding the following line to the ''.nspawn'' file:<br />
<br />
Bind=/dev/dri<br />
<br />
The above trick was adapted from [https://web.archive.org/web/20190925003151/https://patrickskiba.com/sysytemd-nspawn/2019/03/21/graphical-applications-in-systemd-nspawn.html patrickskiba.com]. This notably solves the problem of<br />
<br />
libGL error: MESA-LOADER: failed to retrieve device information<br />
libGL error: Version 4 or later of flush extension not found<br />
libGL error: failed to load driver: i915<br />
<br />
You can confirm that it has been enabled by running {{ic|glxinfo}} or {{ic|glxgears}}.<br />
<br />
=== Access host filesystem ===<br />
<br />
See {{ic|--bind}} and {{ic|--bind-ro}} in {{man|1|systemd-nspawn}}.<br />
<br />
If both the host and the container are Arch Linux, then one could, for example, share the pacman cache:<br />
<br />
# systemd-nspawn --bind=/var/cache/pacman/pkg<br />
<br />
Or you can specify a per-container bind in an ''.nspawn'' file:<br />
<br />
{{hc|/etc/systemd/nspawn/''my-container''.nspawn|<nowiki><br />
[Files]<br />
Bind=/var/cache/pacman/pkg<br />
</nowiki>}}<br />
<br />
See [[#Per-container settings]].<br />
<br />
To bind the directory to a different path within the container, append the container path, separated by a colon. For example:<br />
<br />
# systemd-nspawn --bind=''/path/to/host_dir:/path/to/container_dir''<br />
<br />
=== Run on a non-systemd system ===<br />
<br />
See [[Init#systemd-nspawn]].<br />
<br />
=== Use Btrfs subvolume as container root ===<br />
<br />
To use a [[Btrfs#Subvolumes|Btrfs subvolume]] as a template for the container's root, use the {{ic|--template}} flag. This takes a snapshot of the subvolume and populates the root directory for the container with it.<br />
<br />
{{Note|If the template path specified is not the root of a subvolume, the '''entire''' tree is copied. This will be very time consuming.}}<br />
<br />
For example, to use a snapshot located at {{ic|/.snapshots/403/snapshot}}:<br />
<br />
 # systemd-nspawn --template=/.snapshots/403/snapshot -b -D ''my-container''<br />
<br />
where {{ic|''my-container''}} is the name of the directory that will be created for the container. After powering off, the newly created subvolume is retained.<br />
<br />
=== Use temporary Btrfs snapshot of container ===<br />
<br />
One can use the {{ic|--ephemeral}} or {{ic|-x}} flag to create a temporary btrfs snapshot of the container and use it as the container root. Any changes made while booted in the container will be lost. For example:<br />
<br />
# systemd-nspawn -D ''my-container'' -xb<br />
<br />
where ''my-container'' is the directory of an '''existing''' container or system. For example, if {{ic|/}} is a btrfs subvolume one could create an ephemeral container of the currently running host system by doing:<br />
<br />
# systemd-nspawn -D / -xb <br />
<br />
After powering off the container, the btrfs subvolume that was created is immediately removed.<br />
<br />
=== Run docker in systemd-nspawn ===<br />
<br />
Since [[Docker]] 20.10, it is possible to run Docker containers inside an unprivileged ''systemd-nspawn'' container with ''cgroups v2'' enabled (default in Arch Linux) without undermining security measures by disabling cgroups and user namespaces. To do so, edit {{ic|/etc/systemd/nspawn/myContainer.nspawn}} (create if absent) and add the following configurations.<br />
<br />
{{hc|/etc/systemd/nspawn/myContainer.nspawn|<nowiki><br />
[Exec]<br />
SystemCallFilter=add_key keyctl<br />
</nowiki>}}<br />
<br />
Then, Docker should work as-is inside the container.<br />
<br />
{{Note|The configuration above exposes the system calls ''add_key'' and ''keyctl'' to the container, which are not namespaced. This could still be a security risk, even though it is much lower than disabling user namespacing entirely like what one had to do before cgroups v2.}}<br />
<br />
Since ''overlayfs'' does not work with user namespaces and is unavailable inside ''systemd-nspawn'', by default, Docker falls back to using the inefficient ''vfs'' as its storage driver, which creates a copy of the image each time a container is started. This can be worked around by using ''fuse-overlayfs'' as its storage driver. To do so, we need to first expose ''fuse'' to the container:<br />
<br />
{{hc|/etc/systemd/nspawn/myContainer.nspawn|<nowiki><br />
[Files]<br />
Bind=/dev/fuse<br />
</nowiki>}}<br />
<br />
and then allow the container to read and write the device node:<br />
<br />
# systemctl set-property systemd-nspawn@myContainer DeviceAllow='/dev/fuse rwm'<br />
<br />
Finally, install the package {{Pkg|fuse-overlayfs}} inside the container. You need to restart the container for all the configuration to take effect.<br />
<br />
== Troubleshooting ==<br />
<br />
=== Root login fails ===<br />
<br />
If you get the following error when you try to login (e.g. using {{ic|machinectl login <name>}}):<br />
<br />
arch-nspawn login: root<br />
Login incorrect<br />
<br />
And the [[journal]] shows:<br />
<br />
pam_securetty(login:auth): access denied: tty 'pts/0' is not secure !<br />
<br />
{{Accuracy|Files in {{ic|/usr/lib}} should not be edited by users, the change in {{ic|/usr/lib/tmpfiles.d/arch.conf}} will be lost when {{pkg|filesystem}} is upgraded.}}<br />
<br />
It is possible to either delete {{ic|/etc/securetty}}[https://unix.stackexchange.com/questions/41840/effect-of-entries-in-etc-securetty/41939#41939] and {{ic|/usr/share/factory/etc/securetty}} on the '''container''' file system, or simply add the desired pty terminal devices (like {{ic|pts/0}}), as necessary, to {{ic|/etc/securetty}} on the '''container''' file system. Any changes will be overridden on the next boot, therefore it is necessary to also remove the {{ic|/etc/securetty}} entry from {{ic|/usr/lib/tmpfiles.d/arch.conf}} on the '''container''' file system, see {{Bug|63236}}. If you opt for deletion, you might also optionally blacklist the files ([[pacman#Skip files from being installed to system|NoExtract]]) in {{ic|/etc/pacman.conf}} to prevent them from getting reinstalled. See {{Bug|45903}} for details.<br />
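<br />
For example, a {{ic|NoExtract}} rule along these lines (paths are given relative to the filesystem root, without a leading slash) prevents the files from being reinstalled:<br />
<br />
{{hc|/etc/pacman.conf|2=<br />
NoExtract = etc/securetty usr/share/factory/etc/securetty<br />
}}<br />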
<br />
=== execv(...) failed: Permission denied ===<br />
<br />
When trying to boot the container via {{ic|systemd-nspawn -bD ''/path/to/container''}} (or executing something in the container), and the following error comes up:<br />
<br />
execv(/usr/lib/systemd/systemd, /lib/systemd/systemd, /sbin/init) failed: Permission denied<br />
<br />
even though the permissions of the files in question (i.e. {{ic|/lib/systemd/systemd}}) are correct, this can be the result of having mounted the file system on which the container is stored as non-root user. For example, if you mount your disk manually with an entry in [[fstab]] that has the options {{ic|noauto,user,...}}, ''systemd-nspawn'' will not allow executing the files even if they are owned by root.<br />
<br />
=== Terminal type in TERM is incorrect (broken colors) ===<br />
<br />
When logging into the container via {{ic|machinectl login}}, the colors and keystrokes in the terminal within the container might be broken. This may be due to an incorrect terminal type in {{ic|TERM}} environment variable. The environment variable is not inherited from the shell on the host, but falls back to a default fixed in systemd ({{ic|vt220}}), unless explicitly configured. To configure, within the container create a configuration overlay for the {{ic|container-getty@.service}} systemd service that launches the login getty for {{ic|machinectl login}}, and set {{ic|TERM}} to the value that matches the host terminal you are logging in from:<br />
<br />
{{hc|/etc/systemd/system/container-getty@.service.d/term.conf|2=<br />
[Service]<br />
Environment=TERM=xterm-256color<br />
}}<br />
<br />
Alternatively use {{ic|machinectl shell}}. It properly inherits the {{ic|TERM}} environment variable from the terminal.<br />
<br />
=== Mounting a NFS share inside the container ===<br />
<br />
Not possible at this time (June 2019).<br />
<br />
== See also ==<br />
<br />
* [[Getty#Nspawn_console|Automatic console login]]<br />
* [https://lwn.net/Articles/572957/ Creating containers with systemd-nspawn]<br />
* [https://www.youtube.com/results?search_query=systemd-nspawn&aq=f Presentation by Lennart Poettering on systemd-nspawn]<br />
* [https://dabase.com/e/12009/ Running Firefox in a systemd-nspawn container]<br />
* [https://web.archive.org/web/20190925003151/https://patrickskiba.com/sysytemd-nspawn/2019/03/21/graphical-applications-in-systemd-nspawn.html Graphical applications in systemd-nspawn]</div>Cmsiglerhttps://wiki.archlinux.org/index.php?title=Systemd-networkd&diff=718395Systemd-networkd2022-02-14T15:31:46Z<p>Cmsigler: Correct IPMasquerade= setting shown in this page to comply with current systemd.network configuration options; "true" is now deprecated, please see {{man|5|systemd.network}}</p>
<hr />
<div>{{Lowercase title}}<br />
[[Category:Network managers]]<br />
[[Category:Virtualization]]<br />
[[de:Systemd/systemd-networkd]]<br />
[[es:Systemd-networkd]]<br />
[[fr:Systemd-networkd]]<br />
[[ja:systemd-networkd]]<br />
[[ru:Systemd-networkd]]<br />
[[zh-hans:Systemd-networkd]]<br />
{{Related articles start}}<br />
{{Related|systemd}}<br />
{{Related|systemd-resolved}}<br />
{{Related|systemd-nspawn}}<br />
{{Related|Network bridge}}<br />
{{Related|Network configuration}}<br />
{{Related|Wireless network configuration}}<br />
{{Related|:Category:Network configuration}}<br />
{{Related articles end}}<br />
<br />
''systemd-networkd'' is a system daemon that manages network configurations. It detects and configures network devices as they appear; it can also create virtual network devices. This service can be especially useful to set up complex network configurations for a container managed by [[systemd-nspawn]] or for virtual machines. It also works fine on simple connections.<br />
<br />
== Basic usage ==<br />
<br />
The {{Pkg|systemd}} package is part of the default Arch installation and contains all needed files to operate a wired network. Wireless adapters, covered later in this article, can be set up by services, such as [[wpa_supplicant]] or [[iwd]].<br />
<br />
=== Required services and setup ===<br />
<br />
To use ''systemd-networkd'', [[start/enable]] {{ic|systemd-networkd.service}}.<br />
<br />
{{Note|You must ensure that no other service that wants to configure the network is running; in fact, multiple networking services will conflict. You can find a list of the currently running services with {{ic|1=systemctl --type=service}} and then [[stop]] them.}}<br />
<br />
It is optional to also configure [[systemd-resolved]], which is a network name resolution service to local applications, considering the following points:<br />
<br />
* It is important to understand how [[resolv.conf]] and ''systemd-resolved'' interact to properly configure the DNS that will be used, some explanations are provided in [[systemd-resolved]].<br />
* ''systemd-resolved'' is required if DNS entries are specified in ''.network'' files.<br />
* ''systemd-resolved'' is also required if you want to obtain DNS addresses from DHCP servers or IPv6 router advertisements<br>(by setting {{ic|1=DHCP=}} and/or {{ic|1=IPv6AcceptRA=}} in the {{ic|[Network]}} section, and {{ic|1=UseDNS=yes}} (the default) in the corresponding {{ic|[DHCPv4]}}, {{ic|[DHCPv6]}} or {{ic|[IPv6AcceptRA]}} section(s), see {{man|5|systemd.network}}).<br />
* Note that ''systemd-resolved'' can also be used without ''systemd-networkd''.<br />
<br />
=== systemd-networkd-wait-online ===<br />
<br />
Enabling {{ic|systemd-networkd.service}} also enables {{ic|systemd-networkd-wait-online.service}}, which is a oneshot system service that waits for the network to be configured. The latter has {{ic|1=WantedBy=network-online.target}}, so it will be started only when {{ic|network-online.target}} itself is enabled or pulled in by some other unit. See also [[systemd#Running services after the network is up]].<br />
<br />
By default, {{ic|systemd-networkd-wait-online.service}} waits for all links it is aware of and which are managed by ''systemd-networkd'' to be fully configured or failed, and for at least one link to be online.<br />
<br />
If your system has multiple network interfaces, but some are not expected to be connected all the time (e.g. if you have a dual-port Ethernet card, but only one cable plugged in), starting {{ic|systemd-networkd-wait-online.service}} will fail after the default timeout of 2 minutes. This may cause an unwanted delay in the startup process. To change the behaviour to wait for ''any'' interface rather than ''all'' interfaces to become online, [[edit]] the service and add the {{ic|--any}} parameter to the {{ic|ExecStart}} line:<br />
<br />
{{hc|/etc/systemd/system/systemd-networkd-wait-online.service.d/wait-for-only-one-interface.conf|2=<br />
[Service]<br />
ExecStart=<br />
ExecStart=/usr/lib/systemd/systemd-networkd-wait-online --any<br />
}}<br />
<br />
Other behaviour such as which specific interface(s) to wait for or the operational state can be configured as well. See {{man|8|systemd-networkd-wait-online}} for the available parameters.<br />
<br />
=== Configuration examples ===<br />
<br />
All configurations in this section are stored as {{ic|foo.network}} in {{ic|/etc/systemd/network/}}. For a full listing of options and processing order, see [[#Configuration files]] and {{man|5|systemd.network}}.<br />
<br />
systemd/udev automatically assigns predictable, stable network interface names for all local Ethernet, WLAN, and WWAN interfaces. Use {{ic|networkctl list}} to list the devices on the system.<br />
<br />
After making changes to a configuration file, [[restart]] {{ic|systemd-networkd.service}}.<br />
<br />
{{Note|<br />
* The options specified in the configuration files are case sensitive.<br />
* In the examples below, {{ic|enp1s0}} is the wired adapter and {{ic|wlp2s0}} is the wireless adapter. These names can be different on different systems. See [[Network configuration#Network interfaces]] for checking your adapter names.<br />
* It is also possible to use a wildcard, e.g. {{ic|1=Name=en*}} or {{ic|1=Name=wl*}}.<br />
* Devices can also be matched by their type. E.g. {{ic|1=Type=ether}} for Ethernet, {{ic|1=Type=wlan}} for Wi-Fi and {{ic|1=Type=wwan}} for WWAN. Note that {{ic|1=Type=ether}} will also match virtual Ethernet interfaces ({{ic|veth*}}), which may be undesirable.<br />
* If you want to disable IPv6, see [[IPv6#systemd-networkd_3|IPv6#systemd-networkd]].<br />
}}<br />
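<br />
For example, to match all Ethernet interfaces while excluding virtual {{ic|veth*}} devices, the match conditions can be combined:<br />
<br />
{{hc|/etc/systemd/network/20-ethernet.network|2=<br />
[Match]<br />
Type=ether<br />
Name=!veth*<br />
<br />
[Network]<br />
DHCP=yes<br />
}}<br />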
<br />
==== Wired adapter using DHCP ====<br />
<br />
{{hc|/etc/systemd/network/20-wired.network|2=<br />
[Match]<br />
Name=enp1s0<br />
<br />
[Network]<br />
DHCP=yes<br />
}}<br />
<br />
==== Wired adapter using a static IP ====<br />
<br />
{{hc|/etc/systemd/network/20-wired.network|2=<br />
[Match]<br />
Name=enp1s0<br />
<br />
[Network]<br />
Address=10.1.10.9/24<br />
Gateway=10.1.10.1<br />
DNS=10.1.10.1<br />
}}<br />
<br />
{{ic|1=Address=}} can be used more than once to configure multiple IPv4 or IPv6 addresses. See [[#network files]] or {{man|5|systemd.network}} for more options.<br />
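<br />
For example, to assign both an IPv4 and an IPv6 address to the same interface (the addresses shown are placeholders for your own):<br />
<br />
{{hc|/etc/systemd/network/20-wired.network|2=<br />
[Match]<br />
Name=enp1s0<br />
<br />
[Network]<br />
Address=10.1.10.9/24<br />
Address=fd00:1:2:3::9/64<br />
Gateway=10.1.10.1<br />
}}<br />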
<br />
==== Wireless adapter ====<br />
<br />
In order to connect to a wireless network with ''systemd-networkd'', a wireless adapter configured with another application such as [[wpa_supplicant]] or [[iwd]] is required.<br />
<br />
{{hc|/etc/systemd/network/25-wireless.network|2=<br />
[Match]<br />
Name=wlp2s0<br />
<br />
[Network]<br />
DHCP=yes<br />
IgnoreCarrierLoss=3s<br />
}}<br />
<br />
If the wireless adapter has a static IP address, the configuration is the same (except for the interface name) as in a [[#Wired adapter using a static IP|wired adapter]].<br />
<br />
{{Tip|{{ic|1=IgnoreCarrierLoss=3s}} ensures that ''systemd-networkd'' will not re-configure the interface (e.g., release and re-acquire a DHCP lease) for a short period (3 seconds in this example) while the wireless interface roams to another access point within the same wireless network (SSID), which translates to shorter downtime when roaming.}}<br />
<br />
To authenticate to the wireless network, use e.g. [[wpa_supplicant]] or [[iwd]].<br />
<br />
==== Wired and wireless adapters on the same machine ====<br />
<br />
This setup will enable a DHCP IP for both a wired and wireless connection making use of the metric directive to allow the kernel to decide on-the-fly which one to use. This way, no connection downtime is observed when the wired connection is unplugged.<br />
<br />
The kernel's route metric (same as configured with ''ip'') decides which route to use for outgoing packets, in cases when several match. This will be the case when both wireless and wired devices on the system have active connections. To break the tie, the kernel uses the metric. If one of the connections is terminated, the other automatically wins without there being a gap with nothing configured (ongoing transfers may still not deal with this nicely but that is at a different OSI layer).<br />
<br />
{{Note|The {{ic|Metric}} option is for static routes while the {{ic|RouteMetric}} option is for setups not using static routes. See {{man|5|systemd.network}} for more details.}}<br />
<br />
{{hc|/etc/systemd/network/20-wired.network|2=<br />
[Match]<br />
Name=enp1s0<br />
<br />
[Network]<br />
DHCP=yes<br />
<br />
[DHCPv4]<br />
RouteMetric=10<br />
}}<br />
<br />
{{hc|/etc/systemd/network/25-wireless.network|2=<br />
[Match]<br />
Name=wlp2s0<br />
<br />
[Network]<br />
DHCP=yes<br />
<br />
[DHCPv4]<br />
RouteMetric=20<br />
}}<br />
<br />
If using IPv6, you will need to separately set the metric for the IPv6 routes too, such as:<br />
<br />
{{hc|/etc/systemd/network/20-wired.network|2=<br />
...<br />
<br />
[IPv6AcceptRA]<br />
RouteMetric=10<br />
}}<br />
<br />
{{hc|/etc/systemd/network/25-wireless.network|2=<br />
...<br />
<br />
[IPv6AcceptRA]<br />
RouteMetric=20<br />
}}<br />
<br />
==== Renaming an interface ====<br />
<br />
Instead of [[Network configuration#Change interface name|editing udev rules]], a ''.link'' file can be used to rename an interface. A useful example is to set a predictable interface name for a USB-to-Ethernet adapter based on its MAC address, as those adapters are usually given different names depending on which USB port they are plugged into.<br />
<br />
{{hc|head=/etc/systemd/network/10-ethusb0.link|output=<br />
[Match]<br />
MACAddress=12:34:56:78:90:ab<br />
<br />
[Link]<br />
Description=USB to Ethernet Adapter<br />
Name=ethusb0<br />
}}<br />
<br />
{{Note|Any user-supplied ''.link'' '''must''' have a lexically earlier file name than the default config {{ic|99-default.link}} in order to be considered at all. For example, name the file {{ic|10-ethusb0.link}} and not {{ic|ethusb0.link}}.}}<br />
<br />
== Configuration files ==<br />
<br />
Configuration files are located in {{ic|/usr/lib/systemd/network/}}, the volatile runtime network directory {{ic|/run/systemd/network/}} and the local administration network directory {{ic|/etc/systemd/network/}}. Files in {{ic|/etc/systemd/network/}} have the highest priority.<br />
<br />
There are three types of configuration files. They all use a format similar to [[systemd#Writing unit files|systemd unit files]]. <br />
<br />
* ''.network'' files. They will apply a network configuration for a ''matching'' device<br />
* ''.netdev'' files. They will create a ''virtual network device'' for a ''matching'' environment<br />
* ''.link'' files. When a network device appears, [[udev]] will look for the first ''matching'' ''.link'' file<br />
<br />
They all follow the same rules: <br />
<br />
* If '''all''' conditions in the {{ic|[Match]}} section are matched, the profile will be activated<br />
* an empty {{ic|[Match]}} section means the profile will apply in any case (can be compared to the {{ic|*}} wildcard)<br />
* all configuration files are collectively sorted and processed in lexical order, regardless of the directory in which they live<br />
* files with identical name replace each other<br />
<br />
{{Tip|<br />
* Files in {{ic|/etc/systemd/network/}} override the corresponding system-supplied file in {{ic|/usr/lib/systemd/network/}}. You can also create a symlink to {{ic|/dev/null}} to "mask" a system file.<br />
* systemd accepts the values {{ic|1}}, {{ic|true}}, {{ic|yes}}, {{ic|on}} for a true boolean, and the values {{ic|0}}, {{ic|false}}, {{ic|no}}, {{ic|off}} for a false boolean. See {{man|7|systemd.syntax}}.<br />
}}<br />
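<br />
For example, to mask the system-supplied {{ic|80-container-ve.network}} so that ''systemd-networkd'' no longer auto-configures the host side of {{ic|ve-*}} links:<br />
<br />
 # ln -s /dev/null /etc/systemd/network/80-container-ve.network<br />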
<br />
=== network files ===<br />
<br />
{{Remove|Duplicates the {{man|5|systemd.network}} man page.}}<br />
<br />
These files are aimed at setting network configuration variables, especially for servers and containers.<br />
<br />
''.network'' files have the following sections: {{ic|[Match]}}, {{ic|[Link]}}, {{ic|[Network]}}, {{ic|[Address]}}, {{ic|[Route]}}, and {{ic|[DHCPv4]}}. Below are commonly configured keys for each section. See {{man|5|systemd.network}} for more information and examples.<br />
<br />
==== [Match] ====<br />
<br />
{| class = "wikitable"<br />
! Parameter !! Description !! Accepted Values !! Default Value<br />
|-<br />
| {{ic|1=Name=}} || Match device names, e.g. {{ic|en*}}. By prefixing with {{ic|!}}, the list can be inverted. || white-space separated device names with globs, logical negation ({{ic|!}}) ||<br />
|-<br />
| {{ic|1=MACAddress=}} || Match MAC addresses, e.g. {{ic|1=MACAddress=01:23:45:67:89:ab 00-11-22-33-44-55 AABB.CCDD.EEFF}} || whitespace-separated MAC addresses in full colon-, hyphen- or dot-delimited hexadecimal || <br />
|-<br />
| {{ic|1=Host=}} || Match the hostname or machine ID of the host. || hostname string with globs, {{man|5|machine-id}} ||<br />
|-<br />
| {{ic|1=Virtualization=}} || Check whether the system is executed in a virtualized environment. {{ic|1=Virtualization=false}} will only match your host machine, while {{ic|1=Virtualization=true}} matches any container or VM. It is possible to check for a specific virtualization type or implementation, or for a user namespace (with {{ic|private-users}}). || boolean, logical negation ({{ic|!}}), type ({{ic|vm}}, {{ic|container}}), implementation (see {{man|1|systemd-detect-virt}}), {{ic|private-users}} ||<br />
|}<br />
<br />
==== [Link] ====<br />
<br />
{| class = "wikitable"<br />
! Parameter !! Description !! Accepted Values !! Default Value<br />
|-<br />
| {{ic|1=MACAddress=}} || Assign a hardware address to the device. Useful for [[MAC_address_spoofing#systemd-networkd|MAC address spoofing]]. || full colon-, hyphen- or dot-delimited hexadecimal MAC addresses ||<br />
|-<br />
| {{ic|1=MTUBytes=}} || Maximum transmission unit in bytes to set for the device. Note that if IPv6 is enabled on the interface and the MTU is set below 1280 (the minimum MTU for IPv6), it will automatically be increased to 1280. Setting a larger MTU (e.g. when using [[jumbo frames]]) can significantly speed up network transfers. || integer (the usual suffixes K, M, G are supported and are understood to the base of 1024) ||<br />
|-<br />
| {{ic|1=Multicast=}} || Allows the use of [[wikipedia:Multicast_address|multicast]]. || boolean || ''not documented''<br />
|}<br />
<br />
==== [Network] ====<br />
<br />
{| class = "wikitable"<br />
! Parameter !! Description !! Accepted Values !! Default Value<br />
|-<br />
| {{ic|1=DHCP=}} || Controls DHCPv4 and/or DHCPv6 client support. || boolean, {{ic|ipv4}}, {{ic|ipv6}} || {{ic|false}}<br />
|-<br />
| {{ic|1=DHCPServer=}} || If enabled, a DHCPv4 server will be started. || boolean || {{ic|false}}<br />
|-<br />
| {{ic|1=MulticastDNS=}} || Enables [[RFC:6762|multicast DNS]] support. When set to {{ic|resolve}}, only resolution is enabled, but not host or service registration and announcement. || boolean, {{ic|resolve}} || {{ic|false}}<br />
|-<br />
| {{ic|1=DNSSEC=}} || Controls DNSSEC DNS validation support on the link. When set to {{ic|allow-downgrade}}, compatibility with non-DNSSEC capable networks is increased, by automatically turning off DNSSEC in this case. || boolean, {{ic|allow-downgrade}} || {{ic|false}}<br />
|-<br />
| {{ic|1=DNS=}} || Configure static [[DNS]] addresses. May be specified more than once. || {{man|3|inet_pton}} ||<br />
|-<br />
| {{ic|1=Domains=}} || A list of domains which should be resolved using the DNS servers on this link. [https://www.freedesktop.org/software/systemd/man/systemd.network.html#Domains= more information] || domain name, optionally prefixed with a tilde ({{ic|~}}) ||<br />
|-<br />
| {{ic|1=IPForward=}} || If enabled, incoming packets on any network interface will be forwarded to any other interfaces according to the routing table. See [[Internet sharing#Enable packet forwarding]] for details. || boolean, {{ic|ipv4}}, {{ic|ipv6}} || {{ic|false}}<br />
|-<br />
| {{ic|1=IPMasquerade=}} || If enabled, packets forwarded from the network interface will appear as coming from the local host. Depending on the value, implies {{ic|1=IPForward=ipv4}}, {{ic|1=IPForward=ipv6}} or {{ic|1=IPForward=yes}}. || {{ic|ipv4}}, {{ic|ipv6}}, {{ic|both}}, {{ic|no}} || {{ic|no}}<br />
|-<br />
| {{ic|1=IPv6PrivacyExtensions=}} || Configures use of stateless temporary addresses that change over time (see [[RFC:4941|RFC 4941]]). When {{ic|prefer-public}}, enables the privacy extensions, but prefers public addresses over temporary addresses. When {{ic|kernel}}, the kernel's default setting will be left in place. || boolean, {{ic|prefer-public}}, {{ic|kernel}} || {{ic|false}}<br />
|}<br />
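<br />
For instance, a minimal ''.network'' file combining some of the keys above might look as follows (the interface name and DNS address are only examples):<br />
<br />
{{hc|/etc/systemd/network/''20-wired''.network|2=<br />
[Match]<br />
Name=enp1s0<br />
<br />
[Network]<br />
DHCP=yes<br />
DNS=9.9.9.9<br />
IPv6PrivacyExtensions=true<br />
}}<br />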
<br />
==== [Address] ====<br />
<br />
{| class = "wikitable"<br />
! Parameter !! Description !! Accepted Values !! Default Value<br />
|-<br />
| {{ic|1=Address=}} || Specify this key more than once to configure several addresses. Mandatory unless DHCP is used. If the specified address is {{ic|0.0.0.0}} (for IPv4) or {{ic|::}} (for IPv6), a new address range of the requested size is automatically allocated from a system-wide pool of unused ranges. || static IPv4 or IPv6 address and its prefix length (see {{man|3|inet_pton}}) ||<br />
|}<br />
<br />
==== [Route] ====<br />
<br />
* {{ic|1=Gateway=}} this option is '''mandatory''' unless DHCP is used<br />
* {{ic|1=Destination=}} the destination prefix of the route, possibly followed by a slash and the prefix length <br />
<br />
If {{ic|1=Destination=}} is not present in the {{ic|[Route]}} section, the route is treated as a default route.<br />
<br />
{{Tip|You can put the {{ic|1=Address=}} and {{ic|1=Gateway=}} keys in the {{ic|[Network]}} section as a short-hand, if the {{ic|[Address]}} section contains only an {{ic|Address}} key and the {{ic|[Route]}} section contains only a {{ic|Gateway}} key.}}<br />
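<br />
For example, a static configuration using this short-hand (all names and addresses are examples only):<br />
<br />
{{hc|/etc/systemd/network/''20-wired''.network|2=<br />
[Match]<br />
Name=enp1s0<br />
<br />
[Network]<br />
Address=10.1.10.9/24<br />
Gateway=10.1.10.1<br />
DNS=10.1.10.1<br />
}}<br />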
<br />
==== [DHCPv4] ====<br />
<br />
{| class = "wikitable"<br />
! Parameter !! Description !! Accepted Values !! Default Value<br />
|-<br />
| {{ic|1=UseDNS=}} || controls whether the DNS servers advertised by the DHCP server are used || boolean || {{ic|true}}<br />
|-<br />
| {{ic|1=Anonymize=}} || when true, the options sent to the DHCP server will follow the [[RFC:7844]] (Anonymity Profiles for DHCP Clients) to minimize disclosure of identifying information || boolean || {{ic|false}}<br />
|-<br />
| {{ic|1=UseDomains=}} || controls whether the domain name received from the DHCP server will be used as DNS search domain. If set to {{ic|route}}, the domain name received from the DHCP server will be used for routing DNS queries only, but not for searching. This option can sometimes fix local name resolving when using [[systemd-resolved]] || boolean, {{ic|route}} || {{ic|false}}<br />
|}<br />
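<br />
For example, to use DHCP on an interface while ignoring the DNS servers advertised by the DHCP server (the interface name is an example):<br />
<br />
{{hc|/etc/systemd/network/''20-wired''.network|2=<br />
[Match]<br />
Name=enp1s0<br />
<br />
[Network]<br />
DHCP=ipv4<br />
<br />
[DHCPv4]<br />
UseDNS=false<br />
}}<br />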
<br />
==== [DHCPServer] ====<br />
<br />
This is an example of a DHCP server configuration which works well with [[hostapd]] to create a wireless hotspot. {{ic|IPMasquerade}} adds the firewall rules for [[Internet sharing#Enable NAT|NAT]] and implies {{ic|1=IPForward=ipv4}} to enable [[Internet sharing#Enable packet forwarding|packet forwarding]].<br />
<br />
{{Accuracy|{{ic|1=IPMasquerade=ipv4}} does not add the rules for the {{ic|filter}} table, they have to be added manually. See [[systemd-nspawn#Use a virtual Ethernet link]].}}<br />
<br />
{{hc|/etc/systemd/network/''wlan0''.network|<nowiki><br />
[Match]<br />
Name=wlan0<br />
<br />
[Network]<br />
Address=10.1.1.1/24<br />
DHCPServer=true<br />
IPMasquerade=ipv4<br />
<br />
[DHCPServer]<br />
PoolOffset=100<br />
PoolSize=20<br />
EmitDNS=yes<br />
DNS=9.9.9.9<br />
</nowiki>}}<br />
<br />
=== netdev files ===<br />
<br />
{{Remove|Duplicates the {{man|5|systemd.netdev}} man page.}}<br />
<br />
These files will create virtual network devices. They have two sections: {{ic|[Match]}} and {{ic|[NetDev]}}. Below are commonly configured keys for each section. See {{man|5|systemd.netdev}} for more information and examples.<br />
<br />
==== [Match] section ====<br />
<br />
* {{ic|1=Host=}} the hostname<br />
* {{ic|1=Virtualization=}} check if the system is running in a virtualized environment<br />
<br />
==== [NetDev] section ====<br />
<br />
Most common keys are:<br />
<br />
* {{ic|1=Name=}} the interface name ('''mandatory''')<br />
* {{ic|1=Kind=}} the device type, e.g. ''bridge'', ''bond'', ''vlan'', ''veth'', ''sit'' ('''mandatory''')<br />
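<br />
For example, a ''.netdev'' file creating a VLAN device (the name and VLAN ID are examples; the device must also be referenced with {{ic|1=VLAN=}} in the ''.network'' file of the underlying interface):<br />
<br />
{{hc|/etc/systemd/network/''vlan10''.netdev|2=<br />
[NetDev]<br />
Name=vlan10<br />
Kind=vlan<br />
<br />
[VLAN]<br />
Id=10<br />
}}<br />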
<br />
=== link files ===<br />
<br />
{{Remove|Duplicates the {{man|5|systemd.link}} man page.}}<br />
<br />
These files are an alternative to custom udev rules and will be applied by [[udev]] as the device appears. They have two sections: {{ic|[Match]}} and {{ic|[Link]}}. Below are commonly configured keys for each section. See {{man|5|systemd.link}} for more information and examples.<br />
<br />
{{Tip|Use {{ic|udevadm test-builtin net_setup_link /sys/path/to/network/device}} as the root user to diagnose problems with ''.link'' files.}}<br />
<br />
==== [Match] section ====<br />
<br />
* {{ic|1=MACAddress=}} the MAC address<br />
* {{ic|1=Host=}} the host name<br />
* {{ic|1=Virtualization=}} <br />
* {{ic|1=Type=}} the device type e.g. vlan<br />
<br />
==== [Link] section ====<br />
<br />
* {{ic|1=MACAddressPolicy=}} persistent or random addresses, or<br />
* {{ic|1=MACAddress=}} a specific address<br />
* {{ic|1=NamePolicy=}} list of policies by which the interface name should be set, e.g. kernel, keep<br />
<br />
{{Note|The system-provided {{ic|/usr/lib/systemd/network/99-default.link}} is generally sufficient for most basic cases.}}<br />
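<br />
For example, to give a device with a specific MAC address a fixed name (both values are examples):<br />
<br />
{{hc|/etc/systemd/network/''10-ethusb0''.link|2=<br />
[Match]<br />
MACAddress=12:34:56:78:90:ab<br />
<br />
[Link]<br />
Name=ethusb0<br />
}}<br />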
<br />
== Usage with containers ==<br />
<br />
''systemd-networkd'' can provide fully automatic configuration of networking for [[systemd-nspawn]] containers when it is used on the host system as well as inside the container. See [[systemd-nspawn#Networking]] for a comprehensive overview.<br />
<br />
For the examples below, <br />
* the output of the {{ic|ip a}} command is limited to the relevant interfaces.<br />
* the ''host'' is the main OS you boot into and the ''container'' is the guest running inside it.<br />
* all interface names and IP addresses are only examples.<br />
<br />
=== Network bridge with DHCP ===<br />
<br />
==== Bridge interface ====<br />
<br />
First, create a virtual [[bridge]] interface. We tell systemd to create a device named ''br0'' that functions as an ethernet bridge.<br />
<br />
{{hc|/etc/systemd/network/''mybridge''.netdev|2=<br />
[NetDev]<br />
Name=br0<br />
Kind=bridge}}<br />
<br />
{{Tip|''systemd-networkd'' assigns the bridge a MAC address generated from the interface name and the machine ID. This may cause connection issues, for example with routing based on MAC filtering. To circumvent such problems, you may assign a MAC address to your bridge, for example the same as your physical device's, by adding the line {{ic|1=MACAddress=xx:xx:xx:xx:xx:xx}} to the {{ic|[NetDev]}} section above.}}<br />
<br />
[[Restart]] {{ic|systemd-networkd.service}} to have systemd create the bridge.<br />
<br />
To see the newly created bridge on the host and on the container, type:<br />
<br />
{{hc|$ ip a|<br />
3: br0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default <br />
link/ether ae:bd:35:ea:0c:c9 brd ff:ff:ff:ff:ff:ff<br />
}}<br />
<br />
Note that the interface ''br0'' is listed but is still DOWN at this stage.<br />
<br />
==== Bind Ethernet to bridge ====<br />
<br />
The next step is to add a network interface to the newly created bridge. In the example below, any interface matching the name ''en*'' is added to the bridge ''br0''.<br />
<br />
{{hc|/etc/systemd/network/''bind''.network|2=<br />
[Match]<br />
Name=en*<br />
<br />
[Network]<br />
Bridge=br0<br />
}}<br />
<br />
The Ethernet interface must not have DHCP or an IP address configured, as the bridge requires the bound interface to have no IP configuration: modify the corresponding {{ic|/etc/systemd/network/''MyEth''.network}} to remove the addressing.<br />
<br />
==== Bridge network ====<br />
<br />
Now that the bridge has been created and bound to an existing network interface, the IP configuration of the bridge interface must be specified. This is defined in a third ''.network'' file; the example below uses DHCP.<br />
<br />
{{hc|/etc/systemd/network/''mybridge''.network|2=<br />
[Match]<br />
Name=br0<br />
<br />
[Network]<br />
DHCP=ipv4}}<br />
<br />
==== Configure the container ====<br />
<br />
Use the {{ic|1=--network-bridge=br0}} option when starting the container. See [[systemd-nspawn#Use a network bridge]] for details.<br />
<br />
==== Result ====<br />
<br />
* on host<br />
<br />
{{hc|$ ip a|<br />
3: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default <br />
link/ether 14:da:e9:b5:7a:88 brd ff:ff:ff:ff:ff:ff<br />
inet 192.168.1.87/24 brd 192.168.1.255 scope global br0<br />
valid_lft forever preferred_lft forever<br />
inet6 fe80::16da:e9ff:feb5:7a88/64 scope link <br />
valid_lft forever preferred_lft forever<br />
6: vb-''MyContainer'': <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br0 state UP group default qlen 1000<br />
link/ether d2:7c:97:97:37:25 brd ff:ff:ff:ff:ff:ff<br />
inet6 fe80::d07c:97ff:fe97:3725/64 scope link <br />
valid_lft forever preferred_lft forever<br />
}}<br />
<br />
* on container<br />
<br />
{{hc|$ ip a|<br />
2: host0: <BROADCAST,MULTICAST,ALLMULTI,AUTOMEDIA,NOTRAILERS,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000<br />
link/ether 5e:96:85:83:a8:5d brd ff:ff:ff:ff:ff:ff<br />
inet 192.168.1.73/24 brd 192.168.1.255 scope global host0<br />
valid_lft forever preferred_lft forever<br />
inet6 fe80::5c96:85ff:fe83:a85d/64 scope link <br />
valid_lft forever preferred_lft forever<br />
}}<br />
<br />
==== Notice ====<br />
<br />
* we have now one IP address for {{ic|br0}} on the host, and one for {{ic|host0}} in the container<br />
* two new interfaces have appeared: {{ic|vb-''MyContainer''}} on the host and {{ic|host0}} in the container. This is a result of the {{ic|1=--network-bridge=br0}} option, as explained in [[systemd-nspawn#Use a network bridge]].<br />
* the DHCP address on {{ic|host0}} comes from the system {{ic|/usr/lib/systemd/network/80-container-host0.network}} file.<br />
* on host<br />
<br />
{{Out of date|''brctl'' is deprecated, use {{ic|bridge link}}. See [[Network bridge#With iproute2]].}}<br />
{{hc|$ brctl show|<br />
bridge name bridge id STP enabled interfaces<br />
br0 8000.14dae9b57a88 no enp7s0<br />
vb-''MyContainer''<br />
}}<br />
<br />
The above command output confirms that the bridge has two interfaces bound to it.<br />
<br />
* on host<br />
<br />
{{hc|$ ip route|<br />
default via 192.168.1.254 dev br0 <br />
192.168.1.0/24 dev br0 proto kernel scope link src 192.168.1.87<br />
}}<br />
<br />
* on container<br />
<br />
{{hc|$ ip route|<br />
default via 192.168.1.254 dev host0 <br />
192.168.1.0/24 dev host0 proto kernel scope link src 192.168.1.73<br />
}}<br />
<br />
The above command outputs confirm that the {{ic|br0}} and {{ic|host0}} interfaces are active, each with an IP address and the gateway 192.168.1.254. The gateway address was obtained automatically by ''systemd-networkd''.<br />
<br />
=== Network bridge with static IP addresses ===<br />
<br />
Setting a static IP address for each device can be helpful when deploying network services (e.g. FTP, HTTP, SSH). Each device will keep the same MAC address across reboots if the system {{ic|/usr/lib/systemd/network/99-default.link}} file has the {{ic|1=MACAddressPolicy=persistent}} option (it does by default). This makes it easy to route any service on your gateway to the desired device.<br />
<br />
The following configuration needs to be done for this setup:<br />
<br />
* on host <br />
<br />
The configuration is very similar to the [[#Network bridge with DHCP]] section. First, a virtual bridge interface needs to be created and the main physical interface needs to be bound to it. This task can be accomplished with the following two files, with contents equal to those available in the DHCP section.<br />
<br />
/etc/systemd/network/''MyBridge''.netdev<br />
/etc/systemd/network/''MyEth''.network<br />
<br />
Next, you need to configure the IP and DNS of the newly created virtual bridge interface. For example:<br />
<br />
{{hc|/etc/systemd/network/''MyBridge''.network|<nowiki><br />
[Match]<br />
Name=br0<br />
<br />
[Network]<br />
DNS=192.168.1.254<br />
Address=192.168.1.87/24<br />
Gateway=192.168.1.254<br />
</nowiki>}}<br />
<br />
* on container<br />
<br />
To configure a static IP address on the container, we need to override the system {{ic|/usr/lib/systemd/network/80-container-host0.network}} file, which provides a DHCP configuration for the {{ic|host0}} network interface of the container. This can be done by placing the configuration into {{ic|/etc/systemd/network/80-container-host0.network}}. For example:<br />
<br />
{{hc|/etc/systemd/network/80-container-host0.network|2=<br />
[Match]<br />
Name=host0<br />
<br />
[Network]<br />
DNS=192.168.1.254<br />
Address=192.168.1.94/24<br />
Gateway=192.168.1.254<br />
}}<br />
<br />
Make sure that {{ic|systemd-networkd.service}} is [[enabled]] in the container.<br />
<br />
== Tips and tricks ==<br />
<br />
=== Interface and desktop integration ===<br />
<br />
''systemd-networkd'' does not have a proper interactive management interface, either via the [[command-line shell]] or a graphical one.<br />
<br />
Still, some tools are available to either display the current state of the network, receive notifications or interact with the wireless configuration:<br />
<br />
* ''networkctl'' (via CLI) offers a simple dump of the network interface states.<br />
* When ''networkd'' is configured with [[wpa_supplicant]], both ''wpa_cli'' and ''wpa_gui'' offer the ability to associate and configure WLAN interfaces dynamically.<br />
* {{AUR|networkd-notify-git}} can generate simple notifications in response to network interface state changes (such as connection/disconnection and re-association).<br />
* The {{AUR|networkd-dispatcher}} daemon allows executing scripts in response to network interface state changes, similar to ''NetworkManager-dispatcher''.<br />
* As for the DNS resolver ''systemd-resolved'', information about current DNS servers can be visualized with {{ic|resolvectl status}}.<br />
<br />
=== Configuring static IP or DHCP based on SSID (location) ===<br />
<br />
A common situation is that your home wireless network uses DHCP while your office wireless network uses a static IP address. This mixed setup can be configured as follows:<br />
<br />
{{Note|The number in the file name determines the order in which the files are processed. You can match based on SSID, BSSID, or both.}}<br />
<br />
{{hc|/etc/systemd/network/24-wireless-office.network|<nowiki><br />
# special configuration for office WiFi network<br />
[Match]<br />
Name=wlp2s0<br />
SSID=office_ap_name<br />
#BSSID=aa:bb:cc:dd:ee:ff<br />
<br />
[Network]<br />
Address=10.1.10.9/24<br />
Gateway=10.1.10.1<br />
DNS=10.1.10.1<br />
#DNS=8.8.8.8<br />
</nowiki>}}<br />
<br />
{{hc|/etc/systemd/network/25-wireless-dhcp.network|<nowiki><br />
# use DHCP for any other WiFi network<br />
[Match]<br />
Name=wlp2s0<br />
<br />
[Network]<br />
DHCP=ipv4<br />
</nowiki>}}<br />
<br />
=== Bonding a wired and wireless interface ===<br />
<br />
See also [[Wireless bonding]].<br />
<br />
Bonding allows connection sharing through multiple interfaces, so if e.g. the wired interface is unplugged, the wireless is still connected and the network connectivity remains up seamlessly.<br />
<br />
Create a bond interface. In this case the mode is ''active-backup'', which means packets are routed through a secondary interface if the primary interface goes down.<br />
<br />
{{hc|/etc/systemd/network/30-bond0.netdev|<nowiki><br />
[NetDev]<br />
Name=bond0<br />
Kind=bond<br />
<br />
[Bond]<br />
Mode=active-backup<br />
PrimaryReselectPolicy=always<br />
MIIMonitorSec=1s<br />
</nowiki>}}<br />
<br />
Set the wired interface as the primary:<br />
<br />
{{hc|/etc/systemd/network/30-ethernet-bond0.network|<nowiki><br />
[Match]<br />
Name=enp0s25<br />
<br />
[Network]<br />
Bond=bond0<br />
PrimarySlave=true<br />
</nowiki>}}<br />
<br />
Set the wireless as the secondary:<br />
<br />
{{hc|/etc/systemd/network/30-wifi-bond0.network|<nowiki><br />
[Match]<br />
Name=wlan0<br />
<br />
[Network]<br />
Bond=bond0<br />
</nowiki>}}<br />
<br />
Configure the bond interface as you would a normal interface:<br />
<br />
{{hc|/etc/systemd/network/30-bond0.network|<nowiki><br />
[Match]<br />
Name=bond0<br />
<br />
[Network]<br />
DHCP=ipv4<br />
</nowiki>}}<br />
<br />
Now if the wired network is unplugged, the connection should remain through the wireless:<br />
<br />
{{hc|$ networkctl|<nowiki><br />
IDX LINK TYPE OPERATIONAL SETUP <br />
1 lo loopback carrier unmanaged <br />
2 enp0s25 ether no-carrier configured<br />
3 bond0 bond degraded-carrier configured<br />
5 wlan0 wlan enslaved configured<br />
<br />
4 links listed.<br />
</nowiki>}}<br />
<br />
=== Speeding up TCP slow-start ===<br />
<br />
On a higher bandwidth link with moderate latency (typically a home Internet connection above 10 Mbit/s), the default settings for the TCP Slow Start algorithm are somewhat conservative. This manifests as downloads starting slowly and taking several seconds to speed up before reaching the connection's full bandwidth. It is particularly noticeable during a pacman upgrade, where each downloaded package starts off slowly and often finishes before it has reached the connection's full speed.<br />
<br />
These settings can be adjusted to make TCP connections start with larger window sizes than the defaults, avoiding the time it takes for them to automatically increase on each new TCP connection [https://www.cdnplanet.com/blog/tune-tcp-initcwnd-for-optimum-performance/]. While this will usually decrease performance on slow connections (or if the values are increased too far) due to having to retransmit a larger number of lost packets, it can substantially increase performance on connections with sufficient bandwidth.<br />
<br />
It is important to benchmark before and after changing these values to ensure it is improving network speed and not reducing it. If you are not seeing downloads begin slowly and gradually speed up, then there is no need to change these values as they are already optimal for your connection speed. When benchmarking, be sure to test against both a high speed and low speed remote server to ensure you are not speeding up access to fast machines at the expense of making access to slow servers even slower.<br />
<br />
To adjust these values, edit the ''.network'' file for the connection:<br />
<br />
{{hc|/etc/systemd/network/eth0.network|2=<br />
[Match]<br />
Name=eth0<br />
<br />
#[Network]<br />
#Gateway=... <-- Remove this if you have it, and put it in the Gateway= line below<br />
<br />
[Route]<br />
# This will apply to the gateway supplied via DHCP. If you manually specify<br />
# your gateway, put it here instead.<br />
Gateway=_dhcp4<br />
<br />
# The default for these values is 10. They are a multiple of the MSS (1460 bytes).<br />
InitialCongestionWindow=10<br />
InitialAdvertisedReceiveWindow=10<br />
}}<br />
<br />
The defaults of {{ic|10}} work well for connections slower than 10 Mbit/s. For a 100 Mbit/s connection, a value of {{ic|30}} works well. The manual page {{man|5|systemd.network|[ROUTE] SECTION OPTIONS}} says a value of {{ic|100}} is considered excessive.<br />
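<br />
After restarting ''systemd-networkd'', the applied values can be verified in the routing table, where they appear as {{ic|initcwnd}} and {{ic|initrwnd}} on the affected route:<br />
<br />
$ ip route show dev ''eth0''<br />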
<br />
If the [[sysctl]] setting {{ic|net.ipv4.tcp_slow_start_after_idle}} is enabled then the connection will return to these initial settings after it has been idle for some time (and often a very small amount of time). If this setting is disabled then the connection will maintain a higher window if a larger one was negotiated during packet transfer. Regardless of the setting, each new TCP connection will begin with the {{ic|Initial*}} settings set above.<br />
<br />
The sysctl setting {{ic|net.ipv4.tcp_congestion_control}} is not directly related to these values, as it controls how the congestion and receive windows are adjusted while a TCP link is active, and particularly when the path between the two hosts is congested and throughput must be reduced. The above {{ic|Initial*}} values simply set the default window values selected for each new connection, before any congestion algorithm takes over and adjusts them as needed. Setting higher initial values simply shortcuts some negotiation while the congestion algorithm tries to find the optimum values (or, conversely, setting the wrong initial values adds additional negotiation time while the congestion algorithm works towards correcting them, slowing down each newly established TCP connection for a few seconds extra).<br />
<br />
== See also ==<br />
<br />
* {{man|8|systemd-networkd}}<br />
* [https://web.archive.org/web/20201111213850/https://coreos.com/blog/intro-to-systemd-networkd/ Tom Gundersen posts on Core OS blog]<br />
* [https://bbs.archlinux.org/viewtopic.php?pid=1393759#p1393759 How to set up systemd-networkd with wpa_supplicant] (WonderWoofy's walkthrough on Arch forums)</div>Cmsiglerhttps://wiki.archlinux.org/index.php?title=WireGuard&diff=717552WireGuard2022-02-09T19:00:27Z<p>Cmsigler: Fixed name of sample wg config file under Low MTU section -- it should not appear to be a file located in the root of the filesystem</p>
<hr />
<div>[[Category:Virtual Private Network]]<br />
[[ja:WireGuard]]<br />
[[zh-hans:WireGuard]]<br />
From the [https://www.wireguard.com/ WireGuard] project homepage:<br />
:WireGuard is an extremely simple yet fast and modern VPN that utilizes state-of-the-art cryptography. It aims to be faster, simpler, leaner, and more useful than IPsec, while avoiding the massive headache. It intends to be considerably more performant than OpenVPN. WireGuard is designed as a general purpose VPN for running on embedded interfaces and super computers alike, fit for many different circumstances. Initially released for the Linux kernel, it is now cross-platform (Windows, macOS, BSD, iOS, Android) and widely deployable.<br />
<br />
A rough introduction to the main concepts used in this article can be found on the [https://www.wireguard.com/ WireGuard] project homepage. WireGuard has been included in the Linux kernel since March 2020.<br />
<br />
== Installation ==<br />
<br />
[[Install]] the {{Pkg|wireguard-tools}} package for userspace utilities.<br />
<br />
Alternatively, various network managers provide support for WireGuard, provided that peer keys are available. See [[#Persistent configuration]] for details.<br />
<br />
=== Graphical clients ===<br />
<br />
* {{App|Qomui|OpenVPN GUI with advanced features and support for multiple providers.|https://github.com/corrad1nho/qomui|{{AUR|qomui}}}}<br />
<br />
== Usage ==<br />
<br />
{{Style|Useless section name – everything on this page is about WireGuard usage. Moving the 4 subsections to the top level would make sense.}}<br />
<br />
The commands below demonstrate how to set up a basic tunnel between two or more peers with the following settings:<br />
<br />
{| class="wikitable"<br />
! rowspan="2" |<br />
! colspan="3" | External (public) addresses<br />
! colspan="2" | Internal IP addresses<br />
! rowspan="2" | Port<br />
|-<br />
! Domain name<br />
! IPv4 address<br />
! IPv6 address<br />
! IPv4 address<br />
! IPv6 address<br />
|-<br />
! Peer A<br />
| <br />
| 198.51.100.101<br />
| 2001:db8:a85b:70a:ffd4:ec1b:4650:a001<br />
| 10.0.0.1/24<br />
| fdc9:281f:04d7:9ee9::1/64<br />
| UDP/51871<br />
|-<br />
! Peer B<br />
| peer-b.example<br />
| 203.0.113.102<br />
| 2001:db8:40f0:147a:80ad:3e88:f8e9:b002<br />
| 10.0.0.2/24<br />
| fdc9:281f:04d7:9ee9::2/64<br />
| UDP/51902<br />
|-<br />
! Peer C<br />
| <br />
| ''dynamic''<br />
| ''dynamic''<br />
| 10.0.0.3/24<br />
| fdc9:281f:04d7:9ee9::3/64<br />
| UDP/51993<br />
|}<br />
<br />
{{Tip|The same UDP port can be used for all peers.}}<br />
<br />
The external addresses should already exist. For example, if ICMP echo requests are not blocked, peer A should be able to [[ping]] peer B via its public IP address(es) and vice versa.<br />
<br />
The internal addresses will be new addresses, created either manually using the {{man|8|ip}} utility or by network management software, which will be used internally within the new WireGuard network. The following examples will use 10.0.0.0/24 and fdc9:281f:04d7:9ee9::/64 as the internal networks. The {{ic|/24}} and {{ic|/64}} in the IP addresses are the [[Wikipedia:Classless Inter-Domain Routing#CIDR notation|CIDR]] prefix lengths.<br />
<br />
=== Key generation ===<br />
<br />
Create a private and public key for each peer. If connecting dozens of peers, optionally consider a vanity keypair to personalize the Base64-encoded public key string. See [[#Vanity keys]].<br />
<br />
To create a private key run:<br />
<br />
$ (umask 0077; wg genkey > peer_A.key)<br />
<br />
{{Note|It is recommended to only allow reading and writing access for the owner. The above alters the [[umask]] temporarily within a sub-shell to ensure that access (read/write permissions) is restricted to the owner.}}<br />
<br />
To create a public key:<br />
<br />
$ wg pubkey < peer_A.key > peer_A.pub<br />
<br />
Alternatively, do this all at once:<br />
<br />
$ wg genkey | (umask 0077 && tee peer_A.key) | wg pubkey > peer_A.pub<br />
<br />
One can also generate a pre-shared key to add an additional layer of symmetric-key cryptography to be mixed into the already existing public-key cryptography, for post-quantum resistance. A pre-shared key should be generated for each peer pair and should not be reused. For example, three interconnected peers A, B, and C will need three separate pre-shared keys, one for each peer pair.<br />
<br />
Generate a pre-shared key for each peer pair using the following command:<br />
<br />
$ wg genpsk > peer_A-peer_B.psk<br />
$ wg genpsk > peer_A-peer_C.psk<br />
$ wg genpsk > peer_B-peer_C.psk<br />
<br />
==== Vanity keys ====<br />
<br />
Currently, WireGuard does not support comments or attaching human-memorable names to keys. This makes identifying a key's owner difficult, particularly when multiple keys are in use. One solution is to generate a public key that contains some familiar characters (perhaps the first few letters of the owner's name or of the hostname); {{AUR|wireguard-vanity-address}} does this.<br />
<br />
=== Manual configuration ===<br />
<br />
==== Peer setup ====<br />
<br />
Manual setup is accomplished by using {{man|8|ip}} and {{man|8|wg}}.<br />
<br />
{{Style|These examples use the pre-shared keys which were introduced as ''optional'' in [[#Key generation]].}}<br />
<br />
'''Peer A setup:'''<br />
<br />
In this example peer A will listen on UDP port 51871 and will accept connections from peers B and C.<br />
<br />
# ip link add dev wg0 type wireguard<br />
# ip addr add 10.0.0.1/24 dev wg0<br />
# ip addr add fdc9:281f:04d7:9ee9::1/64 dev wg0<br />
# wg set wg0 listen-port 51871 private-key ''/path/to/''peer_A.key<br />
# wg set wg0 peer ''PEER_B_PUBLIC_KEY'' preshared-key ''/path/to/''peer_A-peer_B.psk endpoint peer-b.example:51902 allowed-ips 10.0.0.2/32,fdc9:281f:04d7:9ee9::2/128<br />
# wg set wg0 peer ''PEER_C_PUBLIC_KEY'' preshared-key ''/path/to/''peer_A-peer_C.psk allowed-ips 10.0.0.3/32,fdc9:281f:04d7:9ee9::3/128<br />
# ip link set wg0 up<br />
<br />
{{ic|''PEER_X_PUBLIC_KEY''}} should be the contents of {{ic|1=''peer_X''.pub}}.<br />
<br />
The keyword {{ic|allowed-ips}} is a list of addresses that will get routed to the peer. Make sure to specify at least one address range that contains the WireGuard connection's internal IP address(es).<br />
<br />
'''Peer B setup:'''<br />
<br />
# ip link add dev wg0 type wireguard<br />
# ip addr add 10.0.0.2/24 dev wg0<br />
# ip addr add fdc9:281f:04d7:9ee9::2/64 dev wg0<br />
# wg set wg0 listen-port 51902 private-key ''/path/to/''peer_B.key<br />
# wg set wg0 peer ''PEER_A_PUBLIC_KEY'' preshared-key ''/path/to/''peer_A-peer_B.psk endpoint 198.51.100.101:51871 allowed-ips 10.0.0.1/32,fdc9:281f:04d7:9ee9::1/128<br />
# wg set wg0 peer ''PEER_C_PUBLIC_KEY'' preshared-key ''/path/to/''peer_B-peer_C.psk allowed-ips 10.0.0.3/32,fdc9:281f:04d7:9ee9::3/128<br />
# ip link set wg0 up<br />
<br />
'''Peer C setup:'''<br />
<br />
# ip link add dev wg0 type wireguard<br />
# ip addr add 10.0.0.3/24 dev wg0<br />
# ip addr add fdc9:281f:04d7:9ee9::3/64 dev wg0<br />
# wg set wg0 listen-port 51993 private-key ''/path/to/''peer_C.key<br />
# wg set wg0 peer ''PEER_A_PUBLIC_KEY'' preshared-key ''/path/to/''peer_A-peer_C.psk endpoint 198.51.100.101:51871 allowed-ips 10.0.0.1/32,fdc9:281f:04d7:9ee9::1/128<br />
# wg set wg0 peer ''PEER_B_PUBLIC_KEY'' preshared-key ''/path/to/''peer_B-peer_C.psk endpoint peer-b.example:51902 allowed-ips 10.0.0.2/32,fdc9:281f:04d7:9ee9::2/128<br />
# ip link set wg0 up<br />
<br />
==== Additional routes ====<br />
<br />
To establish connections more complicated than point-to-point, additional setup is necessary.<br />
<br />
{{Expansion|Add a scenario: only peer A has a public IP address (i.e. ''endpoint''), peers B and C (which are generally behind a NAT) connect to peer A with {{ic|PersistentKeepalive}}, connections from peer B to peer C and vice versa are routed via peer A. Configuration: peers B and C have {{ic|10.0.0.0/24}} in {{ic|AllowedIPs}} for peer A, peer A must enable packet forwarding and masquerading via firewall rules, e.g. {{ic|iptables -A FORWARD -i wg+ -j ACCEPT}} and {{ic|iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o wg0 -j MASQUERADE}}.}}<br />
<br />
===== Point-to-site =====<br />
<br />
To access the network of a peer, specify the network subnet(s) in {{ic|allowed-ips}} in the configuration of the peers who should be able to connect to it. E.g. {{ic|allowed-ips 10.0.0.2/32,fdc9:281f:04d7:9ee9::2/128,'''192.168.35.0/24,fd7b:d0bd:7a6e::/64'''}}.<br />
<br />
Make sure to also set up the [[Network configuration#Routing table|routing table]] with {{man|8|ip-route}}. E.g.:<br />
<br />
# ip route add 192.168.35.0/24 dev wg0<br />
# ip route add fd7b:d0bd:7a6e::/64 dev wg0<br />
<br />
===== Site-to-point =====<br />
<br />
{{Expansion|Add {{ic|ip route}} examples; add alternative using NAT; mention the situation when the ''site''-peer is the network's gateway.}}<br />
<br />
If the intent is to connect a device to a network via WireGuard peer(s), set up routes on each device in that network so that it knows the WireGuard peer(s) are reachable through the device running WireGuard.<br />
<br />
{{Tip|Deploy routes network-wide by configuring them in the router.}}<br />
<br />
Enable IP forwarding on the peer through which other devices on the network will connect to WireGuard peer(s):<br />
<br />
# sysctl -w net.ipv4.ip_forward=1<br />
# sysctl -w net.ipv6.conf.all.forwarding=1<br />
<br />
{{Warning|Enabling IP forwarding without a properly configured [[firewall]] is a security risk.}}<br />
<br />
See [[sysctl#Configuration]] for instructions on how to set the ''sysctl'' parameters on boot.<br />
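<br />
For example, both parameters can be made persistent with a drop-in file; the file name below is an arbitrary choice.<br />
<br />
```
# /etc/sysctl.d/99-wireguard-forwarding.conf
net.ipv4.ip_forward = 1
net.ipv6.conf.all.forwarding = 1
```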
<br />
===== Site-to-site =====<br />
<br />
To connect two (or more) networks, apply both [[#Point-to-site]] and [[#Site-to-point]] on all ''sites''.<br />
<br />
===== Routing all traffic over WireGuard =====<br />
<br />
{{Expansion|Add instructions on how to ''route everything over VPN''.[https://www.wireguard.com/netns/] There is [[#systemd-networkd: routing all traffic over WireGuard]] already.}}<br />
<br />
==== DNS ====<br />
<br />
To use a peer as a DNS server, add its WireGuard tunnel IP address(es) to [[:/etc/resolv.conf]]. For example, to use peer B as the DNS server:<br />
<br />
{{hc|/etc/resolv.conf|<br />
nameserver fdc9:281f:04d7:9ee9::2<br />
nameserver 10.0.0.2<br />
}}<br />
<br />
{{Note|If a peer will act as a DNS server, make sure to use its WireGuard tunnel address(es) as the DNS server address(es) instead of another of its addresses from allowed IPs. Otherwise DNS lookups may fail.}}<br />
<br />
=== Basic checkups ===<br />
<br />
Invoking the {{man|8|wg}} command without parameters will give a quick overview of the current configuration.<br />
<br />
As an example, when peer A has been configured, we are able to see its identity and its associated peers:<br />
<br />
{{hc|# wg|2=<br />
interface: wg0<br />
public key: UguPyBThx/+xMXeTbRYkKlP0Wh/QZT3vTLPOVaaXTD8=<br />
private key: (hidden)<br />
listening port: 51871<br />
<br />
peer: 9jalV3EEBnVXahro0pRMQ+cHlmjE33Slo9tddzCVtCw=<br />
endpoint: 203.0.113.102:51902<br />
allowed ips: 10.0.0.2/32, fdc9:281f:04d7:9ee9::2<br />
<br />
peer: 2RzKFbGMx5g7fG0BrWCI7JIpGvcwGkqUaCoENYueJw4=<br />
endpoint: 192.0.2.103:51993<br />
allowed ips: 10.0.0.3/32, fdc9:281f:04d7:9ee9::3<br />
}}<br />
<br />
At this point the tunnel should be operational. If the peers do not block ICMP echo requests, try [[ping]]ing a peer to test the connection between them.<br />
<br />
Using ICMPv4:<br />
<br />
$ ping 10.0.0.2<br />
<br />
Using ICMPv6:<br />
<br />
$ ping fdc9:281f:04d7:9ee9::2<br />
<br />
After transferring some data between peers, the {{ic|wg}} utility will show additional information:<br />
<br />
{{hc|# wg|2=<br />
interface: wg0<br />
public key: UguPyBThx/+xMXeTbRYkKlP0Wh/QZT3vTLPOVaaXTD8=<br />
private key: (hidden)<br />
listening port: 51871<br />
<br />
peer: 9jalV3EEBnVXahro0pRMQ+cHlmjE33Slo9tddzCVtCw=<br />
endpoint: 203.0.113.102:51902<br />
allowed ips: 10.0.0.2/32, fdc9:281f:04d7:9ee9::2<br />
latest handshake: 5 seconds ago<br />
transfer: 1.24 KiB received, 1.38 KiB sent<br />
<br />
peer: 2RzKFbGMx5g7fG0BrWCI7JIpGvcwGkqUaCoENYueJw4=<br />
allowed ips: 10.0.0.3/32, fdc9:281f:04d7:9ee9::3<br />
}}<br />
<br />
=== Persistent configuration ===<br />
<br />
Persistent configuration can be achieved using {{ic|wg-quick@.service}}, which is shipped with {{Pkg|wireguard-tools}}, or using a network manager. Network managers that support WireGuard are [[systemd-networkd]], [[netctl]][https://gitlab.archlinux.org/archlinux/netctl/blob/master/docs/examples/wireguard], [[NetworkManager]] and [[ConnMan]][https://git.kernel.org/pub/scm/network/connman/connman.git/tree/doc/vpn-config-format.txt].<br />
<br />
{{Note|1=<nowiki></nowiki><br />
* [[netctl]] relies on {{man|8|wg}} from {{Pkg|wireguard-tools}} and {{ic|/etc/wireguard/''interfacename''.conf}} configuration files for establishing WireGuard connections.<br />
* [[ConnMan]] has very limited support for WireGuard. It can connect to only one peer.[https://git.kernel.org/pub/scm/network/connman/connman.git/commit/?id=95b25140bec7c4d9b6ae4e479dc1b94b7d409b39]<br />
}}<br />
<br />
==== wg-quick ====<br />
<br />
{{man|8|wg-quick}} configures WireGuard tunnels using configuration files from {{ic|/etc/wireguard/''interfacename''.conf}}.<br />
<br />
The current WireGuard configuration can be saved by using the {{man|8|wg}} utility's {{ic|showconf}} command. For example:<br />
<br />
# wg showconf wg0 > /etc/wireguard/wg0.conf<br />
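<br />
Note that the dumped file contains the interface's private key, so it must not be world-readable. A minimal sketch of how a restrictive {{ic|umask}} achieves this; the temporary path below is only a stand-in for the real {{ic|wg showconf}} redirection above.<br />
<br />
```shell
# Files created under umask 077 get mode 600 (owner read/write only),
# so the private key in the dump is not exposed to other users.
rm -f /tmp/wg0.conf.demo
(
    umask 077
    # stand-in for: wg showconf wg0 > /etc/wireguard/wg0.conf
    : > /tmp/wg0.conf.demo
)
stat -c %a /tmp/wg0.conf.demo    # prints 600
```
<br />
Running the real {{ic|wg showconf}} redirection inside the same subshell yields a mode 600 configuration file.<br />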
<br />
To start a tunnel with a configuration file, use<br />
<br />
# wg-quick up ''interfacename''<br />
<br />
or use the systemd service—{{ic|wg-quick@''interfacename''.service}}. To start the tunnel at boot, [[enable]] the unit.<br />
<br />
{{Note|<br />
* Users configuring the WireGuard interface using ''wg-quick'' should make sure that no other [[network management]] software tries to manage it. To use [[NetworkManager]] without letting it configure WireGuard interfaces, see [[#Routes are periodically reset]].<br />
* ''wg-quick'' adds additional configuration options to the configuration file format thus making it incompatible with {{man|8|wg|CONFIGURATION FILE FORMAT}}. See the {{man|8|wg-quick|CONFIGURATION}} man page for the configuration values in question. A ''wg''-compatible configuration file can be produced by using {{ic|wg-quick strip}}.<br />
* ''wg-quick'' does not provide a way to instruct [[resolvconf]] to set the WireGuard interface as ''private''. Even if there are search domains specified, all DNS queries from the system, not just those that match the search domains, will be sent to the DNS servers which are set in the WireGuard configuration.<br />
}}<br />
<br />
'''Peer A setup:'''<br />
<br />
{{hc|1=/etc/wireguard/wg0.conf|2=<br />
[Interface]<br />
Address = 10.0.0.1/24, fdc9:281f:04d7:9ee9::1/64<br />
ListenPort = 51871<br />
PrivateKey = ''PEER_A_PRIVATE_KEY''<br />
<br />
[Peer]<br />
PublicKey = ''PEER_B_PUBLIC_KEY''<br />
PresharedKey = ''PEER_A-PEER_B-PRESHARED_KEY''<br />
AllowedIPs = 10.0.0.2/32, fdc9:281f:04d7:9ee9::2/128<br />
Endpoint = peer-b.example:51902<br />
<br />
[Peer]<br />
PublicKey = ''PEER_C_PUBLIC_KEY''<br />
PresharedKey = ''PEER_A-PEER_C-PRESHARED_KEY''<br />
AllowedIPs = 10.0.0.3/32, fdc9:281f:04d7:9ee9::3/128<br />
}}<br />
<br />
* To ''route all traffic'' through the tunnel to a specific peer, add the [[Wikipedia:Default route|default route]] ({{ic|0.0.0.0/0}} for IPv4 and {{ic|::/0}} for IPv6) to {{ic|AllowedIPs}}. E.g. {{ic|1=AllowedIPs = 0.0.0.0/0, ::/0}}. wg-quick will automatically take care of setting up correct routing and fwmark[https://www.wireguard.com/netns/#routing-all-your-traffic] so that networking still functions.<br />
* To use a peer as a DNS server, set {{ic|1=DNS = ''wireguard_internal_ip_address_of_peer''}} in the {{ic|[Interface]}} section. [[Wikipedia:Search domain|Search domains]] are also set with the {{ic|1=DNS = }} option. Separate all values in the list with commas.<br />
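<br />
For illustration, a client-style profile combining both options could look like the following; all values are placeholders, and the peer at {{ic|10.0.0.1}} is assumed to run a DNS server.<br />
<br />
```
[Interface]
Address = 10.0.0.2/32
PrivateKey = PEER_B_PRIVATE_KEY
DNS = 10.0.0.1

[Peer]
PublicKey = PEER_A_PUBLIC_KEY
Endpoint = 198.51.100.101:51871
AllowedIPs = 0.0.0.0/0, ::/0
```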
<br />
'''Peer B setup:'''<br />
<br />
{{hc|1=/etc/wireguard/wg0.conf|2=<br />
[Interface]<br />
Address = 10.0.0.2/24, fdc9:281f:04d7:9ee9::2/64<br />
ListenPort = 51902<br />
PrivateKey = ''PEER_B_PRIVATE_KEY''<br />
<br />
[Peer]<br />
PublicKey = ''PEER_A_PUBLIC_KEY''<br />
PresharedKey = ''PEER_A-PEER_B-PRESHARED_KEY''<br />
AllowedIPs = 10.0.0.1/32, fdc9:281f:04d7:9ee9::1/128<br />
Endpoint = 198.51.100.101:51871<br />
<br />
[Peer]<br />
PublicKey = ''PEER_C_PUBLIC_KEY''<br />
PresharedKey = ''PEER_B-PEER_C-PRESHARED_KEY''<br />
AllowedIPs = 10.0.0.3/32, fdc9:281f:04d7:9ee9::3/128<br />
}}<br />
<br />
'''Peer C setup:'''<br />
<br />
{{hc|1=/etc/wireguard/wg0.conf|2=<br />
[Interface]<br />
Address = 10.0.0.3/24, fdc9:281f:04d7:9ee9::3/64<br />
ListenPort = 51993<br />
PrivateKey = ''PEER_C_PRIVATE_KEY''<br />
<br />
[Peer]<br />
PublicKey = ''PEER_A_PUBLIC_KEY''<br />
PresharedKey = ''PEER_A-PEER_C-PRESHARED_KEY''<br />
AllowedIPs = 10.0.0.1/32, fdc9:281f:04d7:9ee9::1/128<br />
Endpoint = 198.51.100.101:51871<br />
<br />
[Peer]<br />
PublicKey = ''PEER_B_PUBLIC_KEY''<br />
PresharedKey = ''PEER_B-PEER_C-PRESHARED_KEY''<br />
AllowedIPs = 10.0.0.2/32, fdc9:281f:04d7:9ee9::2/128<br />
Endpoint = peer-b.example:51902<br />
}}<br />
<br />
==== systemd-networkd ====<br />
<br />
[[systemd-networkd]] has native support for setting up WireGuard interfaces. An example is provided in the {{man|5|systemd.netdev|EXAMPLES}} man page.<br />
<br />
{{Note|Routing all DNS over WireGuard (i.e. {{ic|1=Domains=~.}}) will prevent the DNS resolution of endpoints.}}<br />
<br />
'''Peer A setup:'''<br />
<br />
{{hc|/etc/systemd/network/99-wg0.netdev|2=<br />
[NetDev]<br />
Name=wg0<br />
Kind=wireguard<br />
Description=WireGuard tunnel wg0<br />
<br />
[WireGuard]<br />
ListenPort=51871<br />
PrivateKey=''PEER_A_PRIVATE_KEY''<br />
<br />
[WireGuardPeer]<br />
PublicKey=''PEER_B_PUBLIC_KEY''<br />
PresharedKey=''PEER_A-PEER_B-PRESHARED_KEY''<br />
AllowedIPs=10.0.0.2/32<br />
AllowedIPs=fdc9:281f:04d7:9ee9::2/128<br />
Endpoint=peer-b.example:51902<br />
<br />
[WireGuardPeer]<br />
PublicKey=''PEER_C_PUBLIC_KEY''<br />
PresharedKey=''PEER_A-PEER_C-PRESHARED_KEY''<br />
AllowedIPs=10.0.0.3/32<br />
AllowedIPs=fdc9:281f:04d7:9ee9::3/128<br />
}}<br />
<br />
{{hc|/etc/systemd/network/99-wg0.network|2=<br />
[Match]<br />
Name=wg0<br />
<br />
[Network]<br />
Address=10.0.0.1/24<br />
Address=fdc9:281f:04d7:9ee9::1/64<br />
}}<br />
<br />
* To use a peer as a DNS server, specify its WireGuard tunnel's IP address(es) in the ''.network'' file using the {{ic|1=DNS=}} option. For [[Wikipedia:Search domain|search domains]] use the {{ic|1=Domains=}} option. See {{man|5|systemd.network|[NETWORK] SECTION OPTIONS}} for details.<br />
* To use a peer as the '''only''' DNS server, set {{ic|1=DNSDefaultRoute=true}} in the ''.network'' file's {{ic|[Network]}} section and add {{ic|~.}} to the {{ic|1=Domains=}} option.<br />
* To route additional subnets, add them as {{ic|[Route]}} sections in the ''.network'' file. For example:<br />
<br />
{{hc|/etc/systemd/network/99-wg0.network|2=<br />
...<br />
[Route]<br />
Destination=192.168.35.0/24<br />
Scope=link<br />
<br />
[Route]<br />
Destination=fd7b:d0bd:7a6e::/64<br />
Scope=link<br />
}}<br />
<br />
{{Warning|In order to prevent the leaking of private keys, it is recommended to set the permissions of the ''.netdev'' file:<br />
<br />
# chown root:systemd-network /etc/systemd/network/99-*.netdev<br />
# chmod 0640 /etc/systemd/network/99-*.netdev<br />
<br />
}}<br />
<br />
'''Peer B setup:'''<br />
<br />
{{hc|/etc/systemd/network/99-wg0.netdev|2=<br />
[NetDev]<br />
Name=wg0<br />
Kind=wireguard<br />
Description=WireGuard tunnel wg0<br />
<br />
[WireGuard]<br />
ListenPort=51902<br />
PrivateKey=''PEER_B_PRIVATE_KEY''<br />
<br />
[WireGuardPeer]<br />
PublicKey=''PEER_A_PUBLIC_KEY''<br />
PresharedKey=''PEER_A-PEER_B-PRESHARED_KEY''<br />
AllowedIPs=10.0.0.1/32<br />
AllowedIPs=fdc9:281f:04d7:9ee9::1/128<br />
Endpoint=198.51.100.101:51871<br />
<br />
[WireGuardPeer]<br />
PublicKey=''PEER_C_PUBLIC_KEY''<br />
PresharedKey=''PEER_B-PEER_C-PRESHARED_KEY''<br />
AllowedIPs=10.0.0.3/32<br />
AllowedIPs=fdc9:281f:04d7:9ee9::3/128<br />
}}<br />
<br />
{{hc|/etc/systemd/network/99-wg0.network|2=<br />
[Match]<br />
Name=wg0<br />
<br />
[Network]<br />
Address=10.0.0.2/24<br />
Address=fdc9:281f:04d7:9ee9::2/64<br />
}}<br />
<br />
'''Peer C setup:'''<br />
<br />
{{hc|/etc/systemd/network/99-wg0.netdev|2=<br />
[NetDev]<br />
Name=wg0<br />
Kind=wireguard<br />
Description=WireGuard tunnel wg0<br />
<br />
[WireGuard]<br />
ListenPort=51993<br />
PrivateKey=''PEER_C_PRIVATE_KEY''<br />
<br />
[WireGuardPeer]<br />
PublicKey=''PEER_A_PUBLIC_KEY''<br />
PresharedKey=''PEER_A-PEER_C-PRESHARED_KEY''<br />
AllowedIPs=10.0.0.1/32<br />
AllowedIPs=fdc9:281f:04d7:9ee9::1/128<br />
Endpoint=198.51.100.101:51871<br />
<br />
[WireGuardPeer]<br />
PublicKey=''PEER_B_PUBLIC_KEY''<br />
PresharedKey=''PEER_B-PEER_C-PRESHARED_KEY''<br />
AllowedIPs=10.0.0.2/32<br />
AllowedIPs=fdc9:281f:04d7:9ee9::2/128<br />
Endpoint=peer-b.example:51902<br />
}}<br />
<br />
{{hc|/etc/systemd/network/99-wg0.network|2=<br />
[Match]<br />
Name=wg0<br />
<br />
[Network]<br />
Address=10.0.0.3/24<br />
Address=fdc9:281f:04d7:9ee9::3/64<br />
}}<br />
<br />
==== systemd-networkd: routing all traffic over WireGuard ====<br />
<br />
In this example, peer B connects to peer A, which has a public IP address. Peer B routes all of its traffic over the WireGuard tunnel and uses peer A for handling DNS requests.<br />
<br />
'''Peer A setup'''<br />
<br />
{{hc|/etc/systemd/network/99-wg0.netdev|2=<br />
[NetDev]<br />
Name=wg0<br />
Kind=wireguard<br />
Description=WireGuard tunnel wg0<br />
<br />
[WireGuard]<br />
ListenPort=51871<br />
PrivateKey=''PEER_A_PRIVATE_KEY''<br />
<br />
[WireGuardPeer]<br />
PublicKey=''PEER_B_PUBLIC_KEY''<br />
PresharedKey=''PEER_A-PEER_B-PRESHARED_KEY''<br />
AllowedIPs=10.0.0.2/32<br />
}}<br />
<br />
{{hc|/etc/systemd/network/99-wg0.network|2=<br />
[Match]<br />
Name=wg0<br />
<br />
[Network]<br />
Address=10.0.0.1/24<br />
}}<br />
<br />
{{Note|You must still enable [[Internet sharing#Enable packet forwarding|IP forwarding]] and IP masquerading rules on peer A in order to provide Internet access to peer B.<br />
<br />
This assumes [[ufw]], but the same can be achieved with [[iptables]] by using the rules outlined in the [[#Server config|Server config]] section:<br />
<br />
# ufw route allow in on wg0 out on enp5s0<br />
<br />
{{hc|/etc/ufw/before.rules|2=<br />
*nat<br />
:POSTROUTING ACCEPT [0:0]<br />
-A POSTROUTING -s 10.0.0.0/24 -o enp5s0 -j MASQUERADE<br />
COMMIT<br />
}}<br />
<br />
}}<br />
<br />
'''Peer B setup:'''<br />
<br />
{{hc|/etc/systemd/network/99-wg0.netdev|2=<br />
[NetDev]<br />
Name=wg0<br />
Kind=wireguard<br />
Description=WireGuard tunnel wg0<br />
<br />
[WireGuard]<br />
ListenPort=51902<br />
PrivateKey=''PEER_B_PRIVATE_KEY''<br />
FirewallMark=0x8888<br />
<br />
[WireGuardPeer]<br />
PublicKey=''PEER_A_PUBLIC_KEY''<br />
PresharedKey=''PEER_A-PEER_B-PRESHARED_KEY''<br />
AllowedIPs=0.0.0.0/0<br />
Endpoint=198.51.100.101:51871<br />
}}<br />
<br />
{{hc|/etc/systemd/network/50-wg0.network|2=<br />
[Match]<br />
Name=wg0<br />
<br />
[Network]<br />
Address=10.0.0.2/24<br />
DNS=10.0.0.1<br />
DNSDefaultRoute=true<br />
Domains=~.<br />
<br />
[RoutingPolicyRule]<br />
FirewallMark=0x8888<br />
InvertRule=true<br />
Table=1000<br />
Priority=10<br />
<br />
[Route]<br />
Gateway=10.0.0.1<br />
GatewayOnLink=true<br />
Table=1000<br />
}}<br />
<br />
==== Netctl ====<br />
<br />
[[Netctl]] has native support for setting up WireGuard interfaces. A typical set of WireGuard netctl profile configuration files would look like this:<br />
<br />
'''Peer A setup:'''<br />
<br />
{{hc|/etc/netctl/wg0|2=<br />
Description="WireGuard tunnel on peer A"<br />
Interface=wg0<br />
Connection=wireguard<br />
WGConfigFile=/etc/wireguard/wg0.conf<br />
<br />
IP=static<br />
Address=('10.0.0.1/24')<br />
}}<br />
<br />
{{hc|/etc/wireguard/wg0.conf|2=<br />
[Interface]<br />
ListenPort = 51871<br />
PrivateKey = ''PEER_A_PRIVATE_KEY''<br />
<br />
[Peer]<br />
PublicKey = ''PEER_B_PUBLIC_KEY''<br />
AllowedIPs = 10.0.0.2/32<br />
Endpoint = peer-b.example:51902<br />
<br />
[Peer]<br />
PublicKey = ''PEER_C_PUBLIC_KEY''<br />
AllowedIPs = 10.0.0.3/32<br />
}}<br />
<br />
'''Peer B setup:'''<br />
<br />
{{hc|/etc/netctl/wg0|2=<br />
Description="WireGuard tunnel on peer B"<br />
Interface=wg0<br />
Connection=wireguard<br />
WGConfigFile=/etc/wireguard/wg0.conf<br />
<br />
IP=static<br />
Address=('10.0.0.2/24')<br />
}}<br />
<br />
{{hc|/etc/wireguard/wg0.conf|2=<br />
[Interface]<br />
ListenPort = 51902<br />
PrivateKey = ''PEER_B_PRIVATE_KEY''<br />
<br />
[Peer]<br />
PublicKey = ''PEER_A_PUBLIC_KEY''<br />
AllowedIPs = 10.0.0.1/32<br />
Endpoint = peer-a.example:51871<br />
<br />
[Peer]<br />
PublicKey = ''PEER_C_PUBLIC_KEY''<br />
AllowedIPs = 10.0.0.3/32<br />
}}<br />
<br />
'''Peer C setup:'''<br />
<br />
{{hc|/etc/netctl/wg0|2=<br />
Description="WireGuard tunnel on peer C"<br />
Interface=wg0<br />
Connection=wireguard<br />
WGConfigFile=/etc/wireguard/wg0.conf<br />
<br />
IP=static<br />
Address=('10.0.0.3/24')<br />
}}<br />
<br />
{{hc|/etc/wireguard/wg0.conf|2=<br />
[Interface]<br />
ListenPort = 51993<br />
PrivateKey = ''PEER_C_PRIVATE_KEY''<br />
<br />
[Peer]<br />
PublicKey = ''PEER_A_PUBLIC_KEY''<br />
AllowedIPs = 10.0.0.1/32<br />
Endpoint = peer-a.example:51871<br />
<br />
[Peer]<br />
PublicKey = ''PEER_B_PUBLIC_KEY''<br />
AllowedIPs = 10.0.0.2/32<br />
Endpoint = peer-b.example:51902<br />
}}<br />
<br />
Then [[start]] and/or [[enable]] the {{ic|wg0}} interface on every participating peer as needed, e.g.:<br />
<br />
# netctl start wg0<br />
<br />
To implement a persistent point-to-site, site-to-point or site-to-site connection with WireGuard and netctl, add an appropriate {{ic|1=Routes=}} line to the netctl profile and add that network to {{ic|AllowedIPs}} in the WireGuard configuration, e.g. {{ic|1=Routes=('192.168.10.0/24 dev wg0')}} in {{ic|/etc/netctl/wg0}} and {{ic|1=AllowedIPs = 10.0.0.1/32, 192.168.10.0/24}} in {{ic|/etc/wireguard/wg0.conf}}. Then do not forget to enable [[Internet sharing#Enable packet forwarding|IP forwarding]].<br />
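<br />
A sketch of those two additions; the {{ic|192.168.10.0/24}} network is an example value.<br />
<br />
```
# /etc/netctl/wg0 - route the remote network through the tunnel
Routes=('192.168.10.0/24 dev wg0')

# /etc/wireguard/wg0.conf - in the relevant [Peer] section
AllowedIPs = 10.0.0.1/32, 192.168.10.0/24
```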
<br />
==== NetworkManager ====<br />
<br />
[[NetworkManager]] has native support for setting up WireGuard interfaces. For all details about WireGuard usage in NetworkManager, read Thomas Haller's blog post—[https://blogs.gnome.org/thaller/2019/03/15/wireguard-in-networkmanager/ WireGuard in NetworkManager].<br />
<br />
{{Tip|NetworkManager can import a wg-quick configuration file. E.g.: {{bc|# nmcli connection import type wireguard file /etc/wireguard/wg0.conf}}}}<br />
<br />
{{Note|nmcli can create a WireGuard connection profile, but it does not support configuring peers. See [https://gitlab.freedesktop.org/NetworkManager/NetworkManager/issues/358 NetworkManager issue 358].}}<br />
<br />
The following examples configure WireGuard via the keyfile format ''.nmconnection'' files. See {{man|5|nm-settings-keyfile}} and {{man|5|nm-settings}} for an explanation on the syntax and available options.<br />
<br />
'''Peer A setup:'''<br />
<br />
{{hc|/etc/NetworkManager/system-connections/wg0.nmconnection|2=<br />
[connection]<br />
id=wg0<br />
type=wireguard<br />
interface-name=wg0<br />
<br />
[wireguard]<br />
listen-port=51871<br />
private-key=''PEER_A_PRIVATE_KEY''<br />
private-key-flags=0<br />
<br />
[wireguard-peer.''PEER_B_PUBLIC_KEY'']<br />
endpoint=peer-b.example:51902<br />
preshared-key=''PEER_A-PEER_B-PRESHARED_KEY''<br />
preshared-key-flags=0<br />
allowed-ips=10.0.0.2/32;fdc9:281f:04d7:9ee9::2/128;<br />
<br />
[wireguard-peer.''PEER_C_PUBLIC_KEY'']<br />
preshared-key=''PEER_A-PEER_C-PRESHARED_KEY''<br />
preshared-key-flags=0<br />
allowed-ips=10.0.0.3/32;fdc9:281f:04d7:9ee9::3/128;<br />
<br />
[ipv4]<br />
address1=10.0.0.1/24<br />
method=manual<br />
<br />
[ipv6]<br />
address1=fdc9:281f:04d7:9ee9::1/64<br />
method=manual<br />
}}<br />
<br />
* To ''route all traffic'' through the tunnel to a specific peer, add the [[Wikipedia:Default route|default route]] ({{ic|0.0.0.0/0}} for IPv4 and {{ic|::/0}} for IPv6) to {{ic|wireguard-peer.''PEER_X_PUBLIC_KEY''.allowed-ips}}. E.g. {{ic|1=wireguard-peer.''PEER_B_PUBLIC_KEY''.allowed-ips=0.0.0.0/0;::/0;}}. Special handling of the default route in WireGuard connections is supported since NetworkManager 1.20.0.<br />
* To use a peer as a DNS server, specify its WireGuard tunnel's IP address(es) with the {{ic|ipv4.dns}} and {{ic|ipv6.dns}} settings. [[Wikipedia:Search domain|Search domains]] can be specified with the {{ic|1=ipv4.dns-search=}} and {{ic|1=ipv6.dns-search=}} options. See {{man|5|nm-settings}} for more details. For example, using the keyfile format:<br />
<br />
{{bc|1=<br />
...<br />
[ipv4]<br />
...<br />
dns=10.0.0.2;<br />
dns-search=corp;<br />
...<br />
[ipv6]<br />
...<br />
dns=fdc9:281f:04d7:9ee9::2;<br />
dns-search=corp;<br />
...<br />
}}<br />
<br />
To use a peer as the '''only''' DNS server, set a negative DNS priority (e.g. {{ic|1=dns-priority=-1}}) and add {{ic|~.}} to the {{ic|1=dns-search=}} settings.<br />
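<br />
A hedged sketch of those settings in keyfile form (see {{man|5|nm-settings-keyfile}} for the exact syntax; the address is a placeholder):<br />
<br />
```
[ipv4]
dns=10.0.0.2;
dns-search=~.;
dns-priority=-1
```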
<br />
'''Peer B setup:'''<br />
<br />
{{hc|/etc/NetworkManager/system-connections/wg0.nmconnection|2=<br />
[connection]<br />
id=wg0<br />
type=wireguard<br />
interface-name=wg0<br />
<br />
[wireguard]<br />
listen-port=51902<br />
private-key=''PEER_B_PRIVATE_KEY''<br />
private-key-flags=0<br />
<br />
[wireguard-peer.''PEER_A_PUBLIC_KEY'']<br />
endpoint=198.51.100.101:51871<br />
preshared-key=''PEER_A-PEER_B-PRESHARED_KEY''<br />
preshared-key-flags=0<br />
allowed-ips=10.0.0.1/32;fdc9:281f:04d7:9ee9::1/128;<br />
<br />
[wireguard-peer.''PEER_C_PUBLIC_KEY'']<br />
preshared-key=''PEER_B-PEER_C-PRESHARED_KEY''<br />
preshared-key-flags=0<br />
allowed-ips=10.0.0.3/32;fdc9:281f:04d7:9ee9::3/128;<br />
<br />
[ipv4]<br />
address1=10.0.0.2/24<br />
method=manual<br />
<br />
[ipv6]<br />
address1=fdc9:281f:04d7:9ee9::2/64<br />
method=manual<br />
}}<br />
<br />
'''Peer C setup:'''<br />
<br />
{{hc|/etc/NetworkManager/system-connections/wg0.nmconnection|2=<br />
[connection]<br />
id=wg0<br />
type=wireguard<br />
interface-name=wg0<br />
<br />
[wireguard]<br />
listen-port=51993<br />
private-key=''PEER_C_PRIVATE_KEY''<br />
private-key-flags=0<br />
<br />
[wireguard-peer.''PEER_A_PUBLIC_KEY'']<br />
endpoint=198.51.100.101:51871<br />
preshared-key=''PEER_A-PEER_C-PRESHARED_KEY''<br />
preshared-key-flags=0<br />
allowed-ips=10.0.0.1/32;fdc9:281f:04d7:9ee9::1/128;<br />
<br />
[wireguard-peer.''PEER_B_PUBLIC_KEY'']<br />
endpoint=peer-b.example:51902<br />
preshared-key=''PEER_B-PEER_C-PRESHARED_KEY''<br />
preshared-key-flags=0<br />
allowed-ips=10.0.0.2/32;fdc9:281f:04d7:9ee9::2/128;<br />
<br />
[ipv4]<br />
address1=10.0.0.3/24<br />
method=manual<br />
<br />
[ipv6]<br />
address1=fdc9:281f:04d7:9ee9::3/64<br />
method=manual<br />
}}<br />
<br />
== Specific use-case: VPN server ==<br />
<br />
{{Merge|#Routing all traffic over WireGuard|Same use case.}}<br />
<br />
{{Note|The terms "server" and "client" were purposely chosen in this section to help new users and existing OpenVPN users become familiar with the structure of WireGuard's configuration files. The WireGuard documentation simply refers to both of these concepts as "peers."}}<br />
<br />
The purpose of this section is to set up a WireGuard "server" and generic "clients" to enable access to the server/network resources through an encrypted and secured tunnel, like [[OpenVPN]] and others. The "server" runs on Linux, and the "clients" can run on a number of platforms (the WireGuard project offers apps for iOS and Android in addition to Linux, Windows and macOS). See the official [https://www.wireguard.com/install/ installation page] for more.<br />
<br />
{{Tip|Instead of using {{pkg|wireguard-tools}} for server/client configuration, one may also use [[#systemd-networkd|systemd-networkd]] native WireGuard support.}}<br />
<br />
=== Server ===<br />
<br />
{{Merge|#Site-to-point|Same use case.}}<br />
<br />
On the peer that will act as the "server", first enable IPv4 forwarding using [[sysctl]]:<br />
<br />
# sysctl -w net.ipv4.ip_forward=1<br />
<br />
To make the change permanent, add {{ic|1=net.ipv4.ip_forward = 1}} to {{ic|/etc/sysctl.d/99-sysctl.conf}}.<br />
<br />
A properly configured [[firewall]] is ''HIGHLY recommended'' for any Internet-facing device.<br />
<br />
If the server has a public IP configured, be sure to:<br />
<br />
* Allow UDP traffic on the specified port(s) on which WireGuard will be running (for example allowing traffic on {{ic|51820/UDP}}).<br />
* Set up the firewall's forwarding policy if it is not included in the WireGuard configuration for the interface itself ({{ic|/etc/wireguard/wg0.conf}}). The example in [[#Server config]] includes the necessary iptables rules and should work as-is.<br />
<br />
If the server is behind NAT, be sure to forward the specified port(s) on which WireGuard will be running (for example, {{ic|51820/UDP}}) from the router to the WireGuard server.<br />
<br />
=== Key generation ===<br />
<br />
Generate key pairs for the server and for each client as explained in [[#Key generation]].<br />
<br />
=== Server config ===<br />
<br />
Create the "server" config file:<br />
<br />
{{hc|/etc/wireguard/wg0.conf|2=<br />
[Interface]<br />
Address = 10.200.200.1/24<br />
ListenPort = 51820<br />
PrivateKey = ''SERVER_PRIVATE_KEY''<br />
<br />
# substitute ''eth0'' in the following lines to match the Internet-facing interface<br />
# if the server is behind a router and receives traffic via NAT, these iptables rules are not needed<br />
PostUp = iptables -A FORWARD -i %i -j ACCEPT; iptables -A FORWARD -o %i -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE<br />
PostDown = iptables -D FORWARD -i %i -j ACCEPT; iptables -D FORWARD -o %i -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE<br />
<br />
[Peer]<br />
# foo<br />
PublicKey = ''PEER_FOO_PUBLIC_KEY''<br />
PresharedKey = ''PRE-SHARED_KEY''<br />
AllowedIPs = 10.200.200.2/32<br />
<br />
[Peer]<br />
# bar<br />
PublicKey = ''PEER_BAR_PUBLIC_KEY''<br />
PresharedKey = ''PRE-SHARED_KEY''<br />
AllowedIPs = 10.200.200.3/32<br />
}}<br />
<br />
Additional peers ("clients") can be listed in the same format as needed. Each peer requires the {{ic|PublicKey}} to be set. However, specifying {{ic|PresharedKey}} is optional.<br />
<br />
Notice that the server's {{ic|Address}} has a {{ic|/24}} netmask while the clients' {{ic|AllowedIPs}} entries use {{ic|/32}}: each client only uses its own IP, and the server only routes each client's respective address back to it.<br />
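<br />
The effect of these masks can be illustrated with a plain-shell IPv4 prefix check. This is an illustrative helper, not part of wireguard-tools; WireGuard performs the equivalent matching internally when deciding which peer a packet belongs to.<br />
<br />
```shell
# in_cidr prints yes if ADDRESS falls inside NETWORK/PREFIX, else no.
ip_to_int() {
    # Convert a dotted-quad IPv4 address to a 32-bit integer.
    set -- $(printf '%s' "$1" | tr '.' ' ')
    echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}

in_cidr() {
    # Usage: in_cidr ADDRESS NETWORK/PREFIX
    addr=$(ip_to_int "$1")
    net=$(ip_to_int "${2%/*}")
    mask=$(( 0xffffffff << (32 - ${2#*/}) & 0xffffffff ))
    [ $(( addr & mask )) -eq $(( net & mask )) ] && echo yes || echo no
}

in_cidr 10.200.200.2 10.200.200.2/32    # yes: a /32 matches exactly one host
in_cidr 10.200.200.3 10.200.200.2/32    # no: traffic for .3 is never sent to the .2 peer
in_cidr 10.200.200.3 10.200.200.0/24    # yes: the server's /24 covers the whole subnet
```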
<br />
The interface can be managed manually using {{man|8|wg-quick}} or using a [[systemd]] service managed via {{man|1|systemctl}}.<br />
<br />
The interface can be brought up with {{ic|wg-quick up wg0}}, or by [[start|starting]] and optionally [[enable|enabling]] {{ic|wg-quick@''interface''.service}}, e.g. {{ic|wg-quick@wg0.service}}. To close the interface, use {{ic|wg-quick down wg0}} or [[stop]] {{ic|wg-quick@''interface''.service}}.<br />
<br />
=== Client config ===<br />
<br />
Create the corresponding "client" config file(s):<br />
<br />
{{hc|foo.conf|2=<br />
[Interface]<br />
Address = 10.200.200.2/32<br />
PrivateKey = ''PEER_FOO_PRIVATE_KEY''<br />
DNS = 10.200.200.1<br />
<br />
[Peer]<br />
PublicKey = ''SERVER_PUBLICKEY''<br />
PresharedKey = ''PRE-SHARED_KEY''<br />
Endpoint = my.ddns.example.com:51820<br />
AllowedIPs = 0.0.0.0/0, ::/0<br />
}}<br />
<br />
{{hc|bar.conf|2=<br />
[Interface]<br />
Address = 10.200.200.3/32<br />
PrivateKey = ''PEER_BAR_PRIVATE_KEY''<br />
DNS = 10.200.200.1<br />
<br />
[Peer]<br />
PublicKey = ''SERVER_PUBLICKEY''<br />
PresharedKey = ''PRE-SHARED KEY''<br />
Endpoint = my.ddns.example.com:51820<br />
AllowedIPs = 0.0.0.0/0, ::/0<br />
}}<br />
<br />
Using the catch-all {{ic|1=AllowedIPs = 0.0.0.0/0, ::/0}} will forward all IPv4 ({{ic|0.0.0.0/0}}) and IPv6 ({{ic|::/0}}) traffic over the VPN.<br />
<br />
{{Note|Users of [[NetworkManager]] may need to [[enable]] {{ic|NetworkManager-wait-online.service}}, and users of [[systemd-networkd]] may need to [[enable]] {{ic|systemd-networkd-wait-online.service}}, to wait until devices are network-ready before attempting a WireGuard connection.}}<br />
<br />
== Testing the tunnel ==<br />
<br />
{{Merge|#Basic checkups|Same topic.}}<br />
<br />
Once a tunnel has been established, one can use [[netcat]] to send traffic through it to test out throughput, CPU usage, etc.<br />
On one side of the tunnel, run {{ic|nc}} in listen mode and on the other side, pipe some data from {{ic|/dev/zero}} into {{ic|nc}} in sending mode.<br />
<br />
In the example below, port 2222 is used for the traffic (be sure to allow traffic on port 2222 if using a firewall).<br />
<br />
On one side of the tunnel listen for traffic:<br />
<br />
$ nc -vvlnp 2222<br />
<br />
On the other side of the tunnel, send some traffic:<br />
<br />
$ dd if=/dev/zero bs=1024K count=1024 | nc -v 10.0.0.203 2222<br />
<br />
Status can be monitored using {{ic|wg}} directly.<br />
<br />
{{hc|# wg|2=<br />
interface: wg0<br />
public key: UguPyBThx/+xMXeTbRYkKlP0Wh/QZT3vTLPOVaaXTD8=<br />
private key: (hidden)<br />
listening port: 51820<br />
<br />
peer: 9jalV3EEBnVXahro0pRMQ+cHlmjE33Slo9tddzCVtCw=<br />
preshared key: (hidden)<br />
endpoint: 192.168.1.216:53207<br />
allowed ips: 10.0.0.0/0<br />
latest handshake: 1 minute, 17 seconds ago<br />
transfer: 56.43 GiB received, 1.06 TiB sent<br />
}}<br />
<br />
== Tips and tricks ==<br />
<br />
=== Store private keys in encrypted form ===<br />
<br />
It may be desirable to store private keys in encrypted form, for example by using {{pkg|pass}}. Replace the {{ic|PrivateKey}} line under {{ic|[Interface]}} in the configuration file with:<br />
<br />
PostUp = wg set %i private-key <(su user -c "export PASSWORD_STORE_DIR=/path/to/your/store/; pass WireGuard/private-keys/%i")<br />
<br />
where ''user'' is the Linux username of interest. See the {{man|8|wg-quick}} man page for more details.<br />
<br />
=== Endpoint with changing IP ===<br />
<br />
After resolving a server's domain, WireGuard [https://lists.zx2c4.com/pipermail/wireguard/2017-November/002028.html will not check for changes in DNS again].<br />
<br />
If the WireGuard server frequently changes its IP address due to DHCP, dynamic DNS, IPv6 prefix changes, etc., WireGuard clients will lose their connection until their endpoint is updated, e.g. via {{ic|wg set "$INTERFACE" peer "$PUBLIC_KEY" endpoint "$ENDPOINT"}}.<br />
<br />
Also be aware that if the endpoint ever changes its address (for example when moving to a new provider/datacenter), just updating DNS will not be enough, so periodically re-resolving the endpoint makes sense on any DNS-based setup.<br />
<br />
Luckily, {{Pkg|wireguard-tools}} provides an example script, {{ic|/usr/share/wireguard-tools/examples/reresolve-dns/reresolve-dns.sh}}, that parses WireGuard configuration files and automatically resets the endpoint address.<br />
<br />
Run {{ic|/usr/share/wireguard-tools/examples/reresolve-dns/reresolve-dns.sh /etc/wireguard/wg.conf}} periodically to recover from an endpoint that has changed its IP address.<br />
<br />
One way of doing so is by updating all WireGuard endpoints once every thirty seconds[https://git.zx2c4.com/WireGuard/tree/contrib/examples/reresolve-dns/README] via a systemd timer:<br />
<br />
{{hc|/etc/systemd/system/wireguard_reresolve-dns.timer|2=<br />
[Unit]<br />
Description=Periodically reresolve DNS of all WireGuard endpoints<br />
<br />
[Timer]<br />
OnCalendar=*:*:0/30<br />
<br />
[Install]<br />
WantedBy=timers.target<br />
}}<br />
<br />
{{hc|/etc/systemd/system/wireguard_reresolve-dns.service|2=<br />
[Unit]<br />
Description=Reresolve DNS of all WireGuard endpoints<br />
Wants=network-online.target<br />
After=network-online.target<br />
<br />
[Service]<br />
Type=oneshot<br />
ExecStart=/bin/sh -c 'for i in /etc/wireguard/*.conf; do /usr/share/wireguard-tools/examples/reresolve-dns/reresolve-dns.sh "$i"; done'<br />
}}<br />
<br />
Afterwards, [[enable]] and [[start]] {{ic|wireguard_reresolve-dns.timer}}.<br />
<br />
=== Generate QR code ===<br />
<br />
If the client is a mobile device such as a phone, {{Pkg|qrencode}} can be used to generate the client's configuration as a QR code and display it in the terminal:<br />
<br />
$ qrencode -t ansiutf8 -r ''client.conf''<br />
<br />
=== Enable debug logs ===<br />
<br />
When using the Linux kernel module on a kernel that supports dynamic debugging, debugging information can be written into the kernel ring buffer (viewable with [[dmesg]] and [[journalctl]]) by running:<br />
<br />
# modprobe wireguard<br />
# echo module wireguard +p > /sys/kernel/debug/dynamic_debug/control<br />
<br />
=== Reload peer (server) configuration ===<br />
<br />
If a WireGuard peer (usually a server) adds or removes peers in its configuration and wants to reload it without stopping any active sessions, execute the following command:<br />
<br />
# wg syncconf ${WGNET} <(wg-quick strip ${WGNET})<br />
<br />
where {{ic|$WGNET}} is the WireGuard interface name or the configuration's base name, for example {{ic|wg0}} (for the server) or {{ic|client}} (without the ''.conf'' extension, for the client).<br />
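For repeated use, the command above can be wrapped in a small helper script. This is only a sketch: the script name is a hypothetical choice, and it assumes {{Pkg|wireguard-tools}} provides ''wg'' and ''wg-quick''. Writing the file merely creates the script; it does not touch any interface.

```shell
# Sketch of a reload helper (hypothetical name: reload-wg.sh).
cat > reload-wg.sh <<'EOF'
#!/bin/bash
# Usage: reload-wg.sh <interface-or-config-basename>
WGNET="${1:?usage: reload-wg.sh <interface-or-config-basename>}"
# Sync the running interface with the stripped wg-quick configuration
# without tearing down active sessions.
wg syncconf "$WGNET" <(wg-quick strip "$WGNET")
EOF
chmod +x reload-wg.sh
```

Run it as root, e.g. {{ic|./reload-wg.sh wg0}}, after editing the configuration file.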
<br />
{{Expansion|Show how to do this with other network managers from [[#Persistent configuration]].}}<br />
<br />
== Troubleshooting ==<br />
<br />
=== Routes are periodically reset ===<br />
<br />
Users of [[NetworkManager]] should make sure that it [[NetworkManager#Ignore specific devices|is not managing]] the WireGuard interface(s). For example, create the following configuration file:<br />
<br />
{{hc|/etc/NetworkManager/conf.d/unmanaged.conf|2=<br />
[keyfile]<br />
unmanaged-devices=type:wireguard<br />
}}<br />
<br />
=== Broken DNS resolution ===<br />
<br />
When tunneling all traffic through a WireGuard interface, the connection can seemingly be lost after a while or upon a new connection. This can be caused by a [[network manager]] or [[DHCP]] client overwriting {{ic|/etc/resolv.conf}}.<br />
<br />
By default, ''wg-quick'' uses ''resolvconf'' to register new [[DNS]] entries (from the {{ic|DNS}} keyword in the configuration file). This will cause issues with [[network manager]]s and [[DHCP]] clients that do not use ''resolvconf'', as they will overwrite {{ic|/etc/resolv.conf}}, thus removing the DNS servers added by ''wg-quick''.<br />
<br />
The solution is to use networking software that supports [[resolvconf]].<br />
<br />
{{Note|Users of [[systemd-resolved]] should make sure that {{Pkg|systemd-resolvconf}} is [[install]]ed.}}<br />
<br />
Users of [[NetworkManager]] should know that it does not use resolvconf by default. It is recommended to use [[systemd-resolved]]. If this is undesirable, [[install]] {{Pkg|openresolv}} and configure NetworkManager to use it: [[NetworkManager#Use openresolv]].<br />
<br />
=== Low MTU ===<br />
<br />
If the MTU is too low (lower than 1280), ''wg-quick'' may fail to create the WireGuard interface. This can be solved by setting the MTU value in the {{ic|[Interface]}} section of the client's WireGuard configuration:<br />
{{hc|foo.config|2=<br />
[Interface]<br />
Address = 10.200.200.2/24<br />
MTU = 1420<br />
PrivateKey = ''PEER_FOO_PRIVATE_KEY''<br />
DNS = 10.200.200.1<br />
}}<br />
<br />
=== Key is not the correct length or format ===<br />
<br />
To avoid the following error, put the key value in the configuration file and not the path to the key file.<br />
<br />
{{hc|# wg-quick up wg0|<br />
[#] ip link add wg0 type wireguard<br />
[#] wg setconf wg0 /dev/fd/63<br />
Key is not the correct length or format: `''/path/example.key'''<br />
Configuration parsing error<br />
[#] ip link delete dev wg0<br />
}}<br />
<br />
=== Unable to establish a persistent connection behind NAT / firewall ===<br />
<br />
By default, WireGuard peers remain silent while they do not need to communicate, so peers located behind a NAT and/or [[firewall]] may be unreachable from other peers until they reach out to other peers themselves (or the connection may time out). Adding {{ic|1=PersistentKeepalive = 25}} to the {{ic|[Peer]}} settings of a peer located behind a NAT and/or firewall can ensure that the connection remains open.<br />
<br />
{{hc|# Set the persistent-keepalive via command line (temporarily)|<br />
[#] wg set wg0 peer $PUBKEY persistent-keepalive 25<br />
}}<br />
<br />
=== Loop routing ===<br />
<br />
If the endpoint IP is added to the allowed IPs list, the kernel will attempt to send handshakes to that device binding rather than using the original route. This results in failed handshake attempts.<br />
<br />
As a workaround, the correct route to the endpoint needs to be manually added using<br />
<br />
ip route add <endpoint ip> via <gateway> dev <network interface><br />
<br />
e.g. for peer B from above in a standard LAN setup:<br />
<br />
ip route add 203.0.113.102 via 192.168.0.1 dev eth0<br />
<br />
To make this route persistent, the command can be added as {{ic|1=PostUp = ip route ...}} to the {{ic|[Interface]}} section of {{ic|wg0.conf}}. However, on certain setups (e.g. using {{ic|wg-quick@.service}} in combination with NetworkManager) this might fail on resume. Furthermore, this only works for a static network setup and fails if gateways or devices change (e.g. using ethernet or wifi on a laptop).<br />
<br />
When using NetworkManager, a more flexible solution is to start WireGuard via a dispatcher script. As root, create<br />
{{hc|/etc/NetworkManager/dispatcher.d/50-wg0.sh|2=<br />
#!/bin/sh<br />
case $2 in<br />
up)<br />
wg-quick up wg0<br />
ip route add <endpoint ip> via $IP4_GATEWAY dev $DEVICE_IP_IFACE<br />
;;<br />
pre-down)<br />
wg-quick down wg0<br />
;;<br />
esac<br />
}}<br />
If not already running, start and enable {{ic|NetworkManager-dispatcher.service}}.<br />
Also, make sure that NetworkManager is not managing routes for {{ic|wg0}} ([[#Routes are periodically reset|see above]]).<br />
<br />
== See also ==<br />
<br />
* [[Wikipedia:WireGuard]]<br />
* [https://www.wireguard.com/presentations/ Presentations by Jason Donenfeld].<br />
* [https://lists.zx2c4.com/mailman/listinfo/wireguard Mailing list]<br />
* [https://docs.sweeting.me/s/wireguard Unofficial WireGuard Documentation]<br />
* [[Debian:Wireguard]]</div>
<hr />
<div>[[Category:Virtual Private Network]]<br />
[[ja:WireGuard]]<br />
[[zh-hans:WireGuard]]<br />
From the [https://www.wireguard.com/ WireGuard] project homepage:<br />
:WireGuard is an extremely simple yet fast and modern VPN that utilizes state-of-the-art cryptography. It aims to be faster, simpler, leaner, and more useful than IPsec, while avoiding the massive headache. It intends to be considerably more performant than OpenVPN. WireGuard is designed as a general purpose VPN for running on embedded interfaces and super computers alike, fit for many different circumstances. Initially released for the Linux kernel, it is now cross-platform (Windows, macOS, BSD, iOS, Android) and widely deployable.<br />
<br />
A rough introduction to the main concepts used in this article can be found on the [https://www.wireguard.com/ WireGuard] project homepage. WireGuard has been included in the Linux kernel since March 2020.<br />
<br />
== Installation ==<br />
<br />
[[Install]] the {{Pkg|wireguard-tools}} package for userspace utilities.<br />
<br />
Alternatively, various network managers provide support for WireGuard, provided that peer keys are available. See [[#Persistent configuration]] for details.<br />
<br />
=== Graphical clients ===<br />
<br />
* {{App|Qomui|OpenVPN GUI with advanced features and support for multiple providers.|https://github.com/corrad1nho/qomui|{{AUR|qomui}}}}<br />
<br />
== Usage ==<br />
<br />
{{Style|Useless section name – everything on this page is about WireGuard usage. Moving the 4 subsections to the top level would make sense.}}<br />
<br />
The commands below demonstrate how to set up a basic tunnel between two or more peers with the following settings:<br />
<br />
{| class="wikitable"<br />
! rowspan="2" |<br />
! colspan="3" | External (public) addresses<br />
! colspan="2" | Internal IP addresses<br />
! rowspan="2" | Port<br />
|-<br />
! Domain name<br />
! IPv4 address<br />
! IPv6 address<br />
! IPv4 address<br />
! IPv6 address<br />
|-<br />
! Peer A<br />
| <br />
| 198.51.100.101<br />
| 2001:db8:a85b:70a:ffd4:ec1b:4650:a001<br />
| 10.0.0.1/24<br />
| fdc9:281f:04d7:9ee9::1/64<br />
| UDP/51871<br />
|-<br />
! Peer B<br />
| peer-b.example<br />
| 203.0.113.102<br />
| 2001:db8:40f0:147a:80ad:3e88:f8e9:b002<br />
| 10.0.0.2/24<br />
| fdc9:281f:04d7:9ee9::2/64<br />
| UDP/51902<br />
|-<br />
! Peer C<br />
| <br />
| ''dynamic''<br />
| ''dynamic''<br />
| 10.0.0.3/24<br />
| fdc9:281f:04d7:9ee9::3/64<br />
| UDP/51993<br />
|}<br />
<br />
{{Tip|The same UDP port can be used for all peers.}}<br />
<br />
The external addresses should already exist. For example, if ICMP echo requests are not blocked, peer A should be able to [[ping]] peer B via its public IP address(es) and vice versa.<br />
<br />
The internal addresses will be new addresses, created either manually using the {{man|8|ip}} utility or by network management software, and used internally within the new WireGuard network. The following examples will use 10.0.0.0/24 and fdc9:281f:04d7:9ee9::/64 as the internal networks. The {{ic|/24}} and {{ic|/64}} suffixes in the IP addresses are [[Wikipedia:Classless Inter-Domain Routing#CIDR notation|CIDR]] notation.<br />
<br />
=== Key generation ===<br />
<br />
Create a private and public key for each peer. If connecting dozens of peers, optionally consider a vanity key pair to personalize the Base64-encoded public key string. See [[#Vanity keys]].<br />
<br />
To create a private key run:<br />
<br />
$ (umask 0077; wg genkey > peer_A.key)<br />
<br />
{{Note|It is recommended to only allow reading and writing access for the owner. The above alters the [[umask]] temporarily within a sub-shell to ensure that access (read/write permissions) is restricted to the owner.}}<br />
<br />
To create a public key:<br />
<br />
$ wg pubkey < peer_A.key > peer_A.pub<br />
<br />
Alternatively, do this all at once:<br />
<br />
$ wg genkey | (umask 0077 && tee peer_A.key) | wg pubkey > peer_A.pub<br />
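The effect of running {{ic|umask}} in a sub-shell can be verified in isolation; the following sketch uses a dummy file instead of a real key:

```shell
# Quick check that the umask only applies inside the sub-shell
# (demo.key is a throwaway file, not a real WireGuard key).
umask 0022
(umask 0077; echo dummy > demo.key)
stat -c '%a' demo.key   # prints 600: only the owner can read/write
umask                   # prints 0022: the parent shell is unaffected
```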
<br />
One can also generate a pre-shared key to add an additional layer of symmetric-key cryptography to be mixed into the already existing public-key cryptography, for post-quantum resistance. A pre-shared key should be generated for each peer pair and should not be reused. For example, three interconnected peers A, B, and C will need three separate pre-shared keys, one for each peer pair.<br />
<br />
Generate a pre-shared key for each peer pair using the following command:<br />
<br />
$ wg genpsk > peer_A-peer_B.psk<br />
$ wg genpsk > peer_A-peer_C.psk<br />
$ wg genpsk > peer_B-peer_C.psk<br />
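The number of pre-shared keys grows as n·(n−1)/2 with the number of peers, so enumerating the pairs in a loop can help for larger meshes. This sketch only prints a file name per pair (the names follow the convention above); replace the {{ic|echo}} redirection with {{ic|wg genpsk >}} to actually generate the keys.

```shell
# Enumerate every unordered peer pair and record one PSK file name each.
set -- A B C D
: > psk-names.txt
while [ $# -gt 1 ]; do
    first=$1
    shift
    # Pair the current peer with every remaining peer.
    for second in "$@"; do
        echo "peer_${first}-peer_${second}.psk" >> psk-names.txt
    done
done
cat psk-names.txt   # 6 file names, one per peer pair
```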
<br />
==== Vanity keys ====<br />
<br />
Currently, WireGuard does not support comments or attaching human-memorable names to keys. This makes identifying a key's owner difficult, particularly when multiple keys are in use. One solution is to generate a public key that contains some familiar characters (perhaps the first few letters of the owner's name or of the hostname, etc.); {{AUR|wireguard-vanity-address}} does this.<br />
<br />
=== Manual configuration ===<br />
<br />
==== Peer setup ====<br />
<br />
Manual setup is accomplished by using {{man|8|ip}} and {{man|8|wg}}.<br />
<br />
{{Style|These examples use the pre-shared keys which were introduced as ''optional'' in [[#Key generation]].}}<br />
<br />
'''Peer A setup:'''<br />
<br />
In this example peer A will listen on UDP port 51871 and will accept connections from peers B and C.<br />
<br />
# ip link add dev wg0 type wireguard<br />
# ip addr add 10.0.0.1/24 dev wg0<br />
# ip addr add fdc9:281f:04d7:9ee9::1/64 dev wg0<br />
# wg set wg0 listen-port 51871 private-key ''/path/to/''peer_A.key<br />
# wg set wg0 peer ''PEER_B_PUBLIC_KEY'' preshared-key ''/path/to/''peer_A-peer_B.psk endpoint peer-b.example:51902 allowed-ips 10.0.0.2/32,fdc9:281f:04d7:9ee9::2/128<br />
# wg set wg0 peer ''PEER_C_PUBLIC_KEY'' preshared-key ''/path/to/''peer_A-peer_C.psk allowed-ips 10.0.0.3/32,fdc9:281f:04d7:9ee9::3/128<br />
# ip link set wg0 up<br />
<br />
{{ic|''PEER_X_PUBLIC_KEY''}} should be the contents of {{ic|1=''peer_X''.pub}}.<br />
<br />
The keyword {{ic|allowed-ips}} is a list of addresses that will get routed to the peer. Make sure to specify at least one address range that contains the WireGuard connection's internal IP address(es).<br />
<br />
'''Peer B setup:'''<br />
<br />
# ip link add dev wg0 type wireguard<br />
# ip addr add 10.0.0.2/24 dev wg0<br />
# ip addr add fdc9:281f:04d7:9ee9::2/64 dev wg0<br />
# wg set wg0 listen-port 51902 private-key ''/path/to/''peer_B.key<br />
# wg set wg0 peer ''PEER_A_PUBLIC_KEY'' preshared-key ''/path/to/''peer_A-peer_B.psk endpoint 198.51.100.101:51871 allowed-ips 10.0.0.1/32,fdc9:281f:04d7:9ee9::1/128<br />
# wg set wg0 peer ''PEER_C_PUBLIC_KEY'' preshared-key ''/path/to/''peer_B-peer_C.psk allowed-ips 10.0.0.3/32,fdc9:281f:04d7:9ee9::3/128<br />
# ip link set wg0 up<br />
<br />
'''Peer C setup:'''<br />
<br />
# ip link add dev wg0 type wireguard<br />
# ip addr add 10.0.0.3/24 dev wg0<br />
# ip addr add fdc9:281f:04d7:9ee9::3/64 dev wg0<br />
# wg set wg0 listen-port 51993 private-key ''/path/to/''peer_C.key<br />
# wg set wg0 peer ''PEER_A_PUBLIC_KEY'' preshared-key ''/path/to/''peer_A-peer_C.psk endpoint 198.51.100.101:51871 allowed-ips 10.0.0.1/32,fdc9:281f:04d7:9ee9::1/128<br />
# wg set wg0 peer ''PEER_B_PUBLIC_KEY'' preshared-key ''/path/to/''peer_B-peer_C.psk endpoint peer-b.example:51902 allowed-ips 10.0.0.2/32,fdc9:281f:04d7:9ee9::2/128<br />
# ip link set wg0 up<br />
<br />
==== Additional routes ====<br />
<br />
To establish connections more complicated than point-to-point, additional setup is necessary.<br />
<br />
{{Expansion|Add a scenario: only peer A has a public IP address (i.e. ''endpoint''), peers B and C (which are generally behind a NAT) connect to peer A with {{ic|PersistentKeepalive}}, connections from peer B to peer C and vice versa are routed via peer A. Configuration: peers B and C have {{ic|10.0.0.0/24}} in {{ic|AllowedIPs}} for peer A, peer A must enable packet forwarding and masquerading via firewall rules, e.g. {{ic|iptables -A FORWARD -i wg+ -j ACCEPT}} and {{ic|iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o wg0 -j MASQUERADE}}.}}<br />
<br />
===== Point-to-site =====<br />
<br />
To access the network of a peer, specify the network subnet(s) in {{ic|allowed-ips}} in the configuration of the peers who should be able to connect to it. E.g. {{ic|allowed-ips 10.0.0.2/32,fdc9:281f:04d7:9ee9::2/128,'''192.168.35.0/24,fd7b:d0bd:7a6e::/64'''}}.<br />
<br />
Make sure to also set up the [[Network configuration#Routing table|routing table]] with {{man|8|ip-route}}. E.g.:<br />
<br />
# ip route add 192.168.35.0/24 dev wg0<br />
# ip route add fd7b:d0bd:7a6e::/64 dev wg0<br />
<br />
===== Site-to-point =====<br />
<br />
{{Expansion|Add {{ic|ip route}} examples; add alternative using NAT; mention the situation when the ''site''-peer is the network's gateway.}}<br />
<br />
If the intent is to connect a device to a network with WireGuard peer(s), set up routes on each device so they know that the peer(s) are reachable via the device.<br />
<br />
{{Tip|Deploy routes network-wide by configuring them in the router.}}<br />
<br />
Enable IP forwarding on the peer through which other devices on the network will connect to WireGuard peer(s):<br />
<br />
# sysctl -w net.ipv4.ip_forward=1<br />
# sysctl -w net.ipv6.conf.all.forwarding=1<br />
<br />
{{Warning|Enabling IP forwarding without a properly configured [[firewall]] is a security risk.}}<br />
<br />
See [[sysctl#Configuration]] for instructions on how to set the ''sysctl'' parameters on boot.<br />
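For example, a [[sysctl]] drop-in like the following (the file name is an arbitrary choice) makes the forwarding settings persistent across reboots:

```ini
# /etc/sysctl.d/99-wireguard-forwarding.conf (example file name)
net.ipv4.ip_forward = 1
net.ipv6.conf.all.forwarding = 1
```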
<br />
===== Site-to-site =====<br />
<br />
To connect two (or more) networks, apply both [[#Point-to-site]] and [[#Site-to-point]] on all ''sites''.<br />
<br />
===== Routing all traffic over WireGuard =====<br />
<br />
{{Expansion|Add instructions on how to ''route everything over VPN''.[https://www.wireguard.com/netns/] There is [[#systemd-networkd: routing all traffic over WireGuard]] already.}}<br />
<br />
==== DNS ====<br />
<br />
To use a peer as a DNS server, add its WireGuard tunnel IP address(es) to [[:/etc/resolv.conf]]. For example, to use peer B as the DNS server:<br />
<br />
{{hc|/etc/resolv.conf|<br />
nameserver fdc9:281f:04d7:9ee9::2<br />
nameserver 10.0.0.2<br />
}}<br />
<br />
{{Note|If a peer will act as a DNS server, make sure to use its WireGuard tunnel address(es) as the DNS server address(es) instead of another of its addresses from allowed IPs. Otherwise DNS lookups may fail.}}<br />
<br />
=== Basic checkups ===<br />
<br />
Invoking the {{man|8|wg}} command without parameters will give a quick overview of the current configuration.<br />
<br />
As an example, when peer A has been configured we are able to see its identity and its associated peers:<br />
<br />
{{hc|# wg|2=<br />
interface: wg0<br />
public key: UguPyBThx/+xMXeTbRYkKlP0Wh/QZT3vTLPOVaaXTD8=<br />
private key: (hidden)<br />
listening port: 51871<br />
<br />
peer: 9jalV3EEBnVXahro0pRMQ+cHlmjE33Slo9tddzCVtCw=<br />
endpoint: 203.0.113.102:51902<br />
allowed ips: 10.0.0.2/32, fdc9:281f:04d7:9ee9::2<br />
<br />
peer: 2RzKFbGMx5g7fG0BrWCI7JIpGvcwGkqUaCoENYueJw4=<br />
endpoint: 192.0.2.103:51993<br />
allowed ips: 10.0.0.3/32, fdc9:281f:04d7:9ee9::3<br />
}}<br />
<br />
At this point, the other end of the tunnel should be reachable. If the peers do not block ICMP echo requests, try [[ping]]ing a peer to test the connection between them.<br />
<br />
Using ICMPv4:<br />
<br />
$ ping 10.0.0.2<br />
<br />
Using ICMPv6:<br />
<br />
$ ping fdc9:281f:04d7:9ee9::2<br />
<br />
After transferring some data between peers, the {{ic|wg}} utility will show additional information:<br />
<br />
{{hc|# wg|2=<br />
interface: wg0<br />
public key: UguPyBThx/+xMXeTbRYkKlP0Wh/QZT3vTLPOVaaXTD8=<br />
private key: (hidden)<br />
listening port: 51871<br />
<br />
peer: 9jalV3EEBnVXahro0pRMQ+cHlmjE33Slo9tddzCVtCw=<br />
endpoint: 203.0.113.102:51902<br />
allowed ips: 10.0.0.2/32, fdc9:281f:04d7:9ee9::2<br />
latest handshake: 5 seconds ago<br />
transfer: 1.24 KiB received, 1.38 KiB sent<br />
<br />
peer: 2RzKFbGMx5g7fG0BrWCI7JIpGvcwGkqUaCoENYueJw4=<br />
allowed ips: 10.0.0.3/32, fdc9:281f:04d7:9ee9::3<br />
}}<br />
<br />
=== Persistent configuration ===<br />
<br />
Persistent configuration can be achieved using {{ic|wg-quick@.service}}, which is shipped with {{Pkg|wireguard-tools}}, or using a network manager. Network managers that support WireGuard are [[systemd-networkd]], [[netctl]][https://gitlab.archlinux.org/archlinux/netctl/blob/master/docs/examples/wireguard], [[NetworkManager]] and [[ConnMan]][https://git.kernel.org/pub/scm/network/connman/connman.git/tree/doc/vpn-config-format.txt].<br />
<br />
{{Note|1=<nowiki></nowiki><br />
* [[netctl]] relies on {{man|8|wg}} from {{Pkg|wireguard-tools}} and {{ic|/etc/wireguard/''interfacename''.conf}} configuration files for establishing WireGuard connections.<br />
* [[ConnMan]] has a very limited support for WireGuard. It can connect to only one peer.[https://git.kernel.org/pub/scm/network/connman/connman.git/commit/?id=95b25140bec7c4d9b6ae4e479dc1b94b7d409b39]<br />
}}<br />
<br />
==== wg-quick ====<br />
<br />
{{man|8|wg-quick}} configures WireGuard tunnels using configuration files from {{ic|/etc/wireguard/''interfacename''.conf}}.<br />
<br />
The current WireGuard configuration can be saved by utilizing the {{man|8|wg}} utility's {{ic|showconf}} command. For example:<br />
<br />
# wg showconf wg0 > /etc/wireguard/wg0.conf<br />
<br />
To start a tunnel with a configuration file, use<br />
<br />
# wg-quick up ''interfacename''<br />
<br />
or use the systemd service {{ic|wg-quick@''interfacename''.service}}. To start the tunnel at boot, [[enable]] the unit.<br />
<br />
{{Note|<br />
* Users configuring the WireGuard interface using ''wg-quick'', should make sure that no other [[network management]] software tries to manage it. To use [[NetworkManager]] and to not configure WireGuard interfaces with it, see [[#Routes are periodically reset]].<br />
* ''wg-quick'' adds additional configuration options to the configuration file format thus making it incompatible with {{man|8|wg|CONFIGURATION FILE FORMAT}}. See the {{man|8|wg-quick|CONFIGURATION}} man page for the configuration values in question. A ''wg''-compatible configuration file can be produced by using {{ic|wg-quick strip}}.<br />
* ''wg-quick'' does not provide a way to instruct [[resolvconf]] to set the WireGuard interface as ''private''. Even if there are search domains specified, all DNS queries from the system, not just those that match the search domains, will be sent to the DNS servers which are set in the WireGuard configuration.<br />
}}<br />
<br />
'''Peer A setup:'''<br />
<br />
{{hc|1=/etc/wireguard/wg0.conf|2=<br />
[Interface]<br />
Address = 10.0.0.1/24, fdc9:281f:04d7:9ee9::1/64<br />
ListenPort = 51871<br />
PrivateKey = ''PEER_A_PRIVATE_KEY''<br />
<br />
[Peer]<br />
PublicKey = ''PEER_B_PUBLIC_KEY''<br />
PresharedKey = ''PEER_A-PEER_B-PRESHARED_KEY''<br />
AllowedIPs = 10.0.0.2/32, fdc9:281f:04d7:9ee9::2/128<br />
Endpoint = peer-b.example:51902<br />
<br />
[Peer]<br />
PublicKey = ''PEER_C_PUBLIC_KEY''<br />
PresharedKey = ''PEER_A-PEER_C-PRESHARED_KEY''<br />
AllowedIPs = 10.0.0.3/32, fdc9:281f:04d7:9ee9::3/128<br />
}}<br />
<br />
* To ''route all traffic'' through the tunnel to a specific peer, add the [[Wikipedia:Default route|default route]] ({{ic|0.0.0.0/0}} for IPv4 and {{ic|::/0}} for IPv6) to {{ic|AllowedIPs}}. E.g. {{ic|1=AllowedIPs = 0.0.0.0/0, ::/0}}. wg-quick will automatically take care of setting up correct routing and fwmark[https://www.wireguard.com/netns/#routing-all-your-traffic] so that networking still functions.<br />
* To use a peer as a DNS server, set {{ic|1=DNS = ''wireguard_internal_ip_address_of_peer''}} in the {{ic|[Interface]}} section. [[Wikipedia:Search domain|Search domains]] are also set with the {{ic|1=DNS = }} option. Separate all values in the list with commas.<br />
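Combining the two points above, a minimal ''road warrior'' client configuration that tunnels all traffic through peer A and uses it as the DNS server could look like this (a sketch; the keys are placeholders and the addresses are taken from the examples in this article):

```ini
[Interface]
Address = 10.0.0.3/24
PrivateKey = PEER_C_PRIVATE_KEY
DNS = 10.0.0.1

[Peer]
PublicKey = PEER_A_PUBLIC_KEY
AllowedIPs = 0.0.0.0/0, ::/0
Endpoint = 198.51.100.101:51871
```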
<br />
'''Peer B setup:'''<br />
<br />
{{hc|1=/etc/wireguard/wg0.conf|2=<br />
[Interface]<br />
Address = 10.0.0.2/24, fdc9:281f:04d7:9ee9::2/64<br />
ListenPort = 51902<br />
PrivateKey = ''PEER_B_PRIVATE_KEY''<br />
<br />
[Peer]<br />
PublicKey = ''PEER_A_PUBLIC_KEY''<br />
PresharedKey = ''PEER_A-PEER_B-PRESHARED_KEY''<br />
AllowedIPs = 10.0.0.1/32, fdc9:281f:04d7:9ee9::1/128<br />
Endpoint = 198.51.100.101:51871<br />
<br />
[Peer]<br />
PublicKey = ''PEER_C_PUBLIC_KEY''<br />
PresharedKey = ''PEER_B-PEER_C-PRESHARED_KEY''<br />
AllowedIPs = 10.0.0.3/32, fdc9:281f:04d7:9ee9::3/128<br />
}}<br />
<br />
'''Peer C setup:'''<br />
<br />
{{hc|1=/etc/wireguard/wg0.conf|2=<br />
[Interface]<br />
Address = 10.0.0.3/24, fdc9:281f:04d7:9ee9::3/64<br />
ListenPort = 51993<br />
PrivateKey = ''PEER_C_PRIVATE_KEY''<br />
<br />
[Peer]<br />
PublicKey = ''PEER_A_PUBLIC_KEY''<br />
PresharedKey = ''PEER_A-PEER_C-PRESHARED_KEY''<br />
AllowedIPs = 10.0.0.1/32, fdc9:281f:04d7:9ee9::1/128<br />
Endpoint = 198.51.100.101:51871<br />
<br />
[Peer]<br />
PublicKey = ''PEER_B_PUBLIC_KEY''<br />
PresharedKey = ''PEER_B-PEER_C-PRESHARED_KEY''<br />
AllowedIPs = 10.0.0.2/32, fdc9:281f:04d7:9ee9::2/128<br />
Endpoint = peer-b.example:51902<br />
}}<br />
<br />
==== systemd-networkd ====<br />
<br />
[[systemd-networkd]] has native support for setting up WireGuard interfaces. An example is provided in the {{man|5|systemd.netdev|EXAMPLES}} man page.<br />
<br />
{{Note|Routing all DNS over WireGuard (i.e. {{ic|1=Domains=~.}}) will prevent the DNS resolution of endpoints.}}<br />
<br />
'''Peer A setup:'''<br />
<br />
{{hc|/etc/systemd/network/99-wg0.netdev|2=<br />
[NetDev]<br />
Name=wg0<br />
Kind=wireguard<br />
Description=WireGuard tunnel wg0<br />
<br />
[WireGuard]<br />
ListenPort=51871<br />
PrivateKey=''PEER_A_PRIVATE_KEY''<br />
<br />
[WireGuardPeer]<br />
PublicKey=''PEER_B_PUBLIC_KEY''<br />
PresharedKey=''PEER_A-PEER_B-PRESHARED_KEY''<br />
AllowedIPs=10.0.0.2/32<br />
AllowedIPs=fdc9:281f:04d7:9ee9::2/128<br />
Endpoint=peer-b.example:51902<br />
<br />
[WireGuardPeer]<br />
PublicKey=''PEER_C_PUBLIC_KEY''<br />
PresharedKey=''PEER_A-PEER_C-PRESHARED_KEY''<br />
AllowedIPs=10.0.0.3/32<br />
AllowedIPs=fdc9:281f:04d7:9ee9::3/128<br />
}}<br />
<br />
{{hc|/etc/systemd/network/99-wg0.network|2=<br />
[Match]<br />
Name=wg0<br />
<br />
[Network]<br />
Address=10.0.0.1/24<br />
Address=fdc9:281f:04d7:9ee9::1/64<br />
}}<br />
<br />
* To use a peer as a DNS server, specify its WireGuard tunnel's IP address(es) in the ''.network'' file using the {{ic|1=DNS=}} option. For [[Wikipedia:Search domain|search domains]] use the {{ic|1=Domains=}} option. See {{man|5|systemd.network|[NETWORK] SECTION OPTIONS}} for details.<br />
* To use a peer as the '''only''' DNS server, set {{ic|1=DNSDefaultRoute=true}} in the ''.network'' file's {{ic|[Network]}} section and add {{ic|~.}} to the {{ic|1=Domains=}} option.<br />
* To route additional subnets add them as {{ic|[Route]}} sections in the ''.network'' file. For example:<br />
<br />
{{hc|/etc/systemd/network/99-wg0.network|2=<br />
...<br />
[Route]<br />
Destination=192.168.35.0/24<br />
Scope=link<br />
<br />
[Route]<br />
Destination=fd7b:d0bd:7a6e::/64<br />
Scope=link<br />
}}<br />
<br />
{{Warning|In order to prevent the leaking of private keys, it is recommended to set the permissions of the ''.netdev'' file:<br />
<br />
# chown root:systemd-network /etc/systemd/network/99-*.netdev<br />
# chmod 0640 /etc/systemd/network/99-*.netdev<br />
<br />
}}<br />
<br />
'''Peer B setup:'''<br />
<br />
{{hc|/etc/systemd/network/99-wg0.netdev|2=<br />
[NetDev]<br />
Name=wg0<br />
Kind=wireguard<br />
Description=WireGuard tunnel wg0<br />
<br />
[WireGuard]<br />
ListenPort=51902<br />
PrivateKey=''PEER_B_PRIVATE_KEY''<br />
<br />
[WireGuardPeer]<br />
PublicKey=''PEER_A_PUBLIC_KEY''<br />
PresharedKey=''PEER_A-PEER_B-PRESHARED_KEY''<br />
AllowedIPs=10.0.0.1/32<br />
AllowedIPs=fdc9:281f:04d7:9ee9::1/128<br />
Endpoint=198.51.100.101:51871<br />
<br />
[WireGuardPeer]<br />
PublicKey=''PEER_C_PUBLIC_KEY''<br />
PresharedKey=''PEER_B-PEER_C-PRESHARED_KEY''<br />
AllowedIPs=10.0.0.3/32<br />
AllowedIPs=fdc9:281f:04d7:9ee9::3/128<br />
}}<br />
<br />
{{hc|/etc/systemd/network/99-wg0.network|2=<br />
[Match]<br />
Name=wg0<br />
<br />
[Network]<br />
Address=10.0.0.2/24<br />
Address=fdc9:281f:04d7:9ee9::2/64<br />
}}<br />
<br />
'''Peer C setup:'''<br />
<br />
{{hc|/etc/systemd/network/99-wg0.netdev|2=<br />
[NetDev]<br />
Name=wg0<br />
Kind=wireguard<br />
Description=WireGuard tunnel wg0<br />
<br />
[WireGuard]<br />
ListenPort=51993<br />
PrivateKey=''PEER_C_PRIVATE_KEY''<br />
<br />
[WireGuardPeer]<br />
PublicKey=''PEER_A_PUBLIC_KEY''<br />
PresharedKey=''PEER_A-PEER_C-PRESHARED_KEY''<br />
AllowedIPs=10.0.0.1/32<br />
AllowedIPs=fdc9:281f:04d7:9ee9::1/128<br />
Endpoint=198.51.100.101:51871<br />
<br />
[WireGuardPeer]<br />
PublicKey=''PEER_B_PUBLIC_KEY''<br />
PresharedKey=''PEER_B-PEER_C-PRESHARED_KEY''<br />
AllowedIPs=10.0.0.2/32<br />
AllowedIPs=fdc9:281f:04d7:9ee9::2/128<br />
Endpoint=peer-b.example:51902<br />
}}<br />
<br />
{{hc|/etc/systemd/network/99-wg0.network|2=<br />
[Match]<br />
Name=wg0<br />
<br />
[Network]<br />
Address=10.0.0.3/24<br />
Address=fdc9:281f:04d7:9ee9::3/64<br />
}}<br />
<br />
==== systemd-networkd: routing all traffic over WireGuard ====<br />
<br />
In this example, peer B connects to peer A, which has a public IP address. Peer B routes all its traffic over the WireGuard tunnel and uses peer A for handling DNS requests.<br />
<br />
'''Peer A setup'''<br />
<br />
{{hc|/etc/systemd/network/99-wg0.netdev|2=<br />
[NetDev]<br />
Name=wg0<br />
Kind=wireguard<br />
Description=WireGuard tunnel wg0<br />
<br />
[WireGuard]<br />
ListenPort=51871<br />
PrivateKey=''PEER_A_PRIVATE_KEY''<br />
<br />
[WireGuardPeer]<br />
PublicKey=''PEER_B_PUBLIC_KEY''<br />
PresharedKey=''PEER_A-PEER_B-PRESHARED_KEY''<br />
AllowedIPs=10.0.0.2/32<br />
}}<br />
<br />
{{hc|/etc/systemd/network/99-wg0.network|2=<br />
[Match]<br />
Name=wg0<br />
<br />
[Network]<br />
Address=10.0.0.1/24<br />
}}<br />
<br />
{{Note|You must still enable [[Internet sharing#Enable packet forwarding|IP forwarding]] and IP masquerading rules on peer A in order to provide working internet access to peer B.<br />
<br />
The following assumes [[ufw]], but the same can be achieved with equivalent [[iptables]] rules:<br />
<br />
$ ufw route allow in on wg0 out on enp5s0<br />
<br />
{{hc|/etc/ufw/before.rules|2=<br />
*nat<br />
:POSTROUTING ACCEPT [0:0]<br />
-A POSTROUTING -s 10.0.0.0/24 -o enp5s0 -j MASQUERADE<br />
COMMIT<br />
}}<br />
<br />
}}<br />
<br />
'''Peer B setup:'''<br />
<br />
{{hc|/etc/systemd/network/99-wg0.netdev|2=<br />
[NetDev]<br />
Name=wg0<br />
Kind=wireguard<br />
Description=WireGuard tunnel wg0<br />
<br />
[WireGuard]<br />
ListenPort=51902<br />
PrivateKey=''PEER_B_PRIVATE_KEY''<br />
FirewallMark=0x8888<br />
<br />
[WireGuardPeer]<br />
PublicKey=''PEER_A_PUBLIC_KEY''<br />
PresharedKey=''PEER_A-PEER_B-PRESHARED_KEY''<br />
AllowedIPs=0.0.0.0/0<br />
Endpoint=198.51.100.101:51871<br />
}}<br />
<br />
{{hc|/etc/systemd/network/50-wg0.network|2=<br />
[Match]<br />
Name=wg0<br />
<br />
[Network]<br />
Address=10.0.0.2/24<br />
DNS=10.0.0.1<br />
DNSDefaultRoute=true<br />
Domains=~.<br />
<br />
[RoutingPolicyRule]<br />
FirewallMark=0x8888<br />
InvertRule=true<br />
Table=1000<br />
Priority=10<br />
<br />
[Route]<br />
Gateway=10.0.0.1<br />
GatewayOnLink=true<br />
Table=1000<br />
}}<br />
<br />
==== Netctl ====<br />
<br />
[[Netctl]] has native support for setting up WireGuard interfaces. A typical set of WireGuard netctl profile configuration files would look like this:<br />
<br />
'''Peer A setup:'''<br />
<br />
{{hc|/etc/netctl/wg0|2=<br />
Description="WireGuard tunnel on peer A"<br />
Interface=wg0<br />
Connection=wireguard<br />
WGConfigFile=/etc/wireguard/wg0.conf<br />
<br />
IP=static<br />
Address=('10.0.0.1/24')<br />
}}<br />
<br />
{{hc|/etc/wireguard/wg0.conf|2=<br />
[Interface]<br />
ListenPort = 51871<br />
PrivateKey = ''PEER_A_PRIVATE_KEY''<br />
<br />
[Peer]<br />
PublicKey = ''PEER_B_PUBLIC_KEY''<br />
AllowedIPs = 10.0.0.2/32<br />
Endpoint = peer-b.example:51902<br />
<br />
[Peer]<br />
PublicKey = ''PEER_C_PUBLIC_KEY''<br />
AllowedIPs = 10.0.0.3/32<br />
}}<br />
<br />
'''Peer B setup:'''<br />
<br />
{{hc|/etc/netctl/wg0|2=<br />
Description="WireGuard tunnel on peer B"<br />
Interface=wg0<br />
Connection=wireguard<br />
WGConfigFile=/etc/wireguard/wg0.conf<br />
<br />
IP=static<br />
Address=('10.0.0.2/24')<br />
}}<br />
<br />
{{hc|/etc/wireguard/wg0.conf|2=<br />
[Interface]<br />
ListenPort = 51902<br />
PrivateKey = ''PEER_B_PRIVATE_KEY''<br />
<br />
[Peer]<br />
PublicKey = ''PEER_A_PUBLIC_KEY''<br />
AllowedIPs = 10.0.0.1/32<br />
Endpoint = peer-a.example:51871<br />
<br />
[Peer]<br />
PublicKey = ''PEER_C_PUBLIC_KEY''<br />
AllowedIPs = 10.0.0.3/32<br />
}}<br />
<br />
'''Peer C setup:'''<br />
<br />
{{hc|/etc/netctl/wg0|2=<br />
Description="WireGuard tunnel on peer C"<br />
Interface=wg0<br />
Connection=wireguard<br />
WGConfigFile=/etc/wireguard/wg0.conf<br />
<br />
IP=static<br />
Address=('10.0.0.3/24')<br />
}}<br />
<br />
{{hc|/etc/wireguard/wg0.conf|2=<br />
[Interface]<br />
ListenPort = 51993<br />
PrivateKey = ''PEER_C_PRIVATE_KEY''<br />
<br />
[Peer]<br />
PublicKey = ''PEER_A_PUBLIC_KEY''<br />
AllowedIPs = 10.0.0.1/32<br />
Endpoint = peer-a.example:51871<br />
<br />
[Peer]<br />
PublicKey = ''PEER_B_PUBLIC_KEY''<br />
AllowedIPs = 10.0.0.2/32<br />
Endpoint = peer-b.example:51902<br />
}}<br />
<br />
Then start and/or enable the wg0 interface on every participating peer as needed, e.g.:<br />
<br />
# netctl start wg0<br />
<br />
To implement a persistent site-to-peer, peer-to-site or site-to-site connection with WireGuard and netctl, add an appropriate {{ic|1=Routes=}} line to the netctl profile and add that network to {{ic|AllowedIPs}} in the WireGuard configuration, e.g. {{ic|1=Routes=('192.168.10.0/24 dev wg0')}} in {{ic|/etc/netctl/wg0}} and {{ic|1=AllowedIPs = 10.0.0.1/32, 192.168.10.0/24}} in {{ic|/etc/wireguard/wg0.conf}}. Do not forget to enable [[Internet sharing#Enable packet forwarding|IP forwarding]].<br />
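For example, on peer B, routing a hypothetical remote network {{ic|192.168.10.0/24}} that sits behind peer A might look like this (a sketch; adapt networks and peers to your setup):<br />
<br />
```
# Addition to /etc/netctl/wg0 on peer B: route the remote network via the tunnel
Routes=('192.168.10.0/24 dev wg0')

# Addition to peer A's [Peer] section in /etc/wireguard/wg0.conf on peer B:
# accept the remote network from that peer in addition to its tunnel address
AllowedIPs = 10.0.0.1/32, 192.168.10.0/24
```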
<br />
==== NetworkManager ====<br />
<br />
[[NetworkManager]] has native support for setting up WireGuard interfaces. For all details about WireGuard usage in NetworkManager, read Thomas Haller's blog post—[https://blogs.gnome.org/thaller/2019/03/15/wireguard-in-networkmanager/ WireGuard in NetworkManager].<br />
<br />
{{Tip|NetworkManager can import a wg-quick configuration file. E.g.: {{bc|# nmcli connection import type wireguard file /etc/wireguard/wg0.conf}}}}<br />
<br />
{{Note|nmcli can create a WireGuard connection profile, but it does not support configuring peers. See [https://gitlab.freedesktop.org/NetworkManager/NetworkManager/issues/358 NetworkManager issue 358].}}<br />
<br />
The following examples configure WireGuard via the keyfile format ''.nmconnection'' files. See {{man|5|nm-settings-keyfile}} and {{man|5|nm-settings}} for an explanation on the syntax and available options.<br />
<br />
'''Peer A setup:'''<br />
<br />
{{hc|/etc/NetworkManager/system-connections/wg0.nmconnection|2=<br />
[connection]<br />
id=wg0<br />
type=wireguard<br />
interface-name=wg0<br />
<br />
[wireguard]<br />
listen-port=51871<br />
private-key=''PEER_A_PRIVATE_KEY''<br />
private-key-flags=0<br />
<br />
[wireguard-peer.''PEER_B_PUBLIC_KEY'']<br />
endpoint=peer-b.example:51902<br />
preshared-key=''PEER_A-PEER_B-PRESHARED_KEY''<br />
preshared-key-flags=0<br />
allowed-ips=10.0.0.2/32;fdc9:281f:04d7:9ee9::2/128;<br />
<br />
[wireguard-peer.''PEER_C_PUBLIC_KEY'']<br />
preshared-key=''PEER_A-PEER_C-PRESHARED_KEY''<br />
preshared-key-flags=0<br />
allowed-ips=10.0.0.3/32;fdc9:281f:04d7:9ee9::3/128;<br />
<br />
[ipv4]<br />
address1=10.0.0.1/24<br />
method=manual<br />
<br />
[ipv6]<br />
address1=fdc9:281f:04d7:9ee9::1/64<br />
method=manual<br />
}}<br />
<br />
* To ''route all traffic'' through the tunnel to a specific peer, add the [[Wikipedia:Default route|default route]] ({{ic|0.0.0.0/0}} for IPv4 and {{ic|::/0}} for IPv6) to {{ic|wireguard-peer.''PEER_X_PUBLIC_KEY''.allowed-ips}}. E.g. {{ic|1=wireguard-peer.''PEER_B_PUBLIC_KEY''.allowed-ips=0.0.0.0/0;::/0;}}. Special handling of the default route in WireGuard connections is supported since NetworkManager 1.20.0.<br />
* To use a peer as a DNS server, specify its WireGuard tunnel's IP address(es) with the {{ic|ipv4.dns}} and {{ic|ipv6.dns}} settings. [[Wikipedia:Search domain|Search domains]] can be specified with the {{ic|1=ipv4.dns-search=}} and {{ic|1=ipv6.dns-search=}} options. See {{man|5|nm-settings}} for more details. For example, using the keyfile format:<br />
<br />
{{bc|1=<br />
...<br />
[ipv4]<br />
...<br />
dns=10.0.0.2;<br />
dns-search=corp;<br />
...<br />
[ipv6]<br />
...<br />
dns=fdc9:281f:04d7:9ee9::2;<br />
dns-search=corp;<br />
...<br />
}}<br />
<br />
To use a peer as the '''only''' DNS server, set a negative DNS priority (e.g. {{ic|1=dns-priority=-1}}) and add {{ic|~.}} to the {{ic|1=dns-search=}} settings.<br />
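Putting the options above together, the relevant keyfile fragment might look like this (a sketch; the peer's tunnel address {{ic|10.0.0.2}} is taken from the earlier example):<br />
<br />
```ini
[ipv4]
dns=10.0.0.2;
; negative priority makes this the exclusive DNS configuration
dns-priority=-1
; the ~. routing-only domain sends queries for all domains to this server
dns-search=~.;
```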
<br />
'''Peer B setup:'''<br />
<br />
{{hc|/etc/NetworkManager/system-connections/wg0.nmconnection|2=<br />
[connection]<br />
id=wg0<br />
type=wireguard<br />
interface-name=wg0<br />
<br />
[wireguard]<br />
listen-port=51902<br />
private-key=''PEER_B_PRIVATE_KEY''<br />
private-key-flags=0<br />
<br />
[wireguard-peer.''PEER_A_PUBLIC_KEY'']<br />
endpoint=198.51.100.101:51871<br />
preshared-key=''PEER_A-PEER_B-PRESHARED_KEY''<br />
preshared-key-flags=0<br />
allowed-ips=10.0.0.1/32;fdc9:281f:04d7:9ee9::1/128;<br />
<br />
[wireguard-peer.''PEER_C_PUBLIC_KEY'']<br />
preshared-key=''PEER_B-PEER_C-PRESHARED_KEY''<br />
preshared-key-flags=0<br />
allowed-ips=10.0.0.3/32;fdc9:281f:04d7:9ee9::3/128;<br />
<br />
[ipv4]<br />
address1=10.0.0.2/24<br />
method=manual<br />
<br />
[ipv6]<br />
address1=fdc9:281f:04d7:9ee9::2/64<br />
method=manual<br />
}}<br />
<br />
'''Peer C setup:'''<br />
<br />
{{hc|/etc/NetworkManager/system-connections/wg0.nmconnection|2=<br />
[connection]<br />
id=wg0<br />
type=wireguard<br />
interface-name=wg0<br />
<br />
[wireguard]<br />
listen-port=51993<br />
private-key=''PEER_C_PRIVATE_KEY''<br />
private-key-flags=0<br />
<br />
[wireguard-peer.''PEER_A_PUBLIC_KEY'']<br />
endpoint=198.51.100.101:51871<br />
preshared-key=''PEER_A-PEER_C-PRESHARED_KEY''<br />
preshared-key-flags=0<br />
allowed-ips=10.0.0.1/32;fdc9:281f:04d7:9ee9::1/128;<br />
<br />
[wireguard-peer.''PEER_B_PUBLIC_KEY'']<br />
endpoint=peer-b.example:51902<br />
preshared-key=''PEER_B-PEER_C-PRESHARED_KEY''<br />
preshared-key-flags=0<br />
allowed-ips=10.0.0.2/32;fdc9:281f:04d7:9ee9::2/128;<br />
<br />
[ipv4]<br />
address1=10.0.0.3/24<br />
method=manual<br />
<br />
[ipv6]<br />
address1=fdc9:281f:04d7:9ee9::3/64<br />
method=manual<br />
}}<br />
<br />
== Specific use-case: VPN server ==<br />
<br />
{{Merge|#Routing all traffic over WireGuard|Same use case.}}<br />
<br />
{{Note|The terms "server" and "client" were purposefully chosen in this section to help new users and existing OpenVPN users become familiar with the construction of WireGuard's configuration files. The WireGuard documentation simply refers to both of these concepts as "peers."}}<br />
<br />
The purpose of this section is to set up a WireGuard "server" and generic "clients" to enable access to server/network resources through an encrypted and secured tunnel, like [[OpenVPN]] and others. The "server" runs on Linux, while the "clients" can run on many platforms (the WireGuard project offers apps for iOS and Android in addition to Linux, Windows and macOS). See the official [https://www.wireguard.com/install/ installation page] for more.<br />
<br />
{{Tip|Instead of using {{pkg|wireguard-tools}} for server/client configuration, one may also use [[#systemd-networkd|systemd-networkd]] native WireGuard support.}}<br />
<br />
=== Server ===<br />
<br />
{{Merge|#Site-to-point|Same use case.}}<br />
<br />
On the peer that will act as the "server", first enable IPv4 forwarding using [[sysctl]]:<br />
<br />
# sysctl -w net.ipv4.ip_forward=1<br />
<br />
To make the change permanent, add {{ic|1=net.ipv4.ip_forward = 1}} to {{ic|/etc/sysctl.d/99-sysctl.conf}}.<br />
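The drop-in file would contain just that single line:<br />
<br />
```ini
# /etc/sysctl.d/99-sysctl.conf: enable IPv4 packet forwarding at boot
net.ipv4.ip_forward = 1
```
After creating it, running {{ic|sysctl --system}} (or rebooting) applies it.<br />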
<br />
A properly configured [[firewall]] is ''HIGHLY recommended'' for any Internet-facing device.<br />
<br />
If the server has a public IP configured, be sure to:<br />
<br />
* Allow UDP traffic on the specified port(s) on which WireGuard will be running (for example allowing traffic on {{ic|51820/UDP}}).<br />
* Set up the forwarding policy for the firewall if it is not included in the WireGuard configuration for the interface itself ({{ic|/etc/wireguard/wg0.conf}}). The example below already includes the necessary iptables rules and should work as-is.<br />
<br />
If the server is behind NAT, be sure to forward the specified port(s) on which WireGuard will be running (for example, {{ic|51820/UDP}}) from the router to the WireGuard server.<br />
<br />
=== Key generation ===<br />
<br />
Generate key pairs for the server and for each client as explained in [[#Key generation]].<br />
<br />
=== Server config ===<br />
<br />
Create the "server" config file:<br />
<br />
{{hc|/etc/wireguard/wg0.conf|2=<br />
[Interface]<br />
Address = 10.200.200.1/24<br />
ListenPort = 51820<br />
PrivateKey = ''SERVER_PRIVATE_KEY''<br />
<br />
# substitute ''eth0'' in the following lines to match the Internet-facing interface<br />
# if the server is behind a router and receives traffic via NAT, these iptables rules are not needed<br />
PostUp = iptables -A FORWARD -i %i -j ACCEPT; iptables -A FORWARD -o %i -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE<br />
PostDown = iptables -D FORWARD -i %i -j ACCEPT; iptables -D FORWARD -o %i -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE<br />
<br />
[Peer]<br />
# foo<br />
PublicKey = ''PEER_FOO_PUBLIC_KEY''<br />
PresharedKey = ''PRE-SHARED_KEY''<br />
AllowedIPs = 10.200.200.2/32<br />
<br />
[Peer]<br />
# bar<br />
PublicKey = ''PEER_BAR_PUBLIC_KEY''<br />
PresharedKey = ''PRE-SHARED_KEY''<br />
AllowedIPs = 10.200.200.3/32<br />
}}<br />
<br />
Additional peers ("clients") can be listed in the same format as needed. Each peer requires the {{ic|PublicKey}} to be set. However, specifying {{ic|PresharedKey}} is optional.<br />
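A peer can also be added to a running interface without restarting it, using the same ''wg'' tool shown elsewhere in this article (a sketch; the public key and tunnel address are placeholders, and changes made this way are not persisted to {{ic|wg0.conf}}):<br />
<br />
```
# Add a peer at runtime (run as root); CLIENT_PUBLIC_KEY is a placeholder
wg set wg0 peer CLIENT_PUBLIC_KEY allowed-ips 10.200.200.4/32
```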
<br />
Notice that the server's {{ic|Address}} has a {{ic|/24}} netmask while each client's {{ic|AllowedIPs}} entry uses {{ic|/32}}: each client only uses its own IP, and the server only routes each client's respective address back to it.<br />
<br />
The interface can be managed manually using {{man|8|wg-quick}} or using a [[systemd]] service managed via {{man|1|systemctl}}.<br />
<br />
The interface may be brought up with {{ic|wg-quick up wg0}}, or by [[start|starting]] and potentially [[enable|enabling]] {{ic|wg-quick@''interface''.service}}, e.g. {{ic|wg-quick@wg0.service}}. To bring the interface down, use {{ic|wg-quick down wg0}} or [[stop]] {{ic|wg-quick@''interface''.service}}.<br />
<br />
=== Client config ===<br />
<br />
Create the corresponding "client" config file(s):<br />
<br />
{{hc|foo.conf|2=<br />
[Interface]<br />
Address = 10.200.200.2/32<br />
PrivateKey = ''PEER_FOO_PRIVATE_KEY''<br />
DNS = 10.200.200.1<br />
<br />
[Peer]<br />
PublicKey = ''SERVER_PUBLICKEY''<br />
PresharedKey = ''PRE-SHARED_KEY''<br />
Endpoint = my.ddns.example.com:51820<br />
AllowedIPs = 0.0.0.0/0, ::/0<br />
}}<br />
<br />
{{hc|bar.conf|2=<br />
[Interface]<br />
Address = 10.200.200.3/32<br />
PrivateKey = ''PEER_BAR_PRIVATE_KEY''<br />
DNS = 10.200.200.1<br />
<br />
[Peer]<br />
PublicKey = ''SERVER_PUBLICKEY''<br />
PresharedKey = ''PRE-SHARED KEY''<br />
Endpoint = my.ddns.example.com:51820<br />
AllowedIPs = 0.0.0.0/0, ::/0<br />
}}<br />
<br />
Using the catch-all {{ic|1=AllowedIPs = 0.0.0.0/0, ::/0}} will forward all IPv4 ({{ic|0.0.0.0/0}}) and IPv6 ({{ic|::/0}}) traffic over the VPN.<br />
<br />
{{Note|Users of [[NetworkManager]] may need to [[enable]] {{ic|NetworkManager-wait-online.service}}, and users of [[systemd-networkd]] may need to [[enable]] {{ic|systemd-networkd-wait-online.service}}, to wait until devices are network-ready before attempting a WireGuard connection.}}<br />
<br />
== Testing the tunnel ==<br />
<br />
{{Merge|#Basic checkups|Same topic.}}<br />
<br />
Once a tunnel has been established, one can use [[netcat]] to send traffic through it to test out throughput, CPU usage, etc.<br />
On one side of the tunnel, run {{ic|nc}} in listen mode and on the other side, pipe some data from {{ic|/dev/zero}} into {{ic|nc}} in sending mode.<br />
<br />
In the example below, port 2222 is used for the traffic (be sure to allow traffic on port 2222 if using a firewall).<br />
<br />
On one side of the tunnel listen for traffic:<br />
<br />
$ nc -vvlnp 2222<br />
<br />
On the other side of the tunnel, send some traffic:<br />
<br />
$ dd if=/dev/zero bs=1024K count=1024 | nc -v 10.0.0.203 2222<br />
<br />
Status can be monitored using {{ic|wg}} directly.<br />
<br />
{{hc|# wg|2=<br />
interface: wg0<br />
public key: UguPyBThx/+xMXeTbRYkKlP0Wh/QZT3vTLPOVaaXTD8=<br />
private key: (hidden)<br />
listening port: 51820<br />
<br />
peer: 9jalV3EEBnVXahro0pRMQ+cHlmjE33Slo9tddzCVtCw=<br />
preshared key: (hidden)<br />
endpoint: 192.168.1.216:53207<br />
allowed ips: 10.0.0.0/24<br />
latest handshake: 1 minute, 17 seconds ago<br />
transfer: 56.43 GiB received, 1.06 TiB sent<br />
}}<br />
<br />
== Tips and tricks ==<br />
<br />
=== Store private keys in encrypted form ===<br />
<br />
It may be desirable to store private keys in encrypted form, for example using {{Pkg|pass}}. Replace the {{ic|PrivateKey}} line under {{ic|[Interface]}} in the configuration file with:<br />
<br />
PostUp = wg set %i private-key <(su user -c "export PASSWORD_STORE_DIR=/path/to/your/store/; pass WireGuard/private-keys/%i")<br />
<br />
where ''user'' is the Linux username of interest. See the {{man|8|wg-quick}} man page for more details.<br />
<br />
=== Endpoint with changing IP ===<br />
<br />
After resolving a server's domain, WireGuard [https://lists.zx2c4.com/pipermail/wireguard/2017-November/002028.html will not check for changes in DNS again].<br />
<br />
If the WireGuard server frequently changes its IP address due to DHCP, dynamic DNS, IPv6, etc., any WireGuard client will lose its connection until its endpoint is updated, e.g. via {{ic|wg set "$INTERFACE" peer "$PUBLIC_KEY" endpoint "$ENDPOINT"}}.<br />
<br />
Also be aware that if the endpoint ever changes its address (for example, when moving to a new provider or datacenter), updating DNS alone will not be enough, so periodically running ''reresolve-dns'' makes sense on any DNS-based setup.<br />
<br />
Luckily, {{Pkg|wireguard-tools}} provides an example script, {{ic|/usr/share/wireguard-tools/examples/reresolve-dns/reresolve-dns.sh}}, which parses WireGuard configuration files and automatically updates the endpoint address.<br />
<br />
Run {{ic|/usr/share/wireguard-tools/examples/reresolve-dns/reresolve-dns.sh /etc/wireguard/wg0.conf}} periodically to recover from an endpoint that has changed its IP address.<br />
<br />
One way of doing so is by updating all WireGuard endpoints once every thirty seconds[https://git.zx2c4.com/WireGuard/tree/contrib/examples/reresolve-dns/README] via a systemd timer:<br />
<br />
{{hc|/etc/systemd/system/wireguard_reresolve-dns.timer|2=<br />
[Unit]<br />
Description=Periodically reresolve DNS of all WireGuard endpoints<br />
<br />
[Timer]<br />
OnCalendar=*:*:0/30<br />
<br />
[Install]<br />
WantedBy=timers.target<br />
}}<br />
<br />
{{hc|/etc/systemd/system/wireguard_reresolve-dns.service|2=<br />
[Unit]<br />
Description=Reresolve DNS of all WireGuard endpoints<br />
Wants=network-online.target<br />
After=network-online.target<br />
<br />
[Service]<br />
Type=oneshot<br />
ExecStart=/bin/sh -c 'for i in /etc/wireguard/*.conf; do /usr/share/wireguard-tools/examples/reresolve-dns/reresolve-dns.sh "$i"; done'<br />
}}<br />
<br />
Afterwards, [[enable]] and [[start]] {{ic|wireguard_reresolve-dns.timer}}.<br />
<br />
=== Generate QR code ===<br />
<br />
If the client is a mobile device such as a phone, {{Pkg|qrencode}} can be used to generate the client's configuration as a QR code and display it in the terminal:<br />
<br />
$ qrencode -t ansiutf8 -r ''client.conf''<br />
<br />
=== Enable debug logs ===<br />
<br />
When using the Linux kernel module on a kernel that supports dynamic debugging, debugging information can be written into the kernel ring buffer (viewable with [[dmesg]] and [[journalctl]]) by running:<br />
<br />
# modprobe wireguard<br />
# echo module wireguard +p > /sys/kernel/debug/dynamic_debug/control<br />
<br />
=== Reload peer (server) configuration ===<br />
<br />
If a WireGuard peer (usually a server) adds or removes peers in its configuration file and wants to reload the configuration without stopping any active sessions, execute the following command:<br />
<br />
# wg syncconf ${WGNET} <(wg-quick strip ${WGNET})<br />
<br />
where {{ic|$WGNET}} is the WireGuard interface name or the configuration base name, for example {{ic|wg0}} (for the server) or {{ic|client}} (without the ''.conf'' extension, for a client).<br />
<br />
{{Expansion|Show how to do this with other network managers from [[#Persistent configuration]].}}<br />
<br />
== Troubleshooting ==<br />
<br />
=== Routes are periodically reset ===<br />
<br />
Users of [[NetworkManager]] should make sure that it [[NetworkManager#Ignore specific devices|is not managing]] the WireGuard interface(s). For example, create the following configuration file:<br />
<br />
{{hc|/etc/NetworkManager/conf.d/unmanaged.conf|2=<br />
[keyfile]<br />
unmanaged-devices=type:wireguard<br />
}}<br />
<br />
=== Broken DNS resolution ===<br />
<br />
When tunneling all traffic through a WireGuard interface, the connection can become seemingly lost after a while or upon new connection. This could be caused by a [[network manager]] or [[DHCP]] client overwriting {{ic|/etc/resolv.conf}}.<br />
<br />
By default ''wg-quick'' uses ''resolvconf'' to register new [[DNS]] entries (from the {{ic|DNS}} keyword in the configuration file). This will cause issues with [[network manager]]s and [[DHCP]] clients that do not use ''resolvconf'', as they will overwrite {{ic|/etc/resolv.conf}} thus removing the DNS servers added by wg-quick.<br />
<br />
The solution is to use networking software that supports [[resolvconf]].<br />
<br />
{{Note|Users of [[systemd-resolved]] should make sure that {{Pkg|systemd-resolvconf}} is [[install]]ed.}}<br />
<br />
Users of [[NetworkManager]] should know that it does not use resolvconf by default. It is recommended to use [[systemd-resolved]]. If this is undesirable, [[install]] {{Pkg|openresolv}} and configure NetworkManager to use it: [[NetworkManager#Use openresolv]].<br />
<br />
=== Low MTU ===<br />
<br />
If the MTU is too low (lower than 1280), ''wg-quick'' may fail to create the WireGuard interface. This can be solved by setting the MTU value in the {{ic|[Interface]}} section of the client's WireGuard configuration:<br />
{{hc|foo.conf|2=<br />
[Interface]<br />
Address = 10.200.200.2/24<br />
MTU = 1420<br />
PrivateKey = ''PEER_FOO_PRIVATE_KEY''<br />
DNS = 10.200.200.1<br />
}}<br />
<br />
=== Key is not the correct length or format ===<br />
<br />
To avoid the following error, put the actual key value in the configuration file, not the path to a key file.<br />
<br />
{{hc|# wg-quick up wg0|<br />
[#] ip link add wg0 type wireguard<br />
[#] wg setconf wg0 /dev/fd/63<br />
Key is not the correct length or format: `''/path/example.key'''<br />
Configuration parsing error<br />
[#] ip link delete dev wg0<br />
}}<br />
<br />
=== Unable to establish a persistent connection behind NAT / firewall ===<br />
<br />
By default, WireGuard peers remain silent while they do not need to communicate, so peers located behind a NAT and/or [[firewall]] may be unreachable from other peers until they reach out to other peers themselves (or the connection may time out). Adding {{ic|1=PersistentKeepalive = 25}} to the {{ic|[Peer]}} settings of a peer located behind a NAT and/or firewall can ensure that the connection remains open.<br />
<br />
{{hc|# Set the persistent-keepalive via command line (temporarily)|<br />
[#] wg set wg0 peer $PUBKEY persistent-keepalive 25<br />
}}<br />
<br />
=== Loop routing ===<br />
<br />
When the endpoint's IP address is included in the {{ic|AllowedIPs}} list, the kernel will attempt to route handshake packets to the endpoint through the WireGuard interface itself, rather than via the original route. This results in failed handshake attempts.<br />
<br />
As a workaround, the correct route to the endpoint needs to be manually added using<br />
<br />
ip route add <endpoint ip> via <gateway> dev <network interface><br />
<br />
e.g. for peer B from above in a standard LAN setup:<br />
<br />
ip route add 203.0.113.102 via 192.168.0.1 dev eth0<br />
<br />
To make this route persistent, the command can be added as {{ic|1=PostUp = ip route ...}} to the {{ic|[Interface]}} section of {{ic|wg0.conf}}. However, on certain setups (e.g. using {{ic|wg-quick@.service}} in combination with NetworkManager) this might fail on resume. Furthermore, this only works for a static network setup and fails if gateways or devices change (e.g. using ethernet or wifi on a laptop).<br />
<br />
Using NetworkManager, a more flexible solution is to start WireGuard using a dispatcher script. As root, create<br />
{{hc|/etc/NetworkManager/dispatcher.d/50-wg0.sh|2=<br />
#!/bin/sh<br />
case $2 in<br />
up)<br />
wg-quick up wg0<br />
ip route add <endpoint ip> via $IP4_GATEWAY dev $DEVICE_IP_IFACE<br />
;;<br />
pre-down)<br />
wg-quick down wg0<br />
;;<br />
esac<br />
}}<br />
If not already running, start and enable {{ic|NetworkManager-dispatcher.service}}.<br />
Also, make sure that NetworkManager is not managing routes for {{ic|wg0}} ([[#Routes are periodically reset|see above]]).<br />
<br />
== See also ==<br />
<br />
* [[Wikipedia:WireGuard]]<br />
* [https://www.wireguard.com/presentations/ Presentations by Jason Donenfeld].<br />
* [https://lists.zx2c4.com/mailman/listinfo/wireguard Mailing list]<br />
* [https://docs.sweeting.me/s/wireguard Unofficial WireGuard Documentation]<br />
* [[Debian:Wireguard]]</div>Cmsiglerhttps://wiki.archlinux.org/index.php?title=Pacman/Tips_and_tricks&diff=655240Pacman/Tips and tricks2021-03-17T15:21:53Z<p>Cmsigler: Add overlay mount of read-only cache method</p>
<hr />
<div>{{Lowercase title}}<br />
[[Category:Package manager]]<br />
[[de:Pacman-Tipps]]<br />
[[es:Pacman (Español)/Tips and tricks]]<br />
[[fa:Pacman tips]]<br />
[[fr:Pacman/Trucs et Astuces]]<br />
[[it:Pacman (Italiano)/Tips and tricks]]<br />
[[ja:Pacman ヒント]]<br />
[[pt:Pacman (Português)/Tips and tricks]]<br />
[[ru:Pacman (Русский)/Tips and tricks]]<br />
[[zh-hans:Pacman (简体中文)/Tips and tricks]]<br />
{{Related articles start}}<br />
{{Related|Mirrors}}<br />
{{Related|Creating packages}}<br />
{{Related articles end}}<br />
For general methods to improve the flexibility of the provided tips or ''pacman'' itself, see [[Core utilities]] and [[Bash]].<br />
<br />
== Maintenance ==<br />
<br />
{{Expansion|{{ic|1=Usage=}} introduced with pacman 4.2, see [http://allanmcrae.com/2014/12/pacman-4-2-released/]}}<br />
<br />
{{Note|Instead of using ''comm'' (which requires sorted input with ''sort'') in the sections below, you may also use {{ic|grep -Fxf}} or {{ic|grep -Fxvf}}.}}<br />
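The equivalence between the ''comm'' and ''grep'' forms can be sketched with hypothetical package lists:<br />
<br />
```shell
# Hypothetical package lists; comm(1) requires sorted input
printf '%s\n' bash linux vim | sort > all.txt
printf '%s\n' bash linux | sort > subset.txt

# Packages only in all.txt (suppress lines unique to subset.txt and common lines)
comm -23 all.txt subset.txt
# Equivalent grep form, which does not require sorted input
grep -Fxvf subset.txt all.txt
```
Both commands print {{ic|vim}}, the only package not in the second list.<br />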
<br />
See also [[System maintenance]].<br />
<br />
=== Listing packages ===<br />
<br />
==== With version ====<br />
<br />
You may want to get the list of installed packages with their version, which is useful when reporting bugs or discussing installed packages.<br />
<br />
* List all explicitly installed packages: {{ic|pacman -Qe}}.<br />
* List all packages in the [[package group]] named {{ic|''group''}}: {{ic|pacman -Sg ''group''}}<br />
* List all foreign packages (typically manually downloaded and installed or packages removed from the repositories): {{ic|pacman -Qm}}.<br />
* List all native packages (installed from the sync database(s)): {{ic|pacman -Qn}}.<br />
* List all explicitly installed native packages (i.e. present in the sync database) that are not direct or optional dependencies: {{ic|pacman -Qent}}.<br />
* List packages by regex: {{ic|pacman -Qs ''regex''}}.<br />
* List packages by regex with custom output format (needs {{Pkg|expac}}): {{ic|expac -s "%-30n %v" ''regex''}}.<br />
<br />
==== With size ====<br />
<br />
Figuring out which packages are largest can be useful when trying to free space on your hard drive. There are two options here: get the size of individual packages, or get the size of packages and their dependencies.<br />
<br />
===== Individual packages =====<br />
<br />
The following command will list all installed packages and their individual sizes:<br />
<br />
$ LC_ALL=C pacman -Qi | awk '/^Name/{name=$3} /^Installed Size/{print $4$5, name}' | sort -h<br />
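The ''awk'' step remembers the package name from each {{ic|Name}} line and prints it after the size from the following {{ic|Installed Size}} line; its behavior can be sketched on a fabricated two-package excerpt of {{ic|pacman -Qi}} output (hypothetical names and sizes):<br />
<br />
```shell
# Fabricated excerpt of `pacman -Qi` output (hypothetical packages and sizes)
printf 'Name            : foo\nInstalled Size  : 10.00 MiB\nName            : bar\nInstalled Size  : 2.00 MiB\n' |
  awk '/^Name/{name=$3} /^Installed Size/{print $4$5, name}' |
  sort -h
# prints:
# 2.00MiB bar
# 10.00MiB foo
```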
<br />
===== Packages and dependencies =====<br />
<br />
To list package sizes with their dependencies,<br />
<br />
* Install {{Pkg|expac}} and run {{ic|<nowiki>expac -H M '%m\t%n' | sort -h</nowiki>}}.<br />
* Run {{Pkg|pacgraph}} with the {{ic|-c}} option.<br />
<br />
To list the download size of several packages (leave {{ic|''packages''}} blank to list all packages):<br />
<br />
$ expac -S -H M '%k\t%n' ''packages''<br />
<br />
To list explicitly installed packages not in the [[meta package]] {{Pkg|base}} nor [[package group]] {{Grp|base-devel}} with size and description:<br />
<br />
$ expac -H M "%011m\t%-20n\t%10d" $(comm -23 <(pacman -Qqen | sort) <({ pacman -Qqg base-devel; expac -l '\n' '%E' base; } | sort | uniq)) | sort -n<br />
<br />
To list the packages marked for upgrade with their download size:<br />
<br />
$ expac -S -H M '%k\t%n' $(pacman -Qqu) | sort -sh<br />
<br />
==== By date ====<br />
<br />
To list the 20 last installed packages with {{Pkg|expac}}, run:<br />
<br />
$ expac --timefmt='%Y-%m-%d %T' '%l\t%n' | sort | tail -n 20<br />
<br />
or, with seconds since the epoch (1970-01-01 UTC):<br />
<br />
$ expac --timefmt=%s '%l\t%n' | sort -n | tail -n 20<br />
<br />
==== Not in a specified group, repository or meta package ====<br />
<br />
{{Note|To get a list of packages installed as dependencies but no longer required by any installed package, see [[#Removing unused packages (orphans)]].<br />
}}<br />
<br />
List explicitly installed packages not in the {{Pkg|base}} [[meta package]]:<br />
<br />
$ comm -23 <(pacman -Qqe | sort) <(expac -l '\n' '%E' base | sort)<br />
<br />
List explicitly installed packages not in the {{Pkg|base}} meta package or {{Grp|base-devel}} [[package group]]:<br />
<br />
$ comm -23 <(pacman -Qqe | sort) <({ pacman -Qqg base-devel; expac -l '\n' '%E' base; } | sort -u)<br />
<br />
List all installed packages unrequired by other packages, and which are not in the {{Pkg|base}} meta package or {{Grp|base-devel}} package group:<br />
<br />
$ comm -23 <(pacman -Qqt | sort) <({ pacman -Qqg base-devel; expac -l '\n' '%E' base; } | sort -u)<br />
<br />
As above, but with descriptions:<br />
<br />
$ expac -H M '%-20n\t%10d' $(comm -23 <(pacman -Qqt | sort) <({ pacman -Qqg base-devel; expac -l '\n' '%E' base; } | sort -u))<br />
<br />
List all installed packages that are ''not'' in the specified repository ''repo_name'':<br />
<br />
$ comm -23 <(pacman -Qq | sort) <(pacman -Sql ''repo_name'' | sort)<br />
<br />
List all installed packages that are in the ''repo_name'' repository:<br />
<br />
$ comm -12 <(pacman -Qq | sort) <(pacman -Sql ''repo_name'' | sort)<br />
<br />
List all packages on the Arch Linux ISO that are not in the {{Pkg|base}} meta package:<br />
<br />
<nowiki>$ comm -23 <(curl https://gitlab.archlinux.org/archlinux/archiso/-/raw/master/configs/releng/packages.x86_64) <(expac -l '\n' '%E' base | sort)</nowiki><br />
<br />
==== Development packages ====<br />
<br />
To list all development/unstable packages, run:<br />
<br />
$ pacman -Qq | grep -Ee '-(bzr|cvs|darcs|git|hg|svn)$'<br />
<br />
=== Browsing packages ===<br />
<br />
To browse all installed packages with an instant preview of each package:<br />
<br />
$ pacman -Qq | fzf --preview 'pacman -Qil {}' --layout=reverse --bind 'enter:execute(pacman -Qil {} | less)'<br />
<br />
This uses [[fzf]] to present a two-pane view listing all packages with package info shown on the right.<br />
<br />
Enter letters to filter the list of packages; use arrow keys (or {{ic|Ctrl-j}}/{{ic|Ctrl-k}}) to navigate; press {{ic|Enter}} to see package info under ''less''.<br />
<br />
To browse all packages currently known to pacman (both installed and not yet installed) in a similar way, using fzf, use:<br />
<br />
 $ pacman -Slq | fzf --preview 'pacman -Si {}' --layout=reverse<br />
<br />
The navigational keybindings are the same, although Enter will not work in the same way.<br />
<br />
=== Listing files owned by a package with size ===<br />
<br />
This one might come in handy if you have found that a specific package uses a huge amount of space and you want to find out which files make up the most of that.<br />
<br />
$ pacman -Qlq ''package'' | grep -v '/$' | xargs -r du -h | sort -h<br />
<br />
=== Identify files not owned by any package ===<br />
<br />
If your system has stray files not owned by any package (a common case if you do not [[Enhance system stability#Use the package manager to install software|use the package manager to install software]]), you may want to find such files in order to clean them up.<br />
<br />
One method is to use {{ic|pacreport --unowned-files}} as the root user from {{Pkg|pacutils}} which will list unowned files among other details.<br />
<br />
Another is to list all files of interest and check them against pacman:<br />
<br />
# find /etc /usr /opt /var | LC_ALL=C pacman -Qqo - 2>&1 >&- >/dev/null | cut -d ' ' -f 5-<br />
<br />
{{Tip|The {{Pkg|lostfiles}} script performs similar steps, but also includes an extensive blacklist to remove common false positives from the output.}}<br />
<br />
=== Tracking unowned files created by packages ===<br />
<br />
Most systems will slowly collect several [http://ftp.rpm.org/max-rpm/s1-rpm-inside-files-list-directives.html#S3-RPM-INSIDE-FLIST-GHOST-DIRECTIVE ghost] files such as state files, logs, indexes, etc. through the course of usual operation.<br />
<br />
{{ic|pacreport}} from {{Pkg|pacutils}} can be used to track these files and their associations via {{ic|/etc/pacreport.conf}} (see {{man|1|pacreport|FILES}}).<br />
<br />
An example may look something like this (abridged):<br />
<br />
{{hc|/etc/pacreport.conf|<nowiki><br />
[Options]<br />
IgnoreUnowned = usr/share/applications/mimeinfo.cache<br />
<br />
[PkgIgnoreUnowned]<br />
alsa-utils = var/lib/alsa/asound.state<br />
bluez = var/lib/bluetooth<br />
ca-certificates = etc/ca-certificates/trust-source/*<br />
dbus = var/lib/dbus/machine-id<br />
glibc = etc/ld.so.cache<br />
grub = boot/grub/*<br />
linux = boot/initramfs-linux.img<br />
pacman = var/lib/pacman/local<br />
update-mime-database = usr/share/mime/magic<br />
</nowiki>}}<br />
<br />
Then, when using {{ic|pacreport --unowned-files}} as the root user, any unowned files will be listed if the associated package is no longer installed (or if any new files have been created).<br />
<br />
Additionally, [https://github.com/CyberShadow/aconfmgr aconfmgr] ({{AUR|aconfmgr-git}}) allows tracking modified and orphaned files using a configuration script.<br />
<br />
=== Removing unused packages (orphans) ===<br />
<br />
For recursively removing orphans and their configuration files:<br />
<br />
# pacman -Qtdq | pacman -Rns -<br />
<br />
If no orphans were found, the output is {{ic|error: argument '-' specified with empty stdin}}. This is expected as no arguments were passed to {{ic|pacman -Rns}}.<br />
<br />
{{Note|The arguments {{ic|-Qt}} list only true orphans. To include packages which are ''optionally'' required by another package, pass the {{ic|-t}} flag twice (''i.e.'', {{ic|-Qtt}}).}}<br />
<br />
=== Removing everything but essential packages ===<br />
<br />
If it is ever necessary to remove all packages except the essential ones, one method is to set the installation reason of the non-essential packages to "dependency" and then remove all unnecessary dependencies.<br />
<br />
First, change the installation reason of all explicitly installed packages to "installed as dependency":<br />
<br />
# pacman -D --asdeps $(pacman -Qqe)<br />
<br />
Then, change the installation reason of only the essential packages, those you '''do not''' want to remove, back to "explicitly installed" so that they are not targeted:<br />
<br />
# pacman -D --asexplicit base linux linux-firmware<br />
<br />
{{Note|<br />
* Additional packages can be added to the above command in order to avoid being removed. See [[Installation guide#Install essential packages]] for more info on other packages that may be necessary for a fully functional base system.<br />
* This will also select the bootloader's package for removal. The system should still be bootable, but the boot parameters might not be changeable without it.<br />
}}<br />
<br />
Finally, follow the instructions in [[#Removing unused packages (orphans)]] to remove all packages that have installation reason "as dependency".<br />
<br />
=== Getting the dependencies list of several packages ===<br />
<br />
Dependencies are sorted alphabetically and duplicates are removed.<br />
<br />
{{Note|To only show the dependencies of locally installed packages, use {{ic|pacman -Qi}}.}}<br />
<br />
$ LC_ALL=C pacman -Si ''packages'' | awk -F'[:<=>]' '/^Depends/ {print $2}' | xargs -n1 | sort -u<br />
<br />
Alternatively, with {{Pkg|expac}}: <br />
<br />
$ expac -l '\n' %E -S ''packages'' | sort -u<br />
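The ''awk'' stage above splits each line on {{ic|:}}, {{ic|<}}, {{ic|1==}} and {{ic|>}} so that version constraints are stripped. A minimal sketch of the same pipeline, run against mock {{ic|pacman -Si}} output (package names and field layout are invented for illustration):

```shell
# Mock of two packages' `pacman -Si` output (field layout assumed).
mock_si() {
cat <<'EOF'
Name            : foo
Depends On      : glibc  bash  zlib>=1.2
Name            : bar
Depends On      : glibc  openssl
EOF
}

# Same pipeline as above: split on :<=> to drop version constraints,
# print one dependency per line, then sort and deduplicate.
mock_si | awk -F'[:<=>]' '/^Depends/ {print $2}' | xargs -n1 | sort -u
```

Note that because {{ic|<}}, {{ic|1==}} and {{ic|>}} are also treated as field separators, anything listed after the first version constraint on a line is dropped by this pipeline.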
<br />
=== Listing changed backup files ===<br />
<br />
{{Accuracy|What is the connection of this section to [[System backup]]? Listing modified "backup files" does not show files which are not tracked by pacman.|section=Warning about listing changed backup files}}<br />
<br />
If you want to back up your system configuration files, you could copy all files in {{ic|/etc/}} but usually you are only interested in the files that you have changed. Modified [[Pacnew_and_Pacsave_files#Package_backup_files|backup files]] can be viewed with the following command:<br />
<br />
# pacman -Qii | awk '/^MODIFIED/ {print $2}'<br />
<br />
Running this command with root permissions will ensure that files readable only by root (such as {{ic|/etc/sudoers}}) are included in the output.<br />
<br />
{{Tip|See [[#Listing all changed files from packages]] to list all changed files ''pacman'' knows about, not only backup files.}}<br />
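The filtering step can be seen on mock {{ic|pacman -Qii}} output (file names and layout are illustrative, not real output):

```shell
# Mock excerpt of `pacman -Qii` output (layout assumed).
mock_qii() {
cat <<'EOF'
Name            : sudo
Backup Files    :
MODIFIED        /etc/sudoers
UNMODIFIED      /etc/pam.d/sudo
EOF
}

# Keep only lines starting with MODIFIED and print the file path.
mock_qii | awk '/^MODIFIED/ {print $2}'
```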
<br />
=== Back up the pacman database ===<br />
<br />
The following command can be used to back up the local ''pacman'' database:<br />
<br />
$ tar -cjf pacman_database.tar.bz2 /var/lib/pacman/local<br />
<br />
Store the backup ''pacman'' database file on one or more offline media, such as a USB stick, external hard drive, or CD-R.<br />
<br />
The database can be restored by moving the {{ic|pacman_database.tar.bz2}} file into the {{ic|/}} directory and executing the following command:<br />
<br />
# tar -xjvf pacman_database.tar.bz2<br />
<br />
{{Note|If the ''pacman'' database files are corrupted, and there is no backup file available, there exists some hope of rebuilding the ''pacman'' database. Consult [[#Restore pacman's local database]].}}<br />
<br />
{{Tip|The {{AUR|pakbak-git}} package provides a script and a [[systemd]] service to automate the task. Configuration is possible in {{ic|/etc/pakbak.conf}}.}}<br />
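The backup and restore flow can be exercised end-to-end on a scratch directory. The sketch below uses throwaway paths under {{ic|mktemp -d}} instead of the real {{ic|/var/lib/pacman/local}}, and gzip ({{ic|-z}}) for portability where the commands above use bzip2 ({{ic|-j}}); the flow is otherwise the same:

```shell
set -e
# Build a tiny mock of the local database tree.
src=$(mktemp -d)
mkdir -p "$src/var/lib/pacman/local/foo-1.0-1"
echo 'mock desc' > "$src/var/lib/pacman/local/foo-1.0-1/desc"

# Back up (equivalent of: tar -cjf pacman_database.tar.bz2 /var/lib/pacman/local).
tar -C "$src" -czf "$src/pacman_database.tar.gz" var/lib/pacman/local

# Restore into a fresh root (equivalent of extracting the archive in /).
dst=$(mktemp -d)
tar -C "$dst" -xzf "$src/pacman_database.tar.gz"
```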
<br />
=== Check changelogs easily ===<br />
<br />
When maintainers update packages, commits are often commented in a useful fashion. Users can quickly check these from the command line by installing {{AUR|pacolog}}. This utility lists recent commit messages for packages from the official repositories or the AUR, by using {{ic|pacolog <package>}}.<br />
<br />
== Installation and recovery ==<br />
<br />
Alternative ways of getting and restoring packages.<br />
<br />
=== Installing packages from a CD/DVD or USB stick ===<br />
<br />
{{Merge|#Custom local repository|Use as an example and avoid duplication}}<br />
<br />
To download packages, or groups of packages:<br />
<br />
# cd ~/Packages<br />
# pacman -Syw base base-devel grub-bios xorg gimp --cachedir .<br />
# repo-add ./custom.db.tar.gz ./*<br />
<br />
Then you can burn the "Packages" folder to a CD/DVD or transfer it to a USB stick, external HDD, etc.<br />
<br />
To install:<br />
<br />
'''1.''' Mount the media:<br />
<br />
# mkdir /mnt/repo<br />
# mount /dev/sr0 /mnt/repo #For a CD/DVD.<br />
# mount /dev/sdxY /mnt/repo #For a USB stick.<br />
<br />
'''2.''' Edit {{ic|pacman.conf}} and add this repository ''before'' the other ones (e.g. core, extra). This is important: do not just uncomment the example at the bottom, as listing the repository first ensures that the files from the CD/DVD/USB take precedence over those in the standard repositories:<br />
<br />
{{hc|/etc/pacman.conf|2=<br />
[custom]<br />
SigLevel = PackageRequired<br />
Server = file:///mnt/repo/Packages}}<br />
<br />
'''3.''' Finally, synchronize the ''pacman'' database to be able to use the new repository:<br />
<br />
# pacman -Syu<br />
<br />
=== Custom local repository ===<br />
<br />
Use the ''repo-add'' script included with ''pacman'' to generate a database for a personal repository. Use {{ic|repo-add --help}} for more details on its usage. <br />
A package database is a tar file, optionally compressed. Valid extensions are ''.db'' or ''.files'' followed by an archive extension of ''.tar'', ''.tar.gz'', ''.tar.bz2'', ''.tar.xz'', ''.tar.zst'', or ''.tar.Z''. The file does not need to exist, but all parent directories must exist.<br />
<br />
To add a new package to the database, or to replace the old version of an existing package in the database, run:<br />
<br />
$ repo-add ''/path/to/repo.db.tar.gz /path/to/package-1.0-1-x86_64.pkg.tar.xz''<br />
<br />
The database and the packages do not need to be in the same directory when using ''repo-add'', but keep in mind that when using ''pacman'' with that database, they should be together. Storing all the built packages to be included in the repository in one directory also allows using shell glob expansion to add or update multiple packages at once:<br />
<br />
$ repo-add ''/path/to/repo.db.tar.gz /path/to/*.pkg.tar.xz''<br />
<br />
{{Warning|''repo-add'' adds the entries into the database in the same order as passed on the command line. If multiple versions of the same package are involved, care must be taken to ensure that the correct version is added last. In particular, note that lexical order used by the shell depends on the locale and differs from the {{man|8|vercmp}} ordering used by ''pacman''.}}<br />
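The ordering pitfall can be illustrated with coreutils {{ic|sort}}: plain lexical order puts 1.10 before 1.9, while version-aware sorting ({{ic|sort -V}}, which roughly approximates ''vercmp'') does not. The package filenames below are made up:

```shell
# Lexical order: "1.10" sorts before "1.9" because '1' < '9' character-wise.
printf '%s\n' pkg-1.9-1-x86_64.pkg.tar.zst pkg-1.10-1-x86_64.pkg.tar.zst | sort

# Version-aware order: 1.9 correctly precedes 1.10.
printf '%s\n' pkg-1.9-1-x86_64.pkg.tar.zst pkg-1.10-1-x86_64.pkg.tar.zst | sort -V
```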
<br />
If you are looking to support multiple architectures then precautions should be taken to prevent errors from occurring. Each architecture should have its own directory tree:<br />
<br />
{{hc|$ tree ~/customrepo/ {{!}} sed "s/$(uname -m)/<arch>/g"|<br />
/home/archie/customrepo/<br />
└── <arch><br />
├── customrepo.db -> customrepo.db.tar.xz<br />
├── customrepo.db.tar.xz<br />
├── customrepo.files -> customrepo.files.tar.xz<br />
├── customrepo.files.tar.xz<br />
└── personal-website-git-b99cce0-1-<arch>.pkg.tar.xz<br />
<br />
1 directory, 5 files<br />
}}<br />
<br />
The ''repo-add'' executable checks whether its arguments are appropriate. If they are not, you will run into error messages similar to the following:<br />
<br />
==> ERROR: '/home/archie/customrepo/<arch>/foo-<arch>.pkg.tar.xz' does not have a valid database archive extension.<br />
<br />
''repo-remove'' is used in the same manner as ''repo-add'' to remove packages from the package database, except that only package names are specified on the command line.<br />
<br />
$ repo-remove ''/path/to/repo.db.tar.gz pkgname''<br />
<br />
Once the local repository database has been created, add the repository to {{ic|pacman.conf}} on each system that is to use it; a commented example of a custom repository is included in the default {{ic|pacman.conf}}. The repository's name is the database filename with its extensions omitted; in the example above, it would simply be ''repo''. Reference the repository's location using a {{ic|file://}} URL, or via FTP using a URL like ftp://localhost/path/to/directory.<br />
<br />
If willing, add the custom repository to the [[Unofficial user repositories|list of unofficial user repositories]], so that the community can benefit from it.<br />
<br />
=== Network shared pacman cache ===<br />
{{Merge|Package_Proxy_Cache|Same topic}}<br />
If you run several Arch boxes on your LAN, you can share packages between them to greatly decrease download times. Keep in mind that you should not share between different architectures (e.g. i686 and x86_64), or you will run into problems.<br />
<br />
==== Read-only cache ====<br />
<br />
If you are looking for a quick solution, you can simply run a standalone webserver, e.g. {{Pkg|darkhttpd}}, which other computers can use as a first mirror:<br />
<br />
# ln -s /var/lib/pacman/sync/*.db /var/cache/pacman/pkg<br />
$ sudo -u http darkhttpd /var/cache/pacman/pkg --no-server-id<br />
<br />
You could also run darkhttpd as a systemd service for convenience. Just add this server at the top of your {{ic|/etc/pacman.d/mirrorlist}} in client machines with {{ic|1=Server = http&#58;//mymirror:8080}}. Make sure to keep your mirror updated.<br />
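A minimal unit sketch for running darkhttpd this way (the unit name is hypothetical; darkhttpd's {{ic|--port}} flag sets the listening port, and the paths match the commands above):

```ini
# /etc/systemd/system/pacman-cache.service (hypothetical unit name)
[Unit]
Description=Serve the pacman package cache over HTTP
After=network.target

[Service]
User=http
ExecStart=/usr/bin/darkhttpd /var/cache/pacman/pkg --no-server-id --port 8080

[Install]
WantedBy=multi-user.target
```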
<br />
If you are already running a web server for some other purpose, you might wish to reuse that as your local repo server instead of darkhttpd. For example, if you already serve a site with [[nginx]], you can add an nginx server block listening on port 8080:<br />
<br />
{{hc|/etc/nginx/nginx.conf|<br />
server {<br />
listen 8080;<br />
root /var/cache/pacman/pkg;<br />
server_name myarchrepo.localdomain;<br />
try_files $uri $uri/;<br />
}<br />
}}<br />
<br />
Remember to restart nginx after making this change.<br />
<br />
Whichever web server you use, remember to open port 8080 to local traffic (and you probably want to deny anything not local), so add a rule like the following to [[iptables]]:<br />
<br />
{{hc|/etc/iptables/iptables.rules|<br />
-A TCP -s 192.168.0.0/16 -p tcp -m tcp --dport 8080 -j ACCEPT<br />
}}<br />
<br />
Remember to restart iptables after making this change.<br />
<br />
==== Overlay mount of read-only cache ====<br />
<br />
It is possible to use one machine on a local network as a read-only package cache by [[Overlay_filesystem|overlay mounting]] its {{ic|/var/cache/pacman/pkg}} directory. Such a configuration is advantageous if the server has a reasonably comprehensive selection of up-to-date packages installed which are also used by the other boxes. This is useful for maintaining a number of machines at the end of a low-bandwidth upstream connection.<br />
<br />
As an example, to use this method:<br />
<br />
# mkdir /tmp/remote_pkg /mnt/workdir_pkg /tmp/pacman_pkg<br />
# sshfs <remote_username>@<remote_pkgcache_addr>:/var/cache/pacman/pkg /tmp/remote_pkg -C<br />
# mount -t overlay overlay -o lowerdir=/tmp/remote_pkg,upperdir=/var/cache/pacman/pkg,workdir=/mnt/workdir_pkg /tmp/pacman_pkg<br />
<br />
[[Overlay_filesystem#Usage|Note concerning overlay]]: The working directory must be an empty directory on the same mounted device as the upper directory.<br />
<br />
After this, run pacman using the option {{ic|--cachedir /tmp/pacman_pkg}}, e.g.:<br />
<br />
# pacman -Syu --cachedir /tmp/pacman_pkg<br />
<br />
==== Distributed read-only cache ====<br />
<br />
There are Arch-specific tools for automatically discovering other computers on your network offering a package cache. Try {{Pkg|pacredir}}, [[pacserve]], {{AUR|pkgdistcache}}, or {{AUR|paclan}}. pkgdistcache uses Avahi instead of plain UDP, which may work better in certain home networks that route, rather than bridge, between Wi-Fi and Ethernet.<br />
<br />
Historically, there was [https://bbs.archlinux.org/viewtopic.php?id=64391 PkgD] and [https://github.com/toofishes/multipkg multipkg], but they are no longer maintained.<br />
<br />
==== Read-write cache ====<br />
<br />
In order to share packages between multiple computers, simply share {{ic|/var/cache/pacman/}} using any network-based mount protocol. This section shows how to use [[shfs]] or [[SSHFS]] to share a package cache plus the related library-directories between multiple computers on the same local network. Keep in mind that a network shared cache can be slow depending on the file-system choice, among other factors.<br />
<br />
First, install any network-supporting filesystem packages: {{pkg|shfs-utils}}, {{pkg|sshfs}}, {{pkg|curlftpfs}}, {{pkg|samba}} or {{pkg|nfs-utils}}.<br />
<br />
{{Tip|<br />
* To use ''sshfs'' or ''shfs'', consider reading [[Using SSH Keys]].<br />
* By default, ''smbfs'' does not serve filenames that contain colons, which results in the client downloading the offending package afresh. To prevent this, use the {{ic|mapchars}} mount option on the client.<br />
}}<br />
<br />
Then, to share the actual packages, mount {{ic|/var/cache/pacman/pkg}} from the server to {{ic|/var/cache/pacman/pkg}} on every client machine.<br />
<br />
{{Warning|Do not make {{ic|/var/cache/pacman/pkg}} or any of its ancestors (e.g., {{ic|/var}}) a symlink. ''Pacman'' expects these to be directories. When ''pacman'' re-installs or upgrades itself, it will remove the symlinks and create empty directories instead. However during the transaction ''pacman'' relies on some files residing there, hence breaking the update process. Refer to {{bug|50298}} for further details.}}<br />
<br />
==== Two-way with rsync ====<br />
<br />
Another approach in a local environment is [[rsync]]. Choose a server for caching and enable the [[Rsync#rsync daemon]]. On clients synchronize two-way with this share via the rsync protocol. Filenames that contain colons are no problem for the rsync protocol.<br />
<br />
A draft example for a client; using {{ic|uname -m}} within the share name ensures an architecture-specific sync:<br />
# rsync rsync://server/share_$(uname -m)/ /var/cache/pacman/pkg/ ...<br />
# pacman ...<br />
# paccache ...<br />
# rsync /var/cache/pacman/pkg/ rsync://server/share_$(uname -m)/ ...<br />
<br />
==== Dynamic reverse proxy cache using nginx ====<br />
<br />
[[nginx]] can be used to proxy package requests to official upstream mirrors and cache the results to the local disk. All subsequent requests for that package will be served directly from the local cache, minimizing the amount of internet traffic needed to update a large number of computers. <br />
<br />
In this example, the cache server will run at {{ic|<nowiki>http://cache.domain.example:8080/</nowiki>}} and store the packages in {{ic|/srv/http/pacman-cache/}}. <br />
<br />
Install [[nginx]] on the computer that is going to host the cache. Create the directory for the cache and adjust the permissions so nginx can write files to it:<br />
<br />
# mkdir /srv/http/pacman-cache<br />
# chown http:http /srv/http/pacman-cache<br />
<br />
Use the [https://github.com/nastasie-octavian/nginx_pacman_cache_config/blob/c54eca4776ff162ab492117b80be4df95880d0e2/nginx.conf nginx pacman cache config] as a starting point for {{ic|/etc/nginx/nginx.conf}}. Check that the {{ic|resolver}} directive works for your needs. In the upstream server blocks, configure the {{ic|proxy_pass}} directives with addresses of official mirrors, see examples in the config file about the expected format. Once you are satisfied with the configuration file [[Nginx#Running|start and enable nginx]].<br />
<br />
In order to use the cache each Arch Linux computer (including the one hosting the cache) must have the following line at the top of the {{ic|mirrorlist}} file:<br />
<br />
{{hc|/etc/pacman.d/mirrorlist|<nowiki><br />
Server = http://cache.domain.example:8080/$repo/os/$arch<br />
...<br />
</nowiki>}}<br />
<br />
{{Note| You will need to create a method to clear old packages, as the cache directory will continue to grow over time. {{ic|paccache}} (which is provided by {{pkg|pacman-contrib}}) can be used to automate this using retention criteria of your choosing. For example, {{ic|find /srv/http/pacman-cache/ -type d -exec paccache -v -r -k 2 -c {} \;}} will keep the last 2 versions of packages in your cache directory.}}<br />
<br />
==== Pacoloco proxy cache server ====<br />
<br />
[https://github.com/anatol/pacoloco Pacoloco] is an easy-to-use proxy cache server for pacman repositories. It can be installed as {{pkg|pacoloco}}. Open the configuration file and add pacman mirrors:<br />
<br />
{{hc|/etc/pacoloco.yaml|<nowiki><br />
port: 9129<br />
repos:<br />
mycopy:<br />
urls:<br />
- http://mirror.lty.me/archlinux<br />
- http://mirrors.kernel.org/archlinux<br />
</nowiki>}}<br />
<br />
[[Restart]] {{ic|pacoloco.service}} and the proxy repository will be available at {{ic|http://<myserver>:9129/repo/mycopy}}.<br />
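To point clients at this instance, a mirrorlist entry along the following lines can be used (the hostname is a placeholder, and the {{ic|$repo/os/$arch}} suffix follows the usual mirror URL layout):

```ini
# /etc/pacman.d/mirrorlist (hypothetical hostname)
Server = http://myserver:9129/repo/mycopy/$repo/os/$arch
```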
<br />
==== Flexo proxy cache server ====<br />
<br />
[https://github.com/nroi/flexo Flexo] is yet another proxy cache server for pacman repositories. Flexo is available on the AUR: {{AUR|flexo-git}}. Once installed, [[start]] the {{ic|flexo.service}} service with systemd.<br />
<br />
Flexo runs on port 7878 by default. Add {{ic|1=Server = http://''myserver'':7878/$repo/os/$arch}} at the top of your {{ic|/etc/pacman.d/mirrorlist}} so that ''pacman'' downloads packages via Flexo.<br />
<br />
==== Synchronize pacman package cache using synchronization programs ====<br />
<br />
Use [[Syncthing]] or [[Resilio Sync]] to synchronize the ''pacman'' cache folders (i.e. {{ic|/var/cache/pacman/pkg}}).<br />
<br />
==== Preventing unwanted cache purges ====<br />
<br />
By default, {{Ic|pacman -Sc}} removes package tarballs from the cache that correspond to packages not installed on the machine the command was issued on. Because ''pacman'' cannot predict what packages are installed on all machines that share the cache, it will end up deleting files that should not be deleted.<br />
<br />
To clean up the cache so that only ''outdated'' tarballs are deleted, add this entry in the {{ic|[options]}} section of {{ic|/etc/pacman.conf}}:<br />
<br />
CleanMethod = KeepCurrent<br />
<br />
=== Recreate a package from the file system ===<br />
<br />
To recreate a package from the file system, use {{AUR|fakepkg}}. Files from the system are taken as they are, hence any modifications will be present in the assembled package. Distributing the recreated package is therefore discouraged; see [[ABS]] and [[Arch Linux Archive]] for alternatives.<br />
<br />
=== List of installed packages ===<br />
<br />
Keeping a list of all explicitly installed packages can be useful, for example to back up a system or to speed up installation on a new one:<br />
<br />
$ pacman -Qqe > pkglist.txt<br />
<br />
{{Note|<br />
* With option {{ic|-t}}, the packages already required by other explicitly installed packages are not mentioned. If reinstalling from this list they will be installed but as dependencies only.<br />
* With option {{ic|-n}}, foreign packages (e.g. from [[AUR]]) would be omitted from the list.<br />
* Use {{ic|comm -13 <(pacman -Qqdt {{!}} sort) <(pacman -Qqdtt {{!}} sort) > optdeplist.txt}} to also create a list of the installed optional dependencies which can be reinstalled with {{ic|--asdeps}}.<br />
* Use {{ic|pacman -Qqem > foreignpkglist.txt}} to create the list of AUR and other foreign packages that have been explicitly installed.}}<br />
<br />
To keep an up-to-date list of explicitly installed packages (e.g. in combination with a versioned {{ic|/etc/}}), you can set up a [[Pacman#Hooks|hook]]. Example:<br />
<br />
[Trigger]<br />
Operation = Install<br />
Operation = Remove<br />
Type = Package<br />
Target = *<br />
<br />
[Action]<br />
When = PostTransaction<br />
Exec = /bin/sh -c '/usr/bin/pacman -Qqe > /etc/pkglist.txt'<br />
<br />
=== Install packages from a list ===<br />
<br />
To install packages from a previously saved list of packages, while not reinstalling previously installed packages that are already up-to-date, run:<br />
<br />
# pacman -S --needed - < pkglist.txt<br />
<br />
However, foreign packages, such as those from the AUR or installed locally, are likely present in the list. To filter them out, the previous command can be extended as follows:<br />
<br />
# pacman -S --needed $(comm -12 <(pacman -Slq | sort) <(sort pkglist.txt))<br />
<br />
Finally, to make sure the installed packages of your system match the list, remove all packages that are not mentioned in it:<br />
<br />
# pacman -Rsu $(comm -23 <(pacman -Qq | sort) <(sort pkglist.txt))<br />
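The two ''comm'' invocations can be tried on mock sorted lists (package names are made up): {{ic|-12}} keeps lines common to both files, while {{ic|-23}} keeps lines unique to the first.

```shell
set -e
a=$(mktemp); b=$(mktemp)
# Mock `pacman -Qq` (installed packages) and a saved pkglist.txt, both sorted.
printf '%s\n' bash coreutils yay | sort > "$a"
printf '%s\n' bash coreutils vim | sort > "$b"

# Installed AND present in the list (what `comm -12` selects above).
comm -12 "$a" "$b"

# Installed but NOT in the list: removal candidates (what `comm -23` selects).
comm -23 "$a" "$b"
```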
<br />
{{Tip|These tasks can be automated. See {{AUR|bacpac}}, {{AUR|packup}}, {{AUR|pacmanity}}, and {{AUR|pug}} for examples.}}<br />
<br />
=== Listing all changed files from packages ===<br />
<br />
If you are suspecting file corruption (e.g. by software/hardware failure), but are unsure if files were corrupted, you might want to compare with the hash sums in the packages. This can be done with {{Pkg|pacutils}}:<br />
<br />
# paccheck --md5sum --quiet<br />
<br />
For recovery of the database see [[#Restore pacman's local database]]. The {{ic|mtree}} files can also be [[#Viewing a single file inside a .pkg file|extracted as {{ic|.MTREE}} from the respective package files]].<br />
<br />
{{Note|This should '''not''' be used as is when suspecting malicious changes! In this case security precautions such as using a live medium and an independent source for the hash sums are advised.}}<br />
<br />
=== Reinstalling all packages ===<br />
To reinstall all native packages, use:<br />
<br />
# pacman -Qqn | pacman -S -<br />
<br />
Foreign (AUR) packages must be reinstalled separately; you can list them with {{ic|pacman -Qqm}}.<br />
<br />
''Pacman'' preserves the [[installation reason]] by default.<br />
<br />
{{Warning|To force all packages to be overwritten, use {{ic|1=--overwrite=*}}, though this should be an absolute last resort. See [[System maintenance#Avoid certain pacman commands]].}}<br />
<br />
=== Restore pacman's local database ===<br />
<br />
See [[Pacman/Restore local database]].<br />
<br />
=== Recovering a USB key from existing install ===<br />
<br />
If you have Arch installed on a USB key and manage to mess it up (e.g. by removing it while it is still being written to), it is possible to re-install all the packages and hopefully get it back up and working again (assuming the USB key is mounted at {{ic|/newarch}}):<br />
<br />
# pacman -S $(pacman -Qq --dbpath /newarch/var/lib/pacman) --root /newarch --dbpath /newarch/var/lib/pacman<br />
<br />
=== Viewing a single file inside a .pkg file ===<br />
<br />
For example, if you want to see the contents of {{ic|/etc/systemd/logind.conf}} supplied within the {{Pkg|systemd}} package:<br />
<br />
$ bsdtar -xOf /var/cache/pacman/pkg/systemd-204-3-x86_64.pkg.tar.xz etc/systemd/logind.conf<br />
<br />
Or you can use {{pkg|vim}} to browse the archive:<br />
<br />
$ vim /var/cache/pacman/pkg/systemd-204-3-x86_64.pkg.tar.xz<br />
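The same single-file extraction can be tried with a mock archive; GNU ''tar'' accepts the same {{ic|-xOf}} flags as ''bsdtar'' (the archive and its contents below are scratch placeholders):

```shell
set -e
tmp=$(mktemp -d)
mkdir -p "$tmp/etc/systemd"
printf '[Login]\n' > "$tmp/etc/systemd/logind.conf"

# Build a mock package archive containing the file.
tar -C "$tmp" -cf "$tmp/mock.pkg.tar" etc/systemd/logind.conf

# -O writes the extracted member to stdout instead of the filesystem.
tar -xOf "$tmp/mock.pkg.tar" etc/systemd/logind.conf
```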
<br />
=== Find applications that use libraries from older packages ===<br />
<br />
Even after a package is upgraded, existing long-running programs (such as daemons and servers) keep using code from the old libraries. It is a bad idea to leave these programs running if the old library contains a security bug.<br />
<br />
Here is a way to find all programs that use code from old packages:<br />
<br />
# lsof +c 0 | grep -w DEL | awk '1 { print $1 ": " $NF }' | sort -u<br />
This prints the name of each running program along with the old library file that was removed or replaced with newer content.<br />
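The filtering stages can be seen on mock ''lsof'' output (process names and paths are invented; a {{ic|DEL}} entry marks a mapped file that has been deleted on disk):

```shell
# Mock of `lsof +c 0` output (column layout simplified).
mock_lsof() {
cat <<'EOF'
nginx 1234 root DEL REG 8,1 12345 /usr/lib/libssl.so.1.1
sshd   999 root DEL REG 8,1 67890 /usr/lib/libcrypto.so.1.1
nginx 1235 root mem REG 8,1 11111 /usr/lib/libc.so.6
EOF
}

# Keep DEL rows only, print "program: old-library", deduplicate.
mock_lsof | grep -w DEL | awk '{print $1 ": " $NF}' | sort -u
```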
<br />
=== Installing only content in required languages ===<br />
<br />
Many packages attempt to install documentation and translations in several languages. Some programs are designed to remove such unnecessary files, such as {{AUR|localepurge}}, which runs after a package is installed to delete unneeded locale files. A more direct approach is provided through the {{ic|NoExtract}} directive in {{ic|pacman.conf}}, which prevents these files from ever being installed.<br />
<br />
{{Warning|1=Some users noted that removing locales has resulted in [[Special:Permalink/460285#Dangerous NoExtract example|unintended consequences]], even under [https://bbs.archlinux.org/viewtopic.php?id=250846 Xorg].}}<br />
<br />
The example below installs English (US) files, or none at all:<br />
<br />
{{hc|/etc/pacman.conf|2=<br />
NoExtract = usr/share/help/* !usr/share/help/C/*<br />
NoExtract = usr/share/gtk-doc/html/*<br />
NoExtract = usr/share/locale/* usr/share/X11/locale/*/* usr/share/i18n/locales/* opt/google/chrome/locales/* !usr/share/X11/locale/C/*<br />
NoExtract = !*locale*/en*/* !usr/share/*locale*/locale.*<br />
NoExtract = !usr/share/*locales/en_?? !usr/share/*locales/i18n* !usr/share/*locales/iso*<br />
NoExtract = usr/share/i18n/charmaps/* !usr/share/i18n/charmaps/UTF-8.gz<br />
NoExtract = !usr/share/*locales/trans*<br />
NoExtract = usr/share/man/* !usr/share/man/man*<br />
NoExtract = usr/share/vim/vim*/lang/*<br />
NoExtract = usr/lib/libreoffice/help/en-US/*<br />
NoExtract = usr/share/kbd/locale/*<br />
NoExtract = usr/share/*/translations/*.qm usr/share/qt/translations/*.pak !*/en-US.pak # Qt apps<br />
NoExtract = usr/share/*/locales/*.pak opt/*/locales/*.pak usr/lib/*/locales/*.pak !*/en-US.pak # Electron apps<br />
NoExtract = opt/onlyoffice/desktopeditors/dictionaries/* !opt/onlyoffice/desktopeditors/dictionaries/en_US/*<br />
NoExtract = usr/share/ibus/dicts/emoji-*.dict !usr/share/ibus/dicts/emoji-en.dict<br />
}}<br />
<br />
== Performance ==<br />
<br />
=== Download speeds ===<br />
<br />
{{Note|If your download speeds have been reduced to a crawl, ensure you are using one of the many [[mirrors]] and not ftp.archlinux.org, which is [https://archlinux.org/news/302/ throttled since March 2007].}}<br />
<br />
When downloading packages, ''pacman'' uses the mirrors in the order they appear in {{ic|/etc/pacman.d/mirrorlist}}. However, the mirror at the top of the list by default may not be the fastest for you. To select a faster mirror, see [[Mirrors]].<br />
<br />
''Pacman''<nowiki>'</nowiki>s speed in downloading packages can also be improved by using a different application to download packages, instead of ''pacman''<nowiki>'</nowiki>s built-in file downloader.<br />
<br />
In all cases, make sure you have the latest ''pacman'' before doing any modifications.<br />
<br />
# pacman -Syu<br />
<br />
==== Powerpill ====<br />
<br />
[[Powerpill]] is a ''pacman'' wrapper that uses parallel and segmented downloading to try to speed up downloads for ''pacman''.<br />
<br />
==== wget ====<br />
<br />
This is also very handy if you need more powerful proxy settings than ''pacman''<nowiki>'</nowiki>s built-in capabilities. <br />
<br />
To use {{ic|wget}}, first [[install]] the {{Pkg|wget}} package then modify {{ic|/etc/pacman.conf}} by uncommenting the following line in the {{ic|[options]}} section:<br />
<br />
XferCommand = /usr/bin/wget --passive-ftp --show-progress -c -q -N %u<br />
<br />
Instead of uncommenting the {{ic|wget}} parameters in {{ic|/etc/pacman.conf}}, you can also modify the {{ic|wget}} configuration file directly (the system-wide file is {{ic|/etc/wgetrc}}, per user files are {{ic|$HOME/.wgetrc}}).<br />
<br />
==== aria2 ====<br />
<br />
[[aria2]] is a lightweight download utility with support for resumable and segmented HTTP/HTTPS and FTP downloads. aria2 allows for multiple and simultaneous HTTP/HTTPS and FTP connections to an Arch mirror, which should result in an increase in download speeds for both file and package retrieval.<br />
<br />
{{Note|Using aria2c in ''pacman''<nowiki>'</nowiki>s XferCommand will '''not''' result in parallel downloads of multiple packages. ''Pacman'' invokes the XferCommand with a single package at a time and waits for it to complete before invoking the next. To download multiple packages in parallel, see [[Powerpill]].}}<br />
<br />
Install {{Pkg|aria2}}, then edit {{ic|/etc/pacman.conf}} by adding the following line to the {{ic|[options]}} section:<br />
<br />
XferCommand = /usr/bin/aria2c --allow-overwrite=true --continue=true --file-allocation=none --log-level=error --max-tries=2 --max-connection-per-server=2 --max-file-not-found=5 --min-split-size=5M --no-conf --remote-time=true --summary-interval=60 --timeout=5 --dir=/ --out %o %u<br />
<br />
{{Tip|1=[https://bbs.archlinux.org/viewtopic.php?pid=1491879#p1491879 This alternative configuration for using ''pacman'' with aria2] tries to simplify configuration and adds more configuration options.}}<br />
<br />
See {{man|1|aria2c|OPTIONS}} for used aria2c options.<br />
<br />
* {{ic|-d, --dir}}: The directory to store the downloaded file(s) as specified by ''pacman''.<br />
* {{ic|-o, --out}}: The output file name(s) of the downloaded file(s). <br />
* {{ic|%o}}: Variable which represents the local filename(s) as specified by ''pacman''.<br />
* {{ic|%u}}: Variable which represents the download URL as specified by ''pacman''.<br />
<br />
==== Other applications ====<br />
<br />
There are other downloading applications that you can use with ''pacman''. Here they are, and their associated XferCommand settings:<br />
<br />
* {{ic|snarf}}: {{ic|1=XferCommand = /usr/bin/snarf -N %u}}<br />
* {{ic|lftp}}: {{ic|1=XferCommand = /usr/bin/lftp -c pget %u}}<br />
* {{ic|axel}}: {{ic|1=XferCommand = /usr/bin/axel -n 2 -v -a -o %o %u}}<br />
* {{ic|hget}}: {{ic|1=XferCommand = /usr/bin/hget %u -n 2 -skip-tls false}} (please read the [https://github.com/huydx/hget documentation on the Github project page] for more info)<br />
* {{ic|saldl}}: {{ic|1=XferCommand = /usr/bin/saldl -c6 -l4 -s2m -o %o %u}} (please read the [https://saldl.github.io documentation on the project page] for more info)<br />
<br />
== Utilities ==<br />
<br />
* {{App|Lostfiles|Script that identifies files not owned by any package.|https://github.com/graysky2/lostfiles|{{Pkg|lostfiles}}}}<br />
* {{App|Pacmatic|''Pacman'' wrapper to check Arch News before upgrading, avoid partial upgrades, and warn about configuration file changes.|http://kmkeen.com/pacmatic|{{Pkg|pacmatic}}}}<br />
* {{App|pacutils|Helper library for libalpm based programs.|https://github.com/andrewgregory/pacutils|{{Pkg|pacutils}}}}<br />
* {{App|[[pkgfile]]|Tool that finds what package owns a file.|https://github.com/falconindy/pkgfile|{{Pkg|pkgfile}}}}<br />
* {{App|pkgtools|Collection of scripts for Arch Linux packages.|https://github.com/Daenyth/pkgtools|{{AUR|pkgtools}}}}<br />
* {{App|pkgtop|Interactive package manager and resource monitor designed for GNU/Linux.|https://github.com/orhun/pkgtop|{{AUR|pkgtop-git}}}}<br />
* {{App|[[Powerpill]]|Uses parallel and segmented downloading through [[aria2]] and [[Reflector]] to try to speed up downloads for ''pacman''.|https://xyne.archlinux.ca/projects/powerpill/|{{AUR|powerpill}}}}<br />
* {{App|repoctl|Tool to help manage local repositories.|https://github.com/cassava/repoctl|{{AUR|repoctl}}}}<br />
* {{App|repose|An Arch Linux repository building tool.|https://github.com/vodik/repose|{{Pkg|repose}}}}<br />
* {{App|[[Snapper#Wrapping_pacman_transactions_in_snapshots|snap-pac]]|Make ''pacman'' automatically use snapper to create pre/post snapshots like openSUSE's YaST.|https://github.com/wesbarnett/snap-pac|{{pkg|snap-pac}}}}<br />
* {{App|vrms-arch|A virtual Richard M. Stallman to tell you which non-free packages are installed.|https://github.com/orospakr/vrms-arch|{{AUR|vrms-arch-git}}}}<br />
<br />
=== Graphical ===<br />
<br />
{{Warning|PackageKit opens up system permissions by default, and is otherwise not recommended for general usage. See {{Bug|50459}} and {{Bug|57943}}.}}<br />
<br />
* {{App|Apper|Qt 5 application and package manager using PackageKit written in C++. Supports [https://www.freedesktop.org/wiki/Distributions/AppStream/ AppStream metadata].|https://userbase.kde.org/Apper|{{Pkg|apper}}}}<br />
* {{App|Discover|Qt 5 application manager using PackageKit written in C++/QML. Supports [https://www.freedesktop.org/wiki/Distributions/AppStream/ AppStream metadata], [[Flatpak]] and [[fwupd|firmware updates]]. |https://userbase.kde.org/Discover|{{Pkg|discover}}}}<br />
* {{App|GNOME PackageKit|GTK 3 package manager using PackageKit written in C.|https://freedesktop.org/software/PackageKit/|{{Pkg|gnome-packagekit}}}}<br />
* {{App|GNOME Software|GTK 3 application manager using PackageKit written in C. Supports [https://www.freedesktop.org/wiki/Distributions/AppStream/ AppStream metadata], [[Flatpak]] and [[fwupd|firmware updates]]. |https://wiki.gnome.org/Apps/Software|{{pkg|gnome-software}}}}<br />
* {{App|pcurses|Curses TUI pacman wrapper written in C++.|https://github.com/schuay/pcurses|{{Pkg|pcurses}}}}<br />
* {{App|tkPacman|Tk pacman wrapper written in Tcl.|https://sourceforge.net/projects/tkpacman|{{AUR|tkpacman}}}}</div>Cmsiglerhttps://wiki.archlinux.org/index.php?title=User:Cmsigler/RISC-V&diff=652092User:Cmsigler/RISC-V2021-02-12T18:50:14Z<p>Cmsigler: Update links to archlinux-cross-bootstrap github repo</p>
<hr />
<div>'''Working Document: Planning to Port Arch to RISC-V -- please edit :)'''<br />
<br />
Please edit this to your heart's content, without destroying prior work of value, or of historical or reference significance. I would like to construct a plan for porting Arch to the newly emerging RISC-V hardware.<br />
<br />
I am very much a novice to this. I've had success running ARM SBCs via [https://archlinuxarm.org/ ArchLinuxArm], but I've never ported anything :\<br />
<br />
== A simple RISC-V logo ==<br />
<br />
I quickly created a personal, non-professional, simple text logo for RISC-V. I don't have sysop upload privilege, so here's a link to view:<br />
<br />
https://drive.google.com/file/d/1WKZLl0G_BriPsmRQaMz6yGtIcjJ7pqmd/view?usp=sharing<br />
<br />
I license this logo under CC BY-SA 2.0. If it is of any interest, feel free to use it with attribution.<br />
<br />
== Target hardware ==<br />
<br />
# '''<u>α</u>''' -- [https://www.cnx-software.com/2020/11/09/xuantie-c906-based-allwinner-risc-v-processor-to-power-12-linux-sbcs/ Unnamed Allwinner single-core XuanTie C906 64-bit RISC-V (RV64GCV) processor] @ up to 1 GHz; 22nm manufacturing process. See also [http://linuxgizmos.com/risc-v-based-allwinner-chip-to-debut-on-13-linux-hacker-board/ this LinuxGizmos article].<br />
#* This first one is nice because it's cheap (US$12.50 IIUC), albeit underpowered compared to ARM SoC boards available at this time (November, 2020).<br />
# '''<u>β</u>''' -- RISC-V International Open Source (RIOS) Laboratory is collaborating with Imagination Technologies to bring the [https://www.cnx-software.com/2020/09/04/picorio-linux-risc-v-sbc-is-an-open-source-alternative-to-raspberry-pi-board/ PicoRio RISC-V SBC] to market at a price point similar to the Raspberry Pi. See also [https://riscv.org/blog/2020/11/picorio-the-raspberry-pi-like-small-board-computer-for-risc-v/ this blog article from RISC-V International].<br />
#* This second one is also projected to cost close to the RPi.<br />
# '''<u>γ</u>''' -- BeagleBoard.org and Seeed unveiled an open-spec, $119-and-up [https://beagleboard.org/beaglev “BeagleV” SBC] built around a StarFive JH7100 SoC with dual SiFive U74 RISC-V cores, a 1-TOPS NPU, a DSP, and a VPU. The SBC ditches the Cape expansion for a Pi-like 40-pin GPIO. See also [http://linuxgizmos.com/beaglev-sbc-runs-linux-on-ai-enabled-risc-v-soc/ this LinuxGizmos article], and [https://www.extremetech.com/computing/319187-new-beagle-board-offers-dual-core-risc-v-targets-ai-applications this ExtremeTech article].<br />
#* Initial prices are given as US$119 and US$149, which is much more affordable than the original system mobo.<br />
<br />
(Any others? I don't plan on spending US$1,000 on a development system/motherboard.)<br />
<br />
== Experimental development: Procedures and steps used ==<br />
<br />
* Initial setup, testing for further work<br />
*# Git repository setup -- Push commits to gitlab repos, mirror gitlab to forked github repos<br />
*#* Clone RISC-V repositories on github to local repos<br />
*#* Fork RISC-V repositories on github<br />
*#* Create gitlab repositories<br />
*#* Rename github origins for cloned repos, then add gitlab repos as origins<br />
*#* Push local repos to gitlab<br />
*#* Configure gitlab repos to mirror to github forks using a personal access token generated on github<br />
*# Test running x86_64/amd64 installations via chroot (or systemd-nspawn)<br />
*#* Gentoo:<br />
*#** Create a subdirectory for the chroot installation; untar Gentoo amd64 stage3 tarball into subdirectory {{bc|$ cd ./subdir/ && sudo tar xpvf stage3-*.tar.xz --xattrs-include<nowiki>=</nowiki>'*.*' --numeric-owner && cd ..}}<br />
*#** Configure and emerge:<br />
*#**# Configure Gentoo compilation options in {{ic|./subdir/etc/portage/make.conf}} to set GENTOO_MIRRORS and edit other env vars; copy DNS info into {{ic|./subdir/etc/resolv.conf}}<br />
*#**# Enter chroot environment {{bc|$ sudo arch-chroot ./subdir/}}<br />
*#**# {{bc|# source /etc/profile && source $HOME/.bashrc}}<br />
*#**# Optionally, update default prompt {{bc|# export PS1<nowiki>=</nowiki>"(chroot) $<nowiki>{</nowiki>PS1<nowiki>}</nowiki>"}}<br />
*#**# {{bc|# emerge --sync}}<br />
*#**# Double-check system profile (e.g., amd64 systemd, rv64_lp64d systemd) is correct {{bc|# eselect profile list}}<br />
*#**# {{bc|# emerge --ask --verbose --update --deep --newuse @world}}<br />
*#**# Optionally, configure USE variable {{bc|# emerge --info <nowiki>|</nowiki> grep ^USE}} {{bc|# nano -w /etc/portage/make.conf}}<br />
*#**# Configure timezone {{bc|# echo "Region/Location" > /etc/timezone}} {{bc|# emerge --config sys-libs/timezone-data}}<br />
*#**# Configure locales {{bc|# nano -w /etc/locale.gen}} Add desired locales, e.g., 'en_US.UTF-8 UTF-8', 'en_US ISO-8859-1'; remove unnecessary entries {{bc|# locale-gen}}<br />
*#**# Select locale {{bc|# nano -w /etc/env.d/02locale}} Add, e.g., 'LANG<nowiki>=</nowiki>"en_US.UTF-8"' and 'LC_COLLATE<nowiki>=</nowiki>"C"' {{bc|# env-update && source /etc/profile}}<br />
*#**# Optionally, set {{ic|/etc/conf.d/hostname}}; add 'hostname<nowiki>=</nowiki>"system_name"'<br />
*#**# Exit chroot environment {{bc|# exit}}<br />
*#** Chroot (or systemd-nspawn) into subdirectory and test normal administrative operations, e.g., {{bc|# emerge --sync}} {{bc|# emerge --ask --verbose --update --deep --newuse @world}}<br />
*#* Arch:<br />
*#** Create a subdirectory for the chroot installation; install base, base-devel and a basic editor {{bc|$ sudo pacstrap -c ./subdir/ base base-devel nano}}<br />
*#** Configure:<br />
*#**# Copy DNS info into {{ic|./subdir/etc/resolv.conf}}<br />
*#**# Create bind mountpoint for subdir {{bc|$ sudo mount --bind ./subdir/ ./subdir/}}<br />
*#**# Enter chroot environment {{bc|$ sudo arch-chroot ./subdir/}}<br />
*#**# {{bc|# source /etc/profile}}<br />
*#**# Optionally, update default prompt {{bc|# export PS1<nowiki>=</nowiki>"(chroot) $<nowiki>{</nowiki>PS1<nowiki>}</nowiki>"}}<br />
*#**# Configure timezone {{bc|# ln -sf /usr/share/zoneinfo/Region/Location /etc/localtime}}<br />
*#**# Configure locales {{bc|# nano -w /etc/locale.gen}} Uncomment desired locales, e.g., 'en_US.UTF-8 UTF-8', 'en_US ISO-8859-1' {{bc|# locale-gen}} {{bc|# nano -w /etc/locale.conf}} Add, e.g., 'LANG<nowiki>=</nowiki>"en_US.UTF-8"' and 'LC_COLLATE<nowiki>=</nowiki>"C"'<br />
*#**# Optionally, update system {{bc|# pacman -Syu}}<br />
*#**# Optionally, set the hostname in {{ic|/etc/hostname}}; unlike Gentoo's {{ic|/etc/conf.d/hostname}}, this file contains only the bare name, e.g., 'system_name'<br />
*#**# Exit chroot environment {{bc|# exit}}<br />
*#** Chroot (or systemd-nspawn) into subdirectory and test normal administrative operations, e.g., {{bc|# pacman -Syu}}<br />
*# Test running rv64_lp64d/riscv64 installations via chroot (or systemd-nspawn) using same installation and configuration procedures<br />
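The Arch chroot steps above can be gathered into one script. This is just a sketch of the sequence: pacstrap and the bind mount need root, so the commands are only printed unless it is run as root on a host that actually has pacstrap installed.<br />

```shell
#!/bin/sh
# Sketch of the Arch chroot-setup sequence above. "./subdir" is the
# placeholder target used in the text. Because these commands install
# packages and bind-mount as root, they are only printed unless the host
# really is running as root with pacstrap available.
set -eu
target=./subdir

run() {
    if [ "$(id -u)" = 0 ] && command -v pacstrap >/dev/null 2>&1; then
        "$@"
    else
        echo "would run: $*"
    fi
}

mkdir -p "$target"
run pacstrap -c "$target" base base-devel nano
run cp /etc/resolv.conf "$target/etc/resolv.conf"  # DNS info for the chroot
run mount --bind "$target" "$target"               # pacman wants a mountpoint
run arch-chroot "$target" pacman -Syu              # test a normal admin task
```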
<br />
* RISC-V binary tests and demos<br />
*# Test cross-compiling, linking RISC-V (riscv64-lp64d) C programs<br />
*## Test building, executing on host system<br />
*##* Install [https://www.archlinux.org/packages/community/x86_64/riscv64-linux-gnu-gcc/ riscv64-linux-gnu-gcc], [https://www.archlinux.org/packages/extra/x86_64/qemu-arch-extra/ qemu-arch-extra], [https://aur.archlinux.org/packages/qemu-user-static-bin/ qemu-user-static-bin], [https://aur.archlinux.org/packages/binfmt-qemu-static-all-arch/ binfmt-qemu-static-all-arch]<br />
*##* Write hello_world.c, other test programs<br />
*##* Compile: {{bc|$ riscv64-linux-gnu-gcc -c -Wall -o hello_world-riscv64.o hello_world.c}}<br />
*##* Link: {{bc|$ riscv64-linux-gnu-gcc -o hello_world-riscv64 hello_world-riscv64.o}} {{bc|$ riscv64-linux-gnu-gcc -static -o hello_world-riscv64-static hello_world-riscv64.o}}<br />
*##* Run: {{bc|$ qemu-riscv64 -L /usr/riscv64-linux-gnu/ ./hello_world-riscv64}} {{bc|$ ./hello_world-riscv64-static}}<br />
*## Test building, executing under RISC-V system chroot<br />
*##* Install, configure Gentoo (later, Arch) RISC-V system under chroot subdirectory<br />
*##* Copy source files and cross-compiled RISC-V binaries from host system into chroot<br />
*##* Enter RISC-V chroot<br />
*##* Compile, link source files inside chroot using native gcc<br />
*##* Test running both host cross-compiled and chroot native-compiled binaries inside chroot<br />
*## Any additional tests, experiments<br />
*# [https://archriscv.felixc.at/repo/ Repository of pre-built RISC-V pkgs from FelixOnMars]<br />
*#* Configure binfmt-qemu-static to run RISC-V binaries as native programs<br />
*#* Following the Arch installation guide, use pacstrap to install base pkgs into risc-v subtree<br />
*#* Run arch-chroot to run subtree as a RISC-V container<br />
*#* Test running RISC-V binaries from coreutils<br />
*#* Test running other binaries, and installing other RISC-V pkgs from this repository<br />
*#* Test using systemd-nspawn to run subtree as a RISC-V container<br />
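The cross-compile, link, and run steps above fit into a single script. It's a sketch that assumes the riscv64-linux-gnu-gcc and qemu-arch-extra packages named earlier provide the two tools; on a host without them it just says so instead of failing.<br />

```shell
#!/bin/sh
# Sketch of the cross-compile/link/run test above. Assumes the
# riscv64-linux-gnu-gcc and qemu-arch-extra packages are installed;
# otherwise the real work is skipped with a note.
set -eu
cat > hello_world.c <<'EOF'
#include <stdio.h>
int main(void) { puts("Hello, RISC-V!"); return 0; }
EOF

have() { command -v "$1" >/dev/null 2>&1; }

if have riscv64-linux-gnu-gcc && have qemu-riscv64; then
    riscv64-linux-gnu-gcc -c -Wall -o hello_world-riscv64.o hello_world.c
    riscv64-linux-gnu-gcc -o hello_world-riscv64 hello_world-riscv64.o
    riscv64-linux-gnu-gcc -static -o hello_world-riscv64-static hello_world-riscv64.o
    # Dynamically linked binaries need the target linker path via -L:
    qemu-riscv64 -L /usr/riscv64-linux-gnu/ ./hello_world-riscv64
    # The static binary needs no -L (and runs directly once binfmt_misc is set up):
    qemu-riscv64 ./hello_world-riscv64-static
else
    echo "cross toolchain not installed; skipping build and run"
fi
```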
<br />
* Cross-bootstrapping to build basic RISC-V system<br />
*# [https://github.com/felixonmars/archriscv-packages RISC-V PKGBUILD repository pkgs from FelixOnMars]<br />
*#* Experiment with cross-compiling pkgs from RISC-V PKGBUILDs<br />
*#* Test building cross-compiled pkgs from all PKGBUILDs<br />
*#* Fix PKGBUILDs for which building fails<br />
*#* Add and commit patches for RISC-V PKGBUILDs to local repo<br />
*#* Push to origin (personal gitlab repo)<br />
*#* Send merge request upstream<br />
*# [https://github.com/archlinux-riscv-abandoned/archlinux-cross-bootstrap archlinux-cross-bootstrap]<br />
*#* Use a build tree separate from the git source tree (is this the default?)<br />
*#* Initially, test build of all 4 stages '''without''' creating build/.KEEP_GOING file to ignore build errors; read .MAKEPKGLOG in each build directory, and tee the output of the overall build process into a log file<br />
*#* Identify pkgs with build errors; debug and patch<br />
*#* Individually test rebuilding pkgs which failed with errors; correct patches<br />
*#* Add and commit patches to local git repository<br />
*#* Push to origin (personal gitlab repo)<br />
*#* Send merge request upstream<br />
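The "build every PKGBUILD, log the failures" pass above might look something like the following hypothetical sketch. It assumes the current directory holds one subdirectory per package (as in a checkout of the archriscv-packages repo); the log and list file names are my own choices, not upstream conventions.<br />

```shell
#!/bin/sh
# Hypothetical batch build: run makepkg in each package directory, keep a
# per-package log, and record failures for later debugging and patching.
set -u
: > failed-packages.txt
for dir in */; do
    [ -f "${dir}PKGBUILD" ] || continue
    name=${dir%/}
    if command -v makepkg >/dev/null 2>&1; then
        if ( cd "$dir" && makepkg --syncdeps --noconfirm ) > "$name.build.log" 2>&1; then
            echo "built: $name"
        else
            echo "$name" >> failed-packages.txt   # debug and patch these later
        fi
    else
        echo "would build: $name"
    fi
done
```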
<br />
== Basic thoughts and ideas for porting ==<br />
<br />
'''<u>Notes 2021/01/31</u>:'''<br />
<br />
Finally making some more progress:<br />
<br />
* Tested creating a Gentoo amd64/systemd chroot by unpacking a stage3 tarball into a target subdir, then pre-configuring, chrooting (via arch-chroot) into the target, and updating and configuring as needed the chroot system. This seems to work fine. Also installed a minimal X windows program, qemacs, inside the chroot and ran it after [[Chroot#Run_graphical_applications_from_chroot|allowing X windows programs access to the parent X server]].<br />
* Installed Arch base and base-devel into a target subdir via {{ic|pacstrap -c}}, then chrooted (via arch-chroot) into the target and configured the installation for timezone, locale and hostname.<br />
* Wrote, tested and debugged a few simple C programs on x86_64 in order to test compilation for the RISC-V riscv64-lp64d target. Then, after installing [https://www.archlinux.org/packages/community/x86_64/riscv64-linux-gnu-gcc/ riscv64-linux-gnu-gcc] to cross-compile, compiled and linked these to produce riscv64-lp64d binaries. After installing [https://www.archlinux.org/packages/extra/x86_64/qemu-arch-extra/ qemu-arch-extra], tested running them with {{ic|qemu-riscv64 -L /usr/riscv64-linux-gnu/}}.<br />
<br />
----<br />
<br />
'''<u>Additional info 2020/11/14</u>:'''<br />
<br />
Thanks to [https://bbs.archlinux.org/profile.php?id=84187 eschwartz], here are links to resources produced by [https://bbs.archlinux.org/profile.php?id=47848 FelixOnMars], who has tackled some RISC-V porting in his spare time:<br />
<br />
* [https://github.com/felixonmars/archriscv-packages Arch RISC-V Packages]<br />
* [https://archriscv.felixc.at/repo/ Arch RISC-V Repo]<br />
<br />
There's a lot of finished work he's done -- good stuff :) and one shouldn't be spending most of one's efforts reinventing the wheel ;)<br />
<br />
----<br />
<br />
'''<u>Revised 2020/11/13</u>:'''<br />
<br />
In essence, what I'm looking to do is:<br />
<br />
# Build RISC-V pkgs for Arch.<br />
# Build bare-metal RISC-V Arch disk images that can potentially be booted on physical hardware. Of course, device tree/drivers will have to be individually customized for each mainboard/SBC. Getting a disk image that works in KVM/QEMU would scratch my itch of getting most of the work done. Someone with more bare-metal hardware/driver experience may have to put the finishing touches on it.<br />
<br />
Hoping my understanding and itemized steps are more-or-less correct.... Methods to make porting progress:<br />
<br />
# So, if we want to build Arch RISC-V pkgs, we can simply cross-compile them. When a pkg won't build, we patch it until it does (assuming we can eventually get it working).<br />
#* Note that Vadim Kaushan (Disasm on github) has patched upstream [https://github.com/oaken-source/parabola-cross-bootstrap parabola-cross-bootstrap] from Andreas Grapentin (oaken-source on github) to create [https://github.com/archlinux-riscv-abandoned/archlinux-cross-bootstrap archlinux-cross-bootstrap]. The last commit was January, 2019. Some things may have b0rken but it's worth trying (and patching) this to cross compile base-devel pkgs to RISC-V. Then we can perhaps extend its coverage to more repository pkgs.<br />
# We can unpack a RISC-V stage tarball (or image). Then we can chroot into that tree (or use something really fancy like systemd-nspawn) and use QEMU user-mode emulation to run RISC-V binaries.<br />
#* Question: Will this even work? Answer: Yes! The host needs the extra and AUR pkgs [https://www.archlinux.org/packages/extra/x86_64/qemu-arch-extra/ qemu-arch-extra] and/or [https://aur.archlinux.org/packages/qemu-user-static/ qemu-user-static] (or [https://aur.archlinux.org/packages/qemu-user-static-bin/ qemu-user-static-bin]), and [https://aur.archlinux.org/packages/binfmt-qemu-static/ binfmt-qemu-static] (or [https://aur.archlinux.org/packages/binfmt-qemu-static-all-arch/ binfmt-qemu-static-all-arch]). Copy required binaries down into the chroot tree, and/or [[Binfmt_misc_for_Java#Registering_file_type_with_binfmt_misc|take care of binfmt_misc support]].<br />
#* The AUR pkg [https://aur.archlinux.org/packages/proot/ proot], [[PRoot|PRoot wiki page]], may be useful.<br />
#* Note that to run dynamically linked binaries the linker path must be passed; see [https://gist.github.com/Liryna/10710751 Running ARM Programs under Linux] (also listed below). Quote:<br />
#** If you want a dynamically-linked executable, you've to pass the linker path too:<br>arm-linux-gnueabihf-gcc -ohello hello.c<br>qemu-arm -L /usr/arm-linux-gnueabihf/ ./hello # or qemu-arm-static<br />
#* Some references:<br />
#** [[QEMU#Chrooting_into_arm/arm64_environment_from_x86_64|Chrooting into arm/arm64 environment from x86_64]]<br />
#** [https://ownyourbits.com/2018/06/13/transparently-running-binaries-from-any-architecture-in-linux-with-qemu-and-binfmt_misc/ Transparently running binaries from any architecture in Linux with QEMU and binfmt_misc]<br />
#** [https://unix.stackexchange.com/questions/41889/how-can-i-chroot-into-a-filesystem-with-a-different-architechture Stackexchange: Chroot into a filesystem with a different architecture]<br />
#** [https://dev.to/asacasa/how-to-set-up-binfmtmisc-for-qemu-the-hard-way-3bl4 How to set up binfmt_misc for qemu the hard way]<br />
#** [https://wiki.gentoo.org/wiki/Embedded_Handbook/General/Compiling_with_qemu_user_chroot Gentoo Embedded Handbook/Compiling with qemu user chroot]<br />
#** [https://wiki.debian.org/QemuUserEmulation Debian QEMU User Emulation]<br />
#** [https://gist.github.com/Liryna/10710751 Running ARM Programs under Linux]<br />
#** [[Creating_packages_for_other_distributions#Creating_Arch_packages_in_OBS_with_OSC|Creating Arch pkgs in openSUSE Open Build Service]]; hmmm, pkgs could be built automatically and remotely...<br />
#* I'm not an lxc/Docker/Kubernetes guy, but IIUC this would be roughly similar to running a RISC-V container (no?). This may come in handy to make sure binary packages work as expected, I suppose. But it doesn't seem like a better way to build RISC-V pkgs than cross-compiling, although we could build dev infrastructure then pkgs inside the container.<br />
# We can create a VM disk image (GPT, of course) and unpack inside it, e.g., [https://dev.gentoo.org/~dilfridge/stages/ a RISC-V Gentoo stage3] and complete installation and configuration. We also need to compile a RISC-V kernel and bbl (Berkeley Boot Loader from [https://github.com/riscv/riscv-pk the riscv-pk github project]) (some instructions [https://www.cnx-software.com/2018/03/16/how-to-run-linux-on-risc-v-with-qemu-emulator/ here] and [https://risc-v-getting-started-guide.readthedocs.io/en/latest/linux-qemu.html here]), or we may be able to download pre-built kernel/binaries/utilities. Then we can boot the image with KVM/QEMU.<br />
#* Also available are [https://wiki.debian.org/RISC-V Debian RISC-V], [https://fedoraproject.org/wiki/Architectures/RISC-V Fedora RISC-V] and [https://en.opensuse.org/openSUSE:RISC-V openSUSE RISC-V] images. In those wiki pages are some instructions for putting together bare-metal disk images.<br />
#* Also, there are ready-made QEMU images that one can download and run. See [https://wiki.debian.org/RISC-V#OS_.2F_filesystem_images Debian], [https://dl.fedoraproject.org/pub/alt/risc-v/repo/virt-builder-images/images/ Fedora], [http://fedora.riscv.rocks/koji/tasks?order=-completion_time&state=closed&view=flat&method=createAppliance additional Fedora], [https://en.altlinux.org/Regular/riscv64 AltLinux]. I guess this ready-made solution would be the ultimate in ease of use, but it doesn't solve the problem of building from scratch. But at least inside a running VM I could download and build dev infrastructure (pacman, et al) and then build pkgs as if I were running on bare metal. It might be only a little slower than running on an SBC?...<br />
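As a quick sanity check for the binfmt_misc route above, the following sketch probes the kernel's binfmt_misc directory. The handler names it tries (qemu-riscv64, riscv64) are assumptions about what the binfmt packages register, so adjust them to whatever {{ic|ls /proc/sys/fs/binfmt_misc}} actually shows.<br />

```shell
#!/bin/sh
# Probe binfmt_misc: is it mounted, and is a riscv64 handler registered?
# Handler names are assumptions; record the verdict either way.
d=/proc/sys/fs/binfmt_misc
if [ ! -r "$d/status" ]; then
    verdict="binfmt_misc not mounted"
elif [ -r "$d/qemu-riscv64" ] || [ -r "$d/riscv64" ]; then
    verdict="riscv64 handler registered"
else
    verdict="binfmt_misc mounted, but no riscv64 handler found"
fi
echo "$verdict" | tee binfmt-riscv64.status
```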
<br />
So, in the end, the best solutions seem to be:<br />
* Cross-compiling using existing scripting/tooling to build Arch pkgs. [https://github.com/archlinux-riscv-abandoned/archlinux-cross-bootstrap archlinux-cross-bootstrap] should build base-devel pkgs. Using its scripts with modified PKGBUILDs, it should be possible to build other repository pkgs.<br />
* To test cross-compiled pkgs, create a chroot to use via QEMU user-mode emulation (after setting up [https://aur.archlinux.org/packages/qemu-user-static/ qemu-user-static] and [[Binfmt_misc_for_Java#Registering_the_file_type_with_binfmt_misc|binfmt_misc]]; note [https://aur.archlinux.org/packages/proot/ proot]). Follow the usual installation instructions using pacstrap, etc., to install RISC-V pkgs inside the chroot, then complete installation and configuration.<br />
** Note that this creates a running RISC-V Arch system.... Tinker with it to your heart's content. Then you could make an archive (tar) snapshot of it and use this as the basis of a bare-metal VM image :)<br />
** However, since these cross-compiled pkgs will contain dynamically linked binaries, I'm not sure ATM how binfmt_misc will work to run them with the "-L" flag to reference the RISC-V linker path? Although, once inside the chroot will user-mode QEMU just use the .so libraries inside the chroot?....<br />
* For final image building, create a VM disk image, follow the usual installation instructions to install an Arch RISC-V system inside the image, configure it just like a bare-metal installation, and see if it works :)<br />
* When running the VM disk image like bare metal, the bootloader inside the image will need to work to boot the kernel. Otherwise, the bootloader and kernel used to boot and run the VM will have to be exterior to the disk image. We want to create a turnkey disk image including bootloading....<br />
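One way to investigate the "-L" question above: the linker path a dynamically linked binary wants is recorded in its ELF PT_INTERP header, which readelf can show. For a riscv64 cross build this should name something like /lib/ld-linux-riscv64-lp64d.so.1 (the exact name is an assumption), which is why qemu-riscv64 needs -L on the host.<br />

```shell
#!/bin/sh
# Show which dynamic linker a binary requests (PT_INTERP). Substitute a
# cross-built riscv64 binary; /bin/sh is just a host-side example.
bin=${1:-/bin/sh}
if command -v readelf >/dev/null 2>&1; then
    readelf -l "$bin" | grep -i "program interpreter" > interp.txt \
        || echo "$bin has no PT_INTERP (statically linked)" > interp.txt
else
    echo "readelf (binutils) not installed" > interp.txt
fi
cat interp.txt
```

Once inside a chroot whose root filesystem is the RISC-V tree, that interpreter path should resolve against the chroot's own /lib, so user-mode QEMU registered via binfmt_misc would not need -L there.<br />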
<br />
----<br />
<br />
'''<u>Revised 2020/11/11</u>:'''<br />
<br />
Thanks to [https://bbs.archlinux.org/profile.php?id=36741 Awebb] for helping to distill disorganized ideas.<br />
<br />
Perhaps the best way to make progress on porting to RISC-V (mainboard and embedded/SBC) is simply to build Arch packages. The bootstrapping and embedded intricacies can be dealt with later. Plus, building packages can be done bit by bit when time is available. So....<br />
<br />
There are two ways to build packages, I believe:<br />
<br />
# Cross-compile using available [[#ArchTools|Arch RISC-V tooling]] (see below)<br />
# Install an existing RISC-V system in a KVM/QEMU VM<br />
<br />
Frankly, the VM method seems the easiest (and laziest) way. (Update: Probably not as easy/lazy as I was thinking.) There are [https://dev.gentoo.org/~dilfridge/stages/ Gentoo stage3 tarballs] so, assuming RISC-V support in KVM/QEMU is generally bug-free, setting up a RISC-V Gentoo VM should be a mechanical process. After this, Arch tools would be compiled inside the VM, and then base, base-devel and then core pkgs would be built. (For related information, see [https://wiki.gentoo.org/wiki/Raspberry_Pi the Gentoo 32-bit RaspberryPi page] and [https://wiki.gentoo.org/wiki/Raspberry_Pi_3_64_bit_Install the Gentoo 64-bit RaspberryPi page].)<br />
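For the VM method, the full-system boot might take the following shape, written out here as a generated launcher script. The flags follow the QEMU 'virt' machine examples in the guides linked below; bbl and riscv-disk.img are placeholders for the kernel payload and disk image that would first have to be built or downloaded.<br />

```shell
#!/bin/sh
# Generate a launcher for a riscv64 'virt' machine VM. File names in the
# launcher (bbl, riscv-disk.img) are placeholders, not real artifacts.
cat > boot-riscv-vm.sh <<'EOF'
#!/bin/sh
exec qemu-system-riscv64 \
    -machine virt -smp 4 -m 2G -nographic \
    -kernel bbl \
    -append "root=/dev/vda ro console=ttyS0" \
    -drive file=riscv-disk.img,format=raw,id=hd0 \
    -device virtio-blk-device,drive=hd0
EOF
chmod +x boot-riscv-vm.sh
echo "wrote boot-riscv-vm.sh"
```

With the kernel payload and image in place, {{ic|./boot-riscv-vm.sh}} should bring the guest up on the serial console.<br />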
<br />
[[#RISC-V-QEMU|See links below]] for documentation and instructions.<br />
<br />
----<br />
<br />
'''<u>Original text</u>:'''<br />
<br />
According to the initial press release on the above '''<u>α</u>''' hardware target/SoC board, this "Allwinner RISC-V processor will run the Debian Linux operating system (Tina OS)." Provided that "Tina OS" licensing is "clean," one should be able to boot the supplied Debian OS and use that to develop an Arch bootstrap, or perhaps an entire port?<br />
<br />
IIUC, that should include the bootstrapping and device tree stuff along with device drivers for on-board equipment. (Drivers for optional devices and for devices on available modules/daughter cards will need to be developed, too.)<br />
<br />
After that, base and base-devel packages need to be built. Then building packages can be done on bare metal. However, it would obviously be faster and easier to build them using the available RISC-V toolchain in Arch. Right? (I wonder how [https://archlinuxarm.org/ ArchLinuxArm] handles building packages for and maintaining their repositories, as well as their AUR? Note to self: Research ALARM infrastructure....)<br />
<br />
Topics to be researched and written up or references pointed to:<br />
<br />
* HOWTO bootstrap a new port from ground zero -- See references below<br />
* How the bootstrapping process is booted/initiated<br />
* How device trees work<br />
* How device drivers work in bootstrapping, e.g., via initrd image<br />
* Cross-building Arch packages for RISC-V<br />
* Infra for a new port's repositories (CB/CI tools, repository hosting, front end w/hosting, etc.)<br />
* Infra for a new port's AUR<br />
* Possibility of free/open (non-binary blob) bootstrapping image/BIOS (similar to [https://www.coreboot.org/ coreboot])<br />
<br />
== Necessary tools for porting ==<br />
<br />
=== <span id="RISC-V-QEMU"></span>KVM/QEMU RISC-V emulation documentation, information and instructions ===<br />
<br />
[https://www.archlinux.org/packages/extra/x86_64/qemu-arch-extra/ qemu-arch-extra pkg] required for RISC-V (and other architecture) full-system emulation<br />
<br />
[https://aur.archlinux.org/packages/qemu-user-static/ qemu-user-static] (or <br />
[https://aur.archlinux.org/packages/qemu-user-static-bin/ qemu-user-static-bin]) for running QEMU user-mode emulation inside a chroot; [https://aur.archlinux.org/packages/binfmt-qemu-static/ binfmt-qemu-static] (or [https://aur.archlinux.org/packages/binfmt-qemu-static-all-arch/ binfmt-qemu-static-all-arch]) to register the binfmt_misc handlers for them<br />
<br />
[[QEMU#Installation|Arch QEMU pkgs and variants from repository and AUR]]<br />
<br />
[https://risc-v-getting-started-guide.readthedocs.io/en/latest/linux-qemu.html RISC-V Linux on QEMU Getting Started Guide]<br />
<br />
[https://wiki.qemu.org/Documentation/Platforms/RISCV QEMU RISC-V Documentation]<br />
<br />
(See also: [https://www.sifive.com/blog/risc-v-qemu-part-2-the-risc-v-qemu-port-is-upstream SiFive RISC-V QEMU Upstream Announcement])<br />
<br />
[https://www.cnx-software.com/2018/03/16/how-to-run-linux-on-risc-v-with-qemu-emulator/ How To Run Linux on RISC-V with QEMU Emulator]<br />
<br />
=== <span id="ArchTools"></span>Tools available under Arch ===<br />
<br />
==== Tools from repositories ====<br />
<br />
[https://www.archlinux.org/groups/x86_64/risc-v/ RISC-V group]<br />
<br />
[https://www.archlinux.org/packages/?sort=&q=riscv&maintainer=&flagged= Results of "riscv" keyword search]<br />
<br />
==== Tools from AUR ====<br />
<br />
[https://aur.archlinux.org/packages/?K=RISCV&SB=p Results of AUR "RISCV" keyword search]<br />
<br />
[https://aur.archlinux.org/packages/?O=0&SeB=nd&K=risc-v&outdated=&SB=n&SO=a&PP=50&do_Search=Go Results of AUR "risc-v" keyword search]<br />
<br />
=== Tools from other dists ===<br />
<br />
Obsolete: [https://dev.gentoo.org/~dilfridge/stages/ Experimental Gentoo stages]<br />
<br />
[https://www.gentoo.org/downloads/#riscv Gentoo RISC-V stages]<br />
<br />
[https://wiki.debian.org/RISC-V#OS_.2F_filesystem_images Debian images]<br />
<br />
[https://dl.fedoraproject.org/pub/alt/risc-v/repo/virt-builder-images/images/ Fedora images]<br />
<br />
[http://fedora.riscv.rocks/koji/tasks?order=-completion_time&state=closed&view=flat&method=createAppliance Additional Fedora images]<br />
<br />
[https://en.altlinux.org/Regular/riscv64 AltLinux RISC-V64 port]<br />
<br />
=== Non-Arch tools ===<br />
<br />
[http://crosstool-ng.github.io/ Crosstool-ng]<br />
<br />
[https://buildroot.org/ Buildroot]<br />
<br />
[https://trac.clfs.org/ Cross-Linux From Scratch] (this project seems to be dormant...)<br />
<br />
== As-is experimental RISC-V pkgs from work done by FelixOnMars ==<br />
<br />
[https://github.com/felixonmars/archriscv-packages Arch RISC-V Packages]<br />
<br />
[https://archriscv.felixc.at/repo/ Arch RISC-V Repo]<br />
<br />
== References -- '''PLEASE''' add to and curate this list! ==<br />
<br />
[https://readthedocs.org/projects/risc-v-getting-started-guide/downloads/pdf/latest/ RISC-V Getting Started Guide]<br />
<br />
https://github.com/archlinux-riscv-abandoned/archlinux-cross-bootstrap<br />
<br />
https://bbs.archlinux.org/viewtopic.php?id=237370<br />
<br />
https://bbs.archlinux.org/viewtopic.php?id=260639<br />
<br />
https://five-embeddev.com/toolchain/2019/06/26/gcc-targets/<br />
<br />
https://wiki.qemu.org/Documentation/Platforms/RISCV<br />
<br />
https://wiki.gentoo.org/wiki/Project:RISC-V<br />
<br />
https://wiki.gentoo.org/wiki/Raspberry_Pi (for embedded build and cross-compiling info)<br />
<br />
https://wiki.gentoo.org/wiki/Raspberry_Pi_3_64_bit_Install<br />
<br />
https://wiki.archlinux.org/index.php/Cross-compiling_tools_package_guidelines<br />
<br />
https://github.com/crosstool-ng/crosstool-ng<br />
<br />
https://git.busybox.net/buildroot<br />
<br />
https://github.com/cross-lfs<br />
<br />
https://github.com/michaeljclark/busybear-linux<br />
<br />
https://github.com/janvrany/riscv-debian<br />
<br />
(http://dl.sipeed.com/shareURL/MAIX/K210_Linux/Firmware)<br />
<br />
== Step-by-step "HOWTO" for porting ==<br />
<br />
== Detailed instructions and explanations ==<br />
<br />
== Issues addressed, with work-arounds ==<br />
<br />
== Bug report list -- Please make notes during triage, and move to above "Issues addressed" list when "solved" ==</div>Cmsiglerhttps://wiki.archlinux.org/index.php?title=User:Cmsigler/RISC-V&diff=652090User:Cmsigler/RISC-V2021-02-12T18:35:21Z<p>Cmsigler: Add commands, info for compiling RISC-V images -static</p>
<hr />
<div>'''Working Document: Planning to Port Arch to RISC-V -- please edit :)'''<br />
*#**# Optionally, set {{ic|/etc/conf.d/hostname}}; add 'hostname<nowiki>=</nowiki>"system_name"'<br />
*#**# Exit chroot environment {{bc|# exit}}<br />
*#** Chroot (or systemd-nspawn) into subdirectory and test normal administrative operations, e.g., {{bc|# emerge --sync}} {{bc|# emerge --ask --verbose --update --deep --newuse @world}}<br />
*#* Arch:<br />
*#** Create a subdirectory for the chroot installation; install base, base-devel and a basic editor {{bc|$ sudo pacstrap -c ./subdir/ base base-devel nano}}<br />
*#** Configure:<br />
*#**# Copy DNS info into {{ic|./subdir/etc/resolv.conf}}<br />
*#**# Create bind mountpoint for subdir {{bc|$ sudo mount --bind ./subdir/ ./subdir/}}<br />
*#**# Enter chroot environment {{bc|$ sudo arch-chroot ./subdir/}}<br />
*#**# {{bc|# source /etc/profile}}<br />
*#**# Optionally, update default prompt {{bc|# export PS1<nowiki>=</nowiki>"(chroot) $<nowiki>{</nowiki>PS1<nowiki>}</nowiki>"}}<br />
*#**# Configure timezone {{bc|# ln -sf /usr/share/zoneinfo/Region/Location /etc/localtime}}<br />
*#**# Configure locales {{bc|# nano -w /etc/locale.gen}} Uncomment desired locales, e.g., 'en_US.UTF-8 UTF-8', 'en_US ISO-8859-1' {{bc|# locale-gen}} {{bc|# nano -w /etc/locale.conf}} Add, e.g., 'LANG<nowiki>=</nowiki>"en_US.UTF-8"' and 'LC_COLLATE<nowiki>=</nowiki>"C"'<br />
*#**# Optionally, update system {{bc|# pacman -Syu}}<br />
*#**# Optionally, set {{ic|/etc/hostname}}; add just the bare hostname, e.g., 'system_name'<br />
*#**# Exit chroot environment {{bc|# exit}}<br />
*#** Chroot (or systemd-nspawn) into subdirectory and test normal administrative operations, e.g., {{bc|# pacman -Syu}}<br />
*# Test running rv64_lp64d/riscv64 installations via chroot (or systemd-nspawn) using same installation and configuration procedures<br />
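The Arch leg of the chroot procedure above can be condensed into a small script. This is a dry-run sketch, assuming the ./subdir path and package set from the steps above; pacstrap, mount and arch-chroot need root, so nothing executes unless EXECUTE=1:

```shell
#!/bin/sh
# Dry-run sketch of the Arch chroot setup steps above; set EXECUTE=1 and run
# as root to actually perform them. By default each command is only printed.
EXECUTE=${EXECUTE:-0}
run() {
    if [ "$EXECUTE" = 1 ]; then "$@"; else echo "would run: $*"; fi
}

subdir=./subdir

run pacstrap -c "$subdir" base base-devel nano         # base system plus a basic editor
run cp /etc/resolv.conf "$subdir/etc/resolv.conf"      # copy DNS info into the chroot
run mount --bind "$subdir" "$subdir"                   # bind mountpoint for arch-chroot
run arch-chroot "$subdir" ln -sf /usr/share/zoneinfo/Region/Location /etc/localtime
run arch-chroot "$subdir" locale-gen                   # assumes etc/locale.gen inside the chroot was edited first
run arch-chroot "$subdir" pacman -Syu                  # optional full update
```

The run() wrapper makes it safe to eyeball the exact command sequence before committing to it.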
<br />
* RISC-V binary tests and demos<br />
*# Test cross-compiling, linking RISC-V (riscv64-lp64d) C programs<br />
*## Test building, executing on host system<br />
*##* Install [https://www.archlinux.org/packages/community/x86_64/riscv64-linux-gnu-gcc/ riscv64-linux-gnu-gcc], [https://www.archlinux.org/packages/extra/x86_64/qemu-arch-extra/ qemu-arch-extra], [https://aur.archlinux.org/packages/qemu-user-static-bin/ qemu-user-static-bin], [https://aur.archlinux.org/packages/binfmt-qemu-static-all-arch/ binfmt-qemu-static-all-arch]<br />
*##* Write hello_world.c, other test programs<br />
*##* Compile: {{bc|$ riscv64-linux-gnu-gcc -c -Wall -o hello_world-riscv64.o hello_world.c}}<br />
*##* Link: {{bc|$ riscv64-linux-gnu-gcc -o hello_world-riscv64 hello_world-riscv64.o}} {{bc|$ riscv64-linux-gnu-gcc -static -o hello_world-riscv64-static hello_world-riscv64.o}}<br />
*##* Run: {{bc|$ qemu-riscv64 -L /usr/riscv64-linux-gnu/ ./hello_world-riscv64}} {{bc|$ ./hello_world-riscv64-static}}<br />
*## Test building, executing under RISC-V system chroot<br />
*##* Install, configure Gentoo (later, Arch) RISC-V system under chroot subdirectory<br />
*##* Copy source files and cross-compiled RISC-V binaries from host system into chroot<br />
*##* Enter RISC-V chroot<br />
*##* Compile, link source files inside chroot using native gcc<br />
*##* Test running both host cross-compiled and chroot native-compiled binaries inside chroot<br />
*## Any additional tests, experiments<br />
*# [https://archriscv.felixc.at/repo/ Repository of pre-built RISC-V pkgs from FelixOnMars]<br />
*#* Configure binfmt-qemu-static to run RISC-V binaries as native programs<br />
*#* Following the Arch installation guide, use pacstrap to install base pkgs into risc-v subtree<br />
*#* Run arch-chroot to run subtree as a RISC-V container<br />
*#* Test running RISC-V binaries from coreutils<br />
*#* Test running other binaries, and installing other RISC-V pkgs from this repository<br />
*#* Test using systemd-nspawn to run subtree as a RISC-V container<br />
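The compile/link/run cycle above can be wrapped in a few helper functions. This sketch only assembles the command strings, so it can be inspected without the cross toolchain installed; the invocations mirror the riscv64-linux-gnu-gcc and qemu-riscv64 commands listed above:

```shell
#!/bin/sh
# Sketch: assemble the cross-compile/link/run commands used above.
cross=riscv64-linux-gnu-gcc
sysroot=/usr/riscv64-linux-gnu/    # holds the RISC-V dynamic linker and shared libs

compile_cmd()     { echo "$cross -c -Wall -o ${1%.c}-riscv64.o $1"; }
link_cmd()        { echo "$cross -o ${1%.c}-riscv64 ${1%.c}-riscv64.o"; }
static_link_cmd() { echo "$cross -static -o ${1%.c}-riscv64-static ${1%.c}-riscv64.o"; }
run_cmd()         { echo "qemu-riscv64 -L $sysroot ./${1%.c}-riscv64"; }  # -L only needed for dynamic binaries

for step in compile_cmd link_cmd static_link_cmd run_cmd; do
    "$step" hello_world.c
done
```

Piping the printed lines to sh reproduces the manual steps once the pkgs listed above are installed.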
<br />
* Cross-bootstrapping to build basic RISC-V system<br />
*# [https://github.com/felixonmars/archriscv-packages RISC-V PKGBUILD repository pkgs from FelixOnMars]<br />
*#* Experiment with cross-compiling pkgs from RISC-V PKGBUILDs<br />
*#* Test building cross-compiled pkgs from all PKGBUILDs<br />
*#* Fix PKGBUILDs for which building fails<br />
*#* Add and commit patches for RISC-V PKGBUILDs to local repo<br />
*#* Push to origin (personal gitlab repo)<br />
*#* Send merge request upstream<br />
*# [https://github.com/archlinux-riscv/archlinux-cross-bootstrap archlinux-cross-bootstrap]<br />
*#* Use a build tree separate from the git source tree (is this the default?)<br />
*#* Initially, test build of all 4 stages '''without''' creating build/.KEEP_GOING file to ignore build errors; read .MAKEPKGLOG in each build directory, as well as teeing output of overall build process into a log file<br />
*#* Identify pkgs with build errors; debug and patch<br />
*#* Individually test rebuilding pkgs which failed with errors; correct patches<br />
*#* Add and commit patches to local git repository<br />
*#* Push to origin (personal gitlab repo)<br />
*#* Send merge request upstream<br />
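The "identify pkgs with build errors" step above can be partly automated. This is a hypothetical helper (the function name and the 'ERROR:' marker are my assumptions; the .MAKEPKGLOG filename comes from the archlinux-cross-bootstrap notes above) that reports which per-package build directories contain a failing log:

```shell
#!/bin/sh
# Hypothetical triage helper: scan per-package build directories for a
# .MAKEPKGLOG and print the names of packages whose log contains an error
# marker (assumes makepkg-style "==> ERROR:" lines), so failed builds can be
# retried individually after patching.
list_failed_builds() {
    builddir="$1"
    for log in "$builddir"/*/.MAKEPKGLOG; do
        [ -f "$log" ] || continue
        if grep -q 'ERROR:' "$log"; then
            basename "$(dirname "$log")"
        fi
    done
}
```

Run as, e.g., `list_failed_builds ./build` after a full (KEEP_GOING) pass to get the list of packages needing patches.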
<br />
== Basic thoughts and ideas for porting ==<br />
<br />
'''<u>Notes 2021/01/31</u>:'''<br />
<br />
Finally making some more progress:<br />
<br />
* Tested creating a Gentoo amd64/systemd chroot by unpacking a stage3 tarball into a target subdir, then pre-configuring, chrooting (via arch-chroot) into the target, and updating and configuring as needed the chroot system. This seems to work fine. Also installed a minimal X windows program, qemacs, inside the chroot and ran it after [[Chroot#Run_graphical_applications_from_chroot|allowing X windows programs access to the parent X server]].<br />
* Installed Arch base and base-devel into a target subdir via {{ic|pacstrap -c}}, then chrooted (via arch-chroot) into the target and configured the installation for timezone, locale and hostname.<br />
* Wrote, tested and debugged a few simple C programs on x86_64 in order to test compilation for the RISC-V riscv64-lp64d target. Then, after installing [https://www.archlinux.org/packages/community/x86_64/riscv64-linux-gnu-gcc/ riscv64-linux-gnu-gcc] to cross-compile, compiled and linked these to produce riscv64-lp64d binaries. After installing [https://www.archlinux.org/packages/extra/x86_64/qemu-arch-extra/ qemu-arch-extra], tested running them with {{ic|qemu-riscv64 -L /usr/riscv64-linux-gnu/}}.<br />
<br />
----<br />
<br />
'''<u>Additional info 2020/11/14</u>:'''<br />
<br />
Thanks to [https://bbs.archlinux.org/profile.php?id=84187 eschwartz], here are links to resources produced by [https://bbs.archlinux.org/profile.php?id=47848 FelixOnMars], who has tackled some RISC-V porting in his spare time:<br />
<br />
* [https://github.com/felixonmars/archriscv-packages Arch RISC-V Packages]<br />
* [https://archriscv.felixc.at/repo/ Arch RISC-V Repo]<br />
<br />
There's a lot of finished work here -- good stuff :) -- and one shouldn't spend most of one's effort reinventing the wheel ;)<br />
<br />
----<br />
<br />
'''<u>Revised 2020/11/13</u>:'''<br />
<br />
In essence, what I'm looking to do is:<br />
<br />
# Build RISC-V pkgs for Arch.<br />
# Build bare-metal RISC-V Arch disk images that can potentially be booted on physical hardware. Of course, device tree/drivers will have to be individually customized for each mainboard/SBC. Getting a disk image that works in KVM/QEMU would scratch my itch of getting most of the work done. Someone with more bare-metal hardware/driver experience may have to put the finishing touches on it.<br />
<br />
Hoping my understanding and itemized steps are more-or-less correct.... Methods to make porting progress:<br />
<br />
# So, if we want to build Arch RISC-V pkgs, we can simply cross-compile them. When a pkg won't build, we patch it until it does (assuming we can eventually get it working).<br />
#* Note that Vadim Kaushan (Disasm on github) has patched upstream [https://github.com/oaken-source/parabola-cross-bootstrap parabola-cross-bootstrap] from Andreas Grapentin (oaken-source on github) to create [https://github.com/archlinux-riscv/archlinux-cross-bootstrap archlinux-cross-bootstrap]. The last commit was January, 2019. Some things may have broken since then, but it's worth trying (and patching) this to cross-compile base-devel pkgs to RISC-V. Then we can perhaps extend its coverage to more repository pkgs.<br />
# We can unpack a RISC-V stage tarball (or image). Then we can chroot, or use something really fancy like systemd-nspawn, into that tree and use QEMU user-mode emulation to run RISC-V binaries.<br />
#* Question: Will this even work? Answer: Yes! Host needs extra, AUR pkgs [https://www.archlinux.org/packages/extra/x86_64/qemu-arch-extra/ qemu-arch-extra] and/or [https://aur.archlinux.org/packages/qemu-user-static/ qemu-user-static] (or [https://aur.archlinux.org/packages/qemu-user-static-bin/ qemu-user-static-bin]), and [https://aur.archlinux.org/packages/binfmt-qemu-static/ binfmt-qemu-static] (or [https://aur.archlinux.org/packages/binfmt-qemu-static-all-arch/ binfmt-qemu-static-all-arch]). Copy required binaries down into the chroot tree, and/or [[Binfmt_misc_for_Java#Registering_file_type_with_binfmt_misc|take care of binfmt_misc support]].<br />
#* The AUR pkg [https://aur.archlinux.org/packages/proot/ proot], [[PRoot|PRoot wiki page]], may be useful.<br />
#* Note that to run dynamically linked binaries the linker path must be passed; see [https://gist.github.com/Liryna/10710751 Running ARM Programs under Linux] (also listed below). Quote:<br />
#** If you want a dynamically-linked executable, you've to pass the linker path too:<br>arm-linux-gnueabihf-gcc -ohello hello.c<br>qemu-arm -L /usr/arm-linux-gnueabihf/ ./hello # or qemu-arm-static<br />
#* Some references:<br />
#** [[QEMU#Chrooting_into_arm/arm64_environment_from_x86_64|Chrooting into arm/arm64 environment from x86_64]]<br />
#** [https://ownyourbits.com/2018/06/13/transparently-running-binaries-from-any-architecture-in-linux-with-qemu-and-binfmt_misc/ Transparently running binaries from any architecture in Linux with QEMU and binfmt_misc]<br />
#** [https://unix.stackexchange.com/questions/41889/how-can-i-chroot-into-a-filesystem-with-a-different-architechture Stackexchange: Chroot into a filesystem with a different architecture]<br />
#** [https://dev.to/asacasa/how-to-set-up-binfmtmisc-for-qemu-the-hard-way-3bl4 How to set up binfmt_misc for qemu the hard way]<br />
#** [https://wiki.gentoo.org/wiki/Embedded_Handbook/General/Compiling_with_qemu_user_chroot Gentoo Embedded Handbook/Compiling with qemu user chroot]<br />
#** [https://wiki.debian.org/QemuUserEmulation Debian QEMU User Emulation]<br />
#** [https://gist.github.com/Liryna/10710751 Running ARM Programs under Linux]<br />
#** [[Creating_packages_for_other_distributions#Creating_Arch_packages_in_OBS_with_OSC|Creating Arch pkgs in openSUSE Open Build Service]]; hmmm, pkgs could be built automatically and remotely...<br />
#* I'm not an lxc/Docker/Kubernetes guy, but IIUC this would be roughly similar to running a RISC-V container (no?). This may come in handy to make sure binary packages work as expected, I suppose. But it doesn't seem like a better way to build RISC-V pkgs than cross-compiling, although we could build dev infrastructure then pkgs inside the container.<br />
# We can create a VM disk image (GPT, of course) and unpack inside it, e.g., [https://dev.gentoo.org/~dilfridge/stages/ a RISC-V Gentoo stage3] and complete installation and configuration. We also need to compile a RISC-V kernel and bbl (Berkeley Boot Loader from [https://github.com/riscv/riscv-pk the riscv-pk github project]) (some instructions [https://www.cnx-software.com/2018/03/16/how-to-run-linux-on-risc-v-with-qemu-emulator/ here] and [https://risc-v-getting-started-guide.readthedocs.io/en/latest/linux-qemu.html here]), or we may be able to download pre-built kernel/binaries/utilities. Then we can boot the image with KVM/QEMU.<br />
#* Also available are [https://wiki.debian.org/RISC-V Debian RISC-V], [https://fedoraproject.org/wiki/Architectures/RISC-V Fedora RISC-V] and [https://en.opensuse.org/openSUSE:RISC-V openSUSE RISC-V] images. In those wiki pages are some instructions for putting together bare-metal disk images.<br />
#* Also, there are ready-made QEMU images that one can download and run. See [https://wiki.debian.org/RISC-V#OS_.2F_filesystem_images Debian], [https://dl.fedoraproject.org/pub/alt/risc-v/repo/virt-builder-images/images/ Fedora], [http://fedora.riscv.rocks/koji/tasks?order=-completion_time&state=closed&view=flat&method=createAppliance additional Fedora], [https://en.altlinux.org/Regular/riscv64 AltLinux]. I guess this ready-made solution would be the ultimate in ease of use, but it doesn't solve the problem of building from scratch. But at least inside a running VM I could download and build dev infrastructure (pacman, et al.) and then build pkgs as if I were running on bare metal. It might be only a little slower than running on an SBC?...<br />
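The binfmt-qemu-static packages in method 2 boil down to registering one rule per architecture with the kernel. This is a sketch of the riscv64 rule, with magic/mask as generated by QEMU's qemu-binfmt-conf.sh (little-endian ELF64, e_machine 0xf3 = EM_RISCV); the trailing F flag makes the kernel open the static interpreter at registration time, which is what lets the rule keep working inside a chroot:

```shell
#!/bin/sh
# Sketch: the binfmt_misc rule that binfmt-qemu-static(-all-arch) installs for
# riscv64 binaries. The kernel's binfmt_misc parser itself understands the \x
# hex escapes, so the backslashes stay literal here.
interp=/usr/bin/qemu-riscv64-static   # from qemu-user-static(-bin)
magic='\x7fELF\x02\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\xf3\x00'
mask='\xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff'
rule=":riscv64:M::${magic}:${mask}:${interp}:F"   # F: open the interpreter now, not at exec time

printf '%s\n' "$rule"
# As root, with binfmt_misc mounted at /proc/sys/fs/binfmt_misc:
#   printf '%s\n' "$rule" > /proc/sys/fs/binfmt_misc/register
```

The mask's \xfe byte over e_type matches both ET_EXEC and ET_DYN (PIE) executables.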
<br />
So, in the end, the best solutions seem to be:<br />
* Cross-compiling using existing scripting/tooling to build Arch pkgs. [https://github.com/archlinux-riscv/archlinux-cross-bootstrap archlinux-cross-bootstrap] should build base-devel pkgs. Using its scripts with modified PKGBUILDs, it should be possible to build other repository pkgs.<br />
* To test cross-compiled pkgs, create a chroot to use via QEMU user-mode emulation (after setting up [https://aur.archlinux.org/packages/qemu-user-static/ qemu-user-static] and [[Binfmt_misc_for_Java#Registering_the_file_type_with_binfmt_misc|binfmt_misc]]; note [https://aur.archlinux.org/packages/proot/ proot]). Follow the usual installation instructions using pacstrap, etc., to install RISC-V pkgs inside the chroot, then complete installation and configuration.<br />
** Note that this creates a running RISC-V Arch system.... Tinker with it to your heart's content. Then you could make an archive (tar) snapshot of it and use this as the basis of a bare-metal VM image :)<br />
** However, since these cross-compiled pkgs will contain dynamically linked binaries, I'm not sure at the moment whether binfmt_misc needs the equivalent of the "-L" flag to find the RISC-V linker path. Then again, once inside the chroot, the RISC-V dynamic linker and .so libraries sit at their native paths, so user-mode QEMU should resolve them without "-L"....<br />
* For final image building, create a VM disk image, follow the usual installation instructions to install an Arch RISC-V system inside the image, configure it just like a bare-metal installation, and see if it works :)<br />
* When running the VM disk image like bare metal, the boot chain inside the image will need to work; otherwise, the bootloader and kernel used to start the VM have to live outside the disk image. We want to create a turnkey disk image that includes its own bootloading....<br />
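For the final-image bullet, "see if it works" means booting the image under full-system emulation. This is a sketch of such an invocation, with flags along the lines of the RISC-V getting-started guide referenced elsewhere on this page; the image and kernel filenames are placeholders, and recent QEMU ships OpenSBI firmware by default, so a separately built bbl may not be required:

```shell
#!/bin/sh
# Sketch: boot a RISC-V disk image with qemu-system-riscv64 (qemu-arch-extra).
# Placeholder filenames; a default OpenSBI -bios is assumed.
img=arch-riscv64.img   # hypothetical disk image
kernel=Image           # hypothetical RISC-V kernel image
cmd="qemu-system-riscv64 -machine virt -smp 4 -m 2G -nographic \
 -kernel $kernel -append 'root=/dev/vda rw console=ttyS0' \
 -drive file=$img,format=raw,id=hd0 -device virtio-blk-device,drive=hd0 \
 -netdev user,id=net0 -device virtio-net-device,netdev=net0"
echo "would run: $cmd"
```

Once a turnkey image exists, the -kernel/-append pair would ideally be replaced by a bootloader inside the image itself.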
<br />
----<br />
<br />
'''<u>Revised 2020/11/11</u>:'''<br />
<br />
Thanks to [https://bbs.archlinux.org/profile.php?id=36741 Awebb] for helping to distill disorganized ideas.<br />
<br />
Perhaps the best way to make progress on porting to RISC-V (mainboard and embedded/SBC) is simply to build Arch packages. The bootstrapping and embedded intricacies can be dealt with later. Plus, building packages can be done bit by bit when time is available. So....<br />
<br />
There are two ways to build packages, I believe:<br />
<br />
# Cross-compile using available [[#ArchTools|Arch RISC-V tooling]] (see below)<br />
# Install an existing RISC-V system in a KVM/QEMU VM<br />
<br />
Frankly, the VM method seems the easiest (and laziest) way. (Update: Probably not as easy/lazy as I was thinking.) There are [https://dev.gentoo.org/~dilfridge/stages/ Gentoo stage3 tarballs] so, assuming RISC-V support in KVM/QEMU is generally bug-free, setting up a RISC-V Gentoo VM should be a mechanical process. After this, Arch tools would be compiled inside the VM, and then base, base-devel and then core pkgs would be built. (For related information, see [https://wiki.gentoo.org/wiki/Raspberry_Pi the Gentoo 32-bit RaspberryPi page] and [https://wiki.gentoo.org/wiki/Raspberry_Pi_3_64_bit_Install the Gentoo 64-bit RaspberryPi page].)<br />
<br />
[[#RISC-V-QEMU|See links below]] for documentation and instructions.<br />
<br />
----<br />
<br />
'''<u>Original text</u>:'''<br />
<br />
According to the initial press release on the above '''<u>α</u>''' hardware target/SoC board, this "Allwinner RISC-V processor will run the Debian Linux operating system (Tina OS)." Provided that "Tina OS" licensing is "clean," one should be able to boot the supplied Debian OS and use that to develop an Arch bootstrap, or perhaps an entire port?<br />
<br />
IIUC, that should include the bootstrapping and device tree stuff along with device drivers for on-board equipment. (Drivers for optional devices and for devices on available modules/daughter cards will need to be developed, too.)<br />
<br />
After that, base and base-devel packages need to be built. Then building packages can be done on bare metal. However, it would obviously be faster and easier to build them using the available RISC-V toolchain in Arch. Right? (I wonder how [https://archlinuxarm.org/ ArchLinuxArm] handles building packages for and maintaining their repositories, as well as their AUR? Note to self: Research ALARM infrastructure....)<br />
<br />
Topics to be researched and written up or references pointed to:<br />
<br />
* HOWTO bootstrap a new port from ground zero -- See references below<br />
* How the bootstrapping process is booted/initiated<br />
* How device trees work<br />
* How device drivers work in bootstrapping, e.g., via initrd image<br />
* Cross-building Arch packages for RISC-V<br />
* Infra for a new port's repositories (CB/CI tools, repository hosting, front end w/hosting, etc.)<br />
* Infra for a new port's AUR<br />
* Possibility of free/open (non-binary blob) bootstrapping image/BIOS (similar to [https://www.coreboot.org/ coreboot])<br />
<br />
== Necessary tools for porting ==<br />
<br />
=== <span id="RISC-V-QEMU"></span>KVM/QEMU RISC-V emulation documentation, information and instructions ===<br />
<br />
[https://www.archlinux.org/packages/extra/x86_64/qemu-arch-extra/ qemu-arch-extra pkg] required for RISC-V (and other architectures) full-system emulation<br />
<br />
[https://aur.archlinux.org/packages/qemu-user-static/ qemu-user-static] (or <br />
[https://aur.archlinux.org/packages/qemu-user-static-bin/ qemu-user-static-bin]) for running QEMU user-mode emulation inside a chroot; [https://aur.archlinux.org/packages/binfmt-qemu-static/ binfmt-qemu-static] (or [https://aur.archlinux.org/packages/binfmt-qemu-static-all-arch/ binfmt-qemu-static-all-arch]) to register these with binfmt_misc<br />
<br />
[[QEMU#Installation|Arch QEMU pkgs and variants from repository and AUR]]<br />
<br />
[https://risc-v-getting-started-guide.readthedocs.io/en/latest/linux-qemu.html RISC-V Linux on QEMU Getting Started Guide]<br />
<br />
[https://wiki.qemu.org/Documentation/Platforms/RISCV QEMU RISC-V Documentation]<br />
<br />
(See also: [https://www.sifive.com/blog/risc-v-qemu-part-2-the-risc-v-qemu-port-is-upstream SiFive RISC-V QEMU Upstream Announcement])<br />
<br />
[https://www.cnx-software.com/2018/03/16/how-to-run-linux-on-risc-v-with-qemu-emulator/ How To Run Linux on RISC-V with QEMU Emulator]<br />
<br />
=== <span id="ArchTools"></span>Tools available under Arch ===<br />
<br />
==== Tools from repositories ====<br />
<br />
[https://www.archlinux.org/groups/x86_64/risc-v/ RISC-V group]<br />
<br />
[https://www.archlinux.org/packages/?sort=&q=riscv&maintainer=&flagged= Results of "riscv" keyword search]<br />
<br />
==== Tools from AUR ====<br />
<br />
[https://aur.archlinux.org/packages/?K=RISCV&SB=p Results of AUR "RISCV" keyword search]<br />
<br />
[https://aur.archlinux.org/packages/?O=0&SeB=nd&K=risc-v&outdated=&SB=n&SO=a&PP=50&do_Search=Go Results of AUR "risc-v" keyword search]<br />
<br />
=== Tools from other distributions ===<br />
<br />
Obsolete: [https://dev.gentoo.org/~dilfridge/stages/ Experimental Gentoo stages]<br />
<br />
[https://www.gentoo.org/downloads/#riscv Gentoo RISC-V stages]<br />
<br />
[https://wiki.debian.org/RISC-V#OS_.2F_filesystem_images Debian images]<br />
<br />
[https://dl.fedoraproject.org/pub/alt/risc-v/repo/virt-builder-images/images/ Fedora images]<br />
<br />
[http://fedora.riscv.rocks/koji/tasks?order=-completion_time&state=closed&view=flat&method=createAppliance Additional Fedora images]<br />
<br />
[https://en.altlinux.org/Regular/riscv64 AltLinux RISC-V64 port]<br />
<br />
=== Non-Arch tools ===<br />
<br />
[http://crosstool-ng.github.io/ Crosstool-ng]<br />
<br />
[https://buildroot.org/ Buildroot]<br />
<br />
[https://trac.clfs.org/ Cross-Linux From Scratch] (this project seems to be dormant...)<br />
<br />
== As-is experimental RISC-V pkgs from work done by FelixOnMars ==<br />
<br />
[https://github.com/felixonmars/archriscv-packages Arch RISC-V Packages]<br />
<br />
[https://archriscv.felixc.at/repo/ Arch RISC-V Repo]<br />
<br />
== References -- '''PLEASE''' add to and curate this list! ==<br />
<br />
[https://readthedocs.org/projects/risc-v-getting-started-guide/downloads/pdf/latest/ RISC-V Getting Started Guide]<br />
<br />
https://github.com/archlinux-riscv/archlinux-cross-bootstrap<br />
<br />
https://bbs.archlinux.org/viewtopic.php?id=237370<br />
<br />
https://bbs.archlinux.org/viewtopic.php?id=260639<br />
<br />
https://five-embeddev.com/toolchain/2019/06/26/gcc-targets/<br />
<br />
https://wiki.qemu.org/Documentation/Platforms/RISCV<br />
<br />
https://wiki.gentoo.org/wiki/Project:RISC-V<br />
<br />
https://wiki.gentoo.org/wiki/Raspberry_Pi (for embedded build and cross-compiling info)<br />
<br />
https://wiki.gentoo.org/wiki/Raspberry_Pi_3_64_bit_Install<br />
<br />
https://wiki.archlinux.org/index.php/Cross-compiling_tools_package_guidelines<br />
<br />
https://github.com/crosstool-ng/crosstool-ng<br />
<br />
https://git.busybox.net/buildroot<br />
<br />
https://github.com/cross-lfs<br />
<br />
https://github.com/michaeljclark/busybear-linux<br />
<br />
https://github.com/janvrany/riscv-debian<br />
<br />
(http://dl.sipeed.com/shareURL/MAIX/K210_Linux/Firmware)<br />
<br />
== Step-by-step "HOWTO" for porting ==<br />
<br />
== Detailed instructions and explanations ==<br />
<br />
== Issues addressed, with work-arounds ==<br />
<br />
== Bug report list -- Please make notes during triage, and move to above "Issues addressed" list when "solved" ==</div>Cmsiglerhttps://wiki.archlinux.org/index.php?title=User:Cmsigler/RISC-V&diff=651946User:Cmsigler/RISC-V2021-02-11T15:59:29Z<p>Cmsigler: Update, add reference links</p>
<hr />
<div>'''Working Document: Planning to Port Arch to RISC-V -- please edit :)'''<br />
<br />
Please edit this to your heart's content, without destroying prior work of value, or of historical or reference significance. I would like to construct a plan for porting Arch to the newly emerging RISC-V hardware.<br />
<br />
I am very much a novice to this. I've had success running ARM SBCs via [https://archlinuxarm.org/ ArchLinuxArm], but I've never ported anything :\<br />
<br />
== A simple RISC-V logo ==<br />
<br />
I quickly created a personal, non-professional, simple text logo for RISC-V. I don't have sysop upload privilege, so here's a link to view:<br />
<br />
https://drive.google.com/file/d/1WKZLl0G_BriPsmRQaMz6yGtIcjJ7pqmd/view?usp=sharing<br />
<br />
I license this logo under CC BY-SA 2.0. If it is of any interest, feel free to use it with attribution.<br />
<br />
== Target hardware ==<br />
<br />
# '''<u>α</u>''' -- [https://www.cnx-software.com/2020/11/09/xuantie-c906-based-allwinner-risc-v-processor-to-power-12-linux-sbcs/ Unnamed Allwinner single-core XuanTie C906 64-bit RISC-V (RV64GCV) processor] @ up to 1 GHz; 22nm manufacturing process. See also [http://linuxgizmos.com/risc-v-based-allwinner-chip-to-debut-on-13-linux-hacker-board/ this LinuxGizmos article].<br />
#* This first one is nice because it's cheap (US$12.50 IIUC), albeit underpowered compared to ARM SoC boards available at this time (November, 2020).<br />
# '''<u>β</u>''' -- RISC-V International Open Source (RIOS) Laboratory is collaborating with Imagination Technologies to bring the [https://www.cnx-software.com/2020/09/04/picorio-linux-risc-v-sbc-is-an-open-source-alternative-to-raspberry-pi-board/ PicoRio RISC-V SBC] to market at a price point similar to Raspberry Pi. See also [https://riscv.org/blog/2020/11/picorio-the-raspberry-pi-like-small-board-computer-for-risc-v/ this blog article from RISC-V International].<br />
#* This second one is also projected to cost close to the RPi.<br />
# '''<u>γ</u>''' -- BeagleBoard.org and Seeed unveiled an open-spec, $119-and-up [https://beagleboard.org/beaglev “BeagleV” SBC] with a StarFive JH7100 SoC with dual SiFive U74 RISC-V cores, 1-TOPS NPU, DSP, and VPU. The SBC ditches the Cape expansion for a Pi-like 40-pin GPIO. See also [http://linuxgizmos.com/beaglev-sbc-runs-linux-on-ai-enabled-risc-v-soc/ this LinuxGizmos article], and [https://www.extremetech.com/computing/319187-new-beagle-board-offers-dual-core-risc-v-targets-ai-applications this ExtremeTech article].<br />
#* Initial prices are given as US$119 and US$149, which is much more affordable than the original system mobo.<br />
<br />
(Any others? I don't plan on spending US$1,000 on a development system/motherboard.)<br />
<br />
== Experimental development: Procedures and steps used ==<br />
<br />
* Initial setup, testing for further work<br />
*# Git repository setup -- Push commits to gitlab repos, mirror gitlab to forked github repos<br />
*#* Clone RISC-V repositories on github to local repos<br />
*#* Fork RISC-V repositories on github<br />
*#* Create gitlab repositories<br />
*#* Rename github origins for cloned repos, then add gitlab repos as origins<br />
*#* Push local repos to gitlab<br />
*#* Configure gitlab repos to mirror to github forks using a personal access token generated on github<br />
*# Test running x86_64/amd64 installations via chroot (or systemd-nspawn)<br />
*#* Gentoo:<br />
*#** Create a subdirectory for the chroot installation; untar Gentoo amd64 stage3 tarball into subdirectory {{bc|$ cd ./subdir/ && sudo tar xpvf stage3-*.tar.xz --xattrs-include<nowiki>=</nowiki>'*.*' --numeric-owner && cd ..}}<br />
*#** Configure and emerge:<br />
*#**# Configure Gentoo compilation options in {{ic|./subdir/etc/portage/make.conf}} to set GENTOO_MIRRORS and edit other env vars; copy DNS info into {{ic|./subdir/etc/resolv.conf}}<br />
*#**# Enter chroot environment {{bc|$ sudo arch-chroot ./subdir/}}<br />
*#**# {{bc|# source /etc/profile && source $HOME/.bashrc}}<br />
*#**# Optionally, update default prompt {{bc|# export PS1<nowiki>=</nowiki>"(chroot) $<nowiki>{</nowiki>PS1<nowiki>}</nowiki>"}}<br />
*#**# {{bc|# emerge --sync}}<br />
*#**# Double-check system profile (e.g., amd64 systemd, rv64_lp64d systemd) is correct {{bc|# eselect profile list}}<br />
*#**# {{bc|# emerge --ask --verbose --update --deep --newuse @world}}<br />
*#**# Optionally, configure USE variable {{bc|# emerge --info <nowiki>|</nowiki> grep ^USE}} {{bc|# nano -w /etc/portage/make.conf}}<br />
*#**# Configure timezone {{bc|# echo "Region/Location" > /etc/timezone}} {{bc|# emerge --config sys-libs/timezone-data}}<br />
*#**# Configure locales {{bc|# nano -w /etc/locale.gen}} Add desired locales, e.g., 'en_US.UTF-8 UTF-8', 'en_US ISO-8859-1'; remove unnecessary entries {{bc|# locale-gen}}<br />
*#**# Select locale {{bc|# nano -w /etc/env.d/02locale}} Add, e.g., 'LANG<nowiki>=</nowiki>"en_US.UTF-8"' and 'LC_COLLATE<nowiki>=</nowiki>"C"' {{bc|# env-update && source /etc/profile}}<br />
*#**# Optionally, set {{ic|/etc/conf.d/hostname}}; add 'hostname<nowiki>=</nowiki>"system_name"'<br />
*#**# Exit chroot environment {{bc|# exit}}<br />
*#** Chroot (or systemd-nspawn) into subdirectory and test normal administrative operations, e.g., {{bc|# emerge --sync}} {{bc|# emerge --ask --verbose --update --deep --newuse @world}}<br />
*#* Arch:<br />
*#** Create a subdirectory for the chroot installation; install base, base-devel and a basic editor {{bc|$ sudo pacstrap -c ./subdir/ base base-devel nano}}<br />
*#** Configure:<br />
*#**# Copy DNS info into {{ic|./subdir/etc/resolv.conf}}<br />
*#**# Create bind mountpoint for subdir {{bc|$ sudo mount --bind ./subdir/ ./subdir/}}<br />
*#**# Enter chroot environment {{bc|$ sudo arch-chroot ./subdir/}}<br />
*#**# {{bc|# source /etc/profile}}<br />
*#**# Optionally, update default prompt {{bc|# export PS1<nowiki>=</nowiki>"(chroot) $<nowiki>{</nowiki>PS1<nowiki>}</nowiki>"}}<br />
*#**# Configure timezone {{bc|# ln -sf /usr/share/zoneinfo/Region/Location /etc/localtime}}<br />
*#**# Configure locales {{bc|# nano -w /etc/locale.gen}} Uncomment desired locales, e.g., 'en_US.UTF-8 UTF-8', 'en_US ISO-8859-1' {{bc|# locale-gen}} {{bc|# nano -w /etc/locale.conf}} Add, e.g., 'LANG<nowiki>=</nowiki>"en_US.UTF-8"' and 'LC_COLLATE<nowiki>=</nowiki>"C"'<br />
*#**# Optionally, update system {{bc|# pacman -Syu}}<br />
*#**# Optionally, set {{ic|/etc/hostname}}; add just the bare hostname, e.g., 'system_name'<br />
*#**# Exit chroot environment {{bc|# exit}}<br />
*#** Chroot (or systemd-nspawn) into subdirectory and test normal administrative operations, e.g., {{bc|# pacman -Syu}}<br />
*# Test running rv64_lp64d/riscv64 installations via chroot (or systemd-nspawn) using same installation and configuration procedures<br />
<br />
* RISC-V binary tests and demos<br />
*# Test cross-compiling, linking RISC-V (riscv64-lp64d) C programs<br />
*## Test building, executing on host system<br />
*##* Install [https://www.archlinux.org/packages/community/x86_64/riscv64-linux-gnu-gcc/ riscv64-linux-gnu-gcc], [https://www.archlinux.org/packages/extra/x86_64/qemu-arch-extra/ qemu-arch-extra], [https://aur.archlinux.org/packages/qemu-user-static-bin/ qemu-user-static-bin], [https://aur.archlinux.org/packages/binfmt-qemu-static-all-arch/ binfmt-qemu-static-all-arch]<br />
*##* Write hello_world.c, other test programs<br />
*##* Compile: {{bc|$ riscv64-linux-gnu-gcc -c -Wall -o hello_world-riscv64.o hello_world.c}}<br />
*##* Link: {{bc|$ riscv64-linux-gnu-gcc -o hello_world-riscv64 hello_world-riscv64.o}}<br />
*##* Run: {{bc|$ qemu-riscv64 -L /usr/riscv64-linux-gnu/ ./hello_world-riscv64}}<br />
*## Test building, executing under RISC-V system chroot<br />
*##* Install, configure Gentoo (later, Arch) RISC-V system under chroot subdirectory<br />
*##* Copy source files and cross-compiled RISC-V binaries from host system into chroot<br />
*##* Enter RISC-V chroot<br />
*##* Compile, link source files inside chroot using native gcc<br />
*##* Test running both host cross-compiled and chroot native-compiled binaries inside chroot<br />
*## Any additional tests, experiments<br />
*# [https://archriscv.felixc.at/repo/ Repository of pre-built RISC-V pkgs from FelixOnMars]<br />
*#* Configure binfmt-qemu-static to run RISC-V binaries as native programs<br />
*#* Following the Arch installation guide, use pacstrap to install base pkgs into risc-v subtree<br />
*#* Use arch-chroot to enter the subtree as a RISC-V container<br />
*#* Test running RISC-V binaries from coreutils<br />
*#* Test running other binaries, and installing other RISC-V pkgs from this repository<br />
*#* Test using systemd-nspawn to run subtree as a RISC-V container<br />
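After cross-compiling, it helps to sanity-check that a binary really targets riscv64 before puzzling over qemu errors. The helper below (a hypothetical name, not from any package) reads {{ic|e_machine}}, the little-endian u16 at byte offset 18 of the ELF header; EM_RISCV is 243 (0xf3). It assumes a little-endian host for the od decoding:<br />

```shell
# Return success iff a file's ELF header has e_machine == 243 (EM_RISCV).
# e_machine is a little-endian u16 at byte offset 18; od -tu2 decodes it
# correctly on little-endian hosts such as x86_64.
is_riscv64_elf() {
    m=$(od -An -tu2 -j18 -N2 "$1" | tr -d ' ')
    [ "$m" = "243" ]
}

# Demo on a hand-crafted 20-byte riscv64 ELF header (not a runnable binary):
printf '\x7fELF\x02\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\xf3\x00' > demo-elf.bin
is_riscv64_elf demo-elf.bin && echo "demo-elf.bin: riscv64"
```

Real binaries can be checked the same way, e.g. {{ic|is_riscv64_elf ./hello_world-riscv64}}.<br />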
<br />
* Cross-bootstrapping to build basic RISC-V system<br />
*# [https://github.com/felixonmars/archriscv-packages RISC-V PKGBUILD repository pkgs from FelixOnMars]<br />
*#* Experiment with cross-compiling pkgs from RISC-V PKGBUILDs<br />
*#* Test building cross-compiled pkgs from all PKGBUILDs<br />
*#* Fix PKGBUILDs for which building fails<br />
*#* Add and commit patches for RISC-V PKGBUILDs to local repo<br />
*#* Push to origin (personal gitlab repo)<br />
*#* Send merge request upstream<br />
*# [https://github.com/archlinux-riscv/archlinux-cross-bootstrap archlinux-cross-bootstrap]<br />
*#* Use a build tree separate from the git source tree (is this the default?)<br />
*#* Initially, test building all 4 stages '''without''' creating the build/.KEEP_GOING file (which ignores build errors); read .MAKEPKGLOG in each build directory, and tee the output of the overall build process into a log file<br />
*#* Identify pkgs with build errors; debug and patch<br />
*#* Individually test rebuilding pkgs which failed with errors; correct patches<br />
*#* Add and commit patches to local git repository<br />
*#* Push to origin (personal gitlab repo)<br />
*#* Send merge request upstream<br />
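The "push to origin (personal gitlab repo), send merge request upstream" flow above implies a two-remote layout. A sketch on a scratch repository (the GitLab URL and user name are placeholders):<br />

```shell
# Keep the upstream GitHub repo available under the name 'github' and make a
# personal GitLab fork the default push target ('origin').
repo=$(mktemp -d)
cd "$repo"
git init -q .    # stands in for: git clone <upstream-url>
git remote add origin https://github.com/felixonmars/archriscv-packages.git
git remote rename origin github    # upstream keeps a distinct name
git remote add origin https://gitlab.com/YOURUSER/archriscv-packages.git
git remote -v    # commits now go to GitLab via plain 'git push origin'
```

With this layout, {{ic|git fetch github}} tracks upstream while {{ic|git push}} defaults to the personal repo that feeds the merge request.<br />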
<br />
== Basic thoughts and ideas for porting ==<br />
<br />
'''<u>Notes 2021/01/31</u>:'''<br />
<br />
Finally making some more progress:<br />
<br />
* Tested creating a Gentoo amd64/systemd chroot by unpacking a stage3 tarball into a target subdir, then pre-configuring, chrooting (via arch-chroot) into the target, and updating and configuring as needed the chroot system. This seems to work fine. Also installed a minimal X windows program, qemacs, inside the chroot and ran it after [[Chroot#Run_graphical_applications_from_chroot|allowing X windows programs access to the parent X server]].<br />
* Installed Arch base and base-devel into a target subdir via {{ic|pacstrap -c}}, then chrooted (via arch-chroot) into the target and configured the installation for timezone, locale and hostname.<br />
* Wrote, tested and debugged a few simple C programs on x86_64 in order to test compilation for the RISC-V riscv64-lp64d target. Then, after installing [https://www.archlinux.org/packages/community/x86_64/riscv64-linux-gnu-gcc/ riscv64-linux-gnu-gcc] to cross-compile, compiled and linked these to produce riscv64-lp64d binaries. After installing [https://www.archlinux.org/packages/extra/x86_64/qemu-arch-extra/ qemu-arch-extra], tested running them with {{ic|qemu-riscv64 -L /usr/riscv64-linux-gnu/}}.<br />
<br />
----<br />
<br />
'''<u>Additional info 2020/11/14</u>:'''<br />
<br />
Thanks to [https://bbs.archlinux.org/profile.php?id=84187 eschwartz], here are links to resources produced by [https://bbs.archlinux.org/profile.php?id=47848 FelixOnMars], who has tackled some RISC-V porting in his spare time:<br />
<br />
* [https://github.com/felixonmars/archriscv-packages Arch RISC-V Packages]<br />
* [https://archriscv.felixc.at/repo/ Arch RISC-V Repo]<br />
<br />
He has already done a lot of the work -- good stuff :) and one shouldn't spend most of one's effort reinventing the wheel ;)<br />
<br />
----<br />
<br />
'''<u>Revised 2020/11/13</u>:'''<br />
<br />
In essence, what I'm looking to do is:<br />
<br />
# Build RISC-V pkgs for Arch.<br />
# Build bare-metal RISC-V Arch disk images that can potentially be booted on physical hardware. Of course, device tree/drivers will have to be individually customized for each mainboard/SBC. Getting a disk image that works in KVM/QEMU would scratch my itch of getting most of the work done. Someone with more bare-metal hardware/driver experience may have to put the finishing touches on it.<br />
<br />
Hoping my understanding and itemized steps are more-or-less correct.... Methods to make porting progress:<br />
<br />
# So, if we want to build Arch RISC-V pkgs, we can simply cross-compile them. When a pkg won't build, we patch it until it does (assuming we can eventually get it working).<br />
#* Note that Vadim Kaushan (Disasm on github) has patched upstream [https://github.com/oaken-source/parabola-cross-bootstrap parabola-cross-bootstrap] from Andreas Grapentin (oaken-source on github) to create [https://github.com/archlinux-riscv/archlinux-cross-bootstrap archlinux-cross-bootstrap]. The last commit was in January 2019. Some things may have broken since, but it's worth trying (and patching) this to cross-compile base-devel pkgs for RISC-V. Then we can perhaps extend its coverage to more repository pkgs.<br />
# We can unpack a RISC-V stage tarball (or image). Then we can chroot, or use something really fancy like systemd-nspawn, into that tree and use QEMU user-mode emulation to run RISC-V binaries.<br />
#* Question: Will this even work? Answer: Yes! The host needs the extra and AUR pkgs [https://www.archlinux.org/packages/extra/x86_64/qemu-arch-extra/ qemu-arch-extra] and/or [https://aur.archlinux.org/packages/qemu-user-static/ qemu-user-static] (or [https://aur.archlinux.org/packages/qemu-user-static-bin/ qemu-user-static-bin]), and [https://aur.archlinux.org/packages/binfmt-qemu-static/ binfmt-qemu-static] (or [https://aur.archlinux.org/packages/binfmt-qemu-static-all-arch/ binfmt-qemu-static-all-arch]). Copy the required binaries into the chroot tree, and/or [[Binfmt_misc_for_Java#Registering_file_type_with_binfmt_misc|take care of binfmt_misc support]].<br />
#* The AUR pkg [https://aur.archlinux.org/packages/proot/ proot], [[PRoot|PRoot wiki page]], may be useful.<br />
#* Note that to run dynamically linked binaries the linker path must be passed; see [https://gist.github.com/Liryna/10710751 Running ARM Programs under Linux] (also listed below). Quote:<br />
#** If you want a dynamically-linked executable, you've to pass the linker path too:<br>arm-linux-gnueabihf-gcc -ohello hello.c<br>qemu-arm -L /usr/arm-linux-gnueabihf/ ./hello # or qemu-arm-static<br />
#* Some references:<br />
#** [[QEMU#Chrooting_into_arm/arm64_environment_from_x86_64|Chrooting into arm/arm64 environment from x86_64]]<br />
#** [https://ownyourbits.com/2018/06/13/transparently-running-binaries-from-any-architecture-in-linux-with-qemu-and-binfmt_misc/ Transparently running binaries from any architecture in Linux with QEMU and binfmt_misc]<br />
#** [https://unix.stackexchange.com/questions/41889/how-can-i-chroot-into-a-filesystem-with-a-different-architechture Stackexchange: Chroot into a filesystem with a different architecture]<br />
#** [https://dev.to/asacasa/how-to-set-up-binfmtmisc-for-qemu-the-hard-way-3bl4 How to set up binfmt_misc for qemu the hard way]<br />
#** [https://wiki.gentoo.org/wiki/Embedded_Handbook/General/Compiling_with_qemu_user_chroot Gentoo Embedded Handbook/Compiling with qemu user chroot]<br />
#** [https://wiki.debian.org/QemuUserEmulation Debian QEMU User Emulation]<br />
#** [https://gist.github.com/Liryna/10710751 Running ARM Programs under Linux]<br />
#** [[Creating_packages_for_other_distributions#Creating_Arch_packages_in_OBS_with_OSC|Creating Arch pkgs in openSUSE Open Build Service]]; hmmm, pkgs could be built automatically and remotely...<br />
#* I'm not an lxc/Docker/Kubernetes guy, but IIUC this would be roughly similar to running a RISC-V container (no?). This may come in handy to make sure binary packages work as expected, I suppose. But it doesn't seem like a better way to build RISC-V pkgs than cross-compiling, although we could build dev infrastructure then pkgs inside the container.<br />
# We can create a VM disk image (GPT, of course) and unpack inside it, e.g., [https://dev.gentoo.org/~dilfridge/stages/ a RISC-V Gentoo stage3] and complete installation and configuration. We also need to compile a RISC-V kernel and bbl (Berkeley Boot Loader from [https://github.com/riscv/riscv-pk the riscv-pk github project]) (some instructions [https://www.cnx-software.com/2018/03/16/how-to-run-linux-on-risc-v-with-qemu-emulator/ here] and [https://risc-v-getting-started-guide.readthedocs.io/en/latest/linux-qemu.html here]), or we may be able to download pre-built kernel/binaries/utilities. Then we can boot the image with KVM/QEMU.<br />
#* Also available are [https://wiki.debian.org/RISC-V Debian RISC-V], [https://fedoraproject.org/wiki/Architectures/RISC-V Fedora RISC-V] and [https://en.opensuse.org/openSUSE:RISC-V openSUSE RISC-V] images. In those wiki pages are some instructions for putting together bare-metal disk images.<br />
#* Also, there are ready-made QEMU images that one can download and run. See [https://wiki.debian.org/RISC-V#OS_.2F_filesystem_images Debian], [https://dl.fedoraproject.org/pub/alt/risc-v/repo/virt-builder-images/images/ Fedora], [http://fedora.riscv.rocks/koji/tasks?order=-completion_time&state=closed&view=flat&method=createAppliance additional Fedora], [https://en.altlinux.org/Regular/riscv64 AltLinux]. I guess this ready-made solution would be the ultimate in ease of use, but it doesn't solve the problem of building from scratch. Still, at least inside a running VM I could download and build dev infrastructure (pacman, et al.) and then build pkgs as if I were running on bare metal. It might be only a little slower than running on an SBC?...<br />
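For reference, the rule that binfmt-qemu-static(-all-arch) registers for riscv64 looks roughly like the sketch below; the magic/mask are taken from the commonly published qemu binfmt entry, and the interpreter path assumes qemu-user-static. The script only prints the rule -- registering it requires root:<br />

```shell
# Build (and print) a binfmt_misc registration rule for riscv64 ELF binaries.
# Magic/mask follow the commonly published qemu entry: 64-bit little-endian ELF
# (\x7fELF, class 2, data 1) with e_machine 0xf3 (EM_RISCV); the mask zeroes
# EI_OSABI (byte 7) and the low bit of e_type so ET_EXEC and ET_DYN both match.
magic='\x7fELF\x02\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\xf3\x00'
mask='\xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff'
interp=/usr/bin/qemu-riscv64-static   # path assumes the qemu-user-static pkg
rule=":riscv64:M::${magic}:${mask}:${interp}:F"   # F = open interpreter at
                                                  # register time (chroot-safe)
printf '%s\n' "$rule"
# To register (as root): printf '%s' "$rule" > /proc/sys/fs/binfmt_misc/register
```

The kernel's binfmt_misc parser accepts the {{ic|\xHH}} escapes literally, so the rule can be written to the register file exactly as printed.<br />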
<br />
So, in the end, the best solutions seem to be:<br />
* Cross-compiling using existing scripting/tooling to build Arch pkgs. [https://github.com/archlinux-riscv/archlinux-cross-bootstrap archlinux-cross-bootstrap] should build base-devel pkgs. Using its scripts with modified PKGBUILDs, it should be possible to build other repository pkgs.<br />
* To test cross-compiled pkgs, create a chroot to use via QEMU user-mode emulation (after setting up [https://aur.archlinux.org/packages/qemu-user-static/ qemu-user-static] and [[Binfmt_misc_for_Java#Registering_the_file_type_with_binfmt_misc|binfmt_misc]]; note [https://aur.archlinux.org/packages/proot/ proot]). Follow the usual installation instructions using pacstrap, etc., to install RISC-V pkgs inside the chroot, then complete installation and configuration.<br />
** Note that this creates a running RISC-V Arch system.... Tinker with it to your heart's content. Then you could make an archive (tar) snapshot of it and use this as the basis of a bare-metal VM image :)<br />
** However, since these cross-compiled pkgs will contain dynamically linked binaries, I'm not sure at the moment how binfmt_misc would pass the "-L" flag to reference the RISC-V linker path. Then again, once inside the chroot, will user-mode QEMU just resolve the linker and .so libraries from inside the chroot?....<br />
* For final image building, create a VM disk image, follow the usual installation instructions to install an Arch RISC-V system inside the image, configure it just like a bare-metal installation, and see if it works :)<br />
* When running the VM disk image like bare metal, the bootloader inside the image will need to load the kernel. Otherwise, the bootloader and kernel used to boot and run the VM will have to live outside the disk image. We want to create a turnkey disk image that includes bootloading....<br />
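The pacstrap step above can be sketched as follows. The {{ic|Server}} line and the disabled signature checking are assumptions about the layout and signing of the https://archriscv.felixc.at/repo/ repository -- verify both before use. Only the config generation runs here; the root-only commands are left as comments:<br />

```shell
# Generate a pacman.conf for installing pre-built riscv64 packages into a
# subtree.  ASSUMPTION: the repo follows the usual $repo layout and is
# unsigned -- check the repository before relying on either.
conf=$(mktemp)
cat > "$conf" <<'EOF'
[options]
Architecture = riscv64

[core]
SigLevel = Never
Server = https://archriscv.felixc.at/repo/$repo
EOF
echo "pacman config written to $conf"
# Then, as root (with the qemu binfmt handlers registered):
#   pacstrap -C "$conf" -c ./riscv-root/ base
#   arch-chroot ./riscv-root/
```

The quoted heredoc keeps {{ic|$repo}} literal so pacman can expand it per repository section.<br />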
<br />
----<br />
<br />
'''<u>Revised 2020/11/11</u>:'''<br />
<br />
Thanks to [https://bbs.archlinux.org/profile.php?id=36741 Awebb] for helping to distill disorganized ideas.<br />
<br />
Perhaps the best way to make progress on porting to RISC-V (mainboard and embedded/SBC) is simply to build Arch packages. The bootstrapping and embedded intricacies can be dealt with later. Plus, building packages can be done bit by bit when time is available. So....<br />
<br />
There are two ways to build packages, I believe:<br />
<br />
# Cross-compile using available [[#ArchTools|Arch RISC-V tooling]] (see below)<br />
# Install an existing RISC-V system in a KVM/QEMU VM<br />
<br />
Frankly, the VM method seems the easiest (and laziest) way. (Update: Probably not as easy/lazy as I was thinking.) There are [https://dev.gentoo.org/~dilfridge/stages/ Gentoo stage3 tarballs] so, assuming RISC-V support in KVM/QEMU is generally bug-free, setting up a RISC-V Gentoo VM should be a mechanical process. After this, Arch tools would be compiled inside the VM, and then base, base-devel and then core pkgs would be built. (For related information, see [https://wiki.gentoo.org/wiki/Raspberry_Pi the Gentoo 32-bit RaspberryPi page] and [https://wiki.gentoo.org/wiki/Raspberry_Pi_3_64_bit_Install the Gentoo 64-bit RaspberryPi page].)<br />
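Booting such a VM eventually needs a qemu-system-riscv64 invocation. A hedged sketch for the generic 'virt' machine (the kernel and disk paths are placeholders; see the QEMU RISC-V documentation linked in the tools section for current options). The script only assembles and prints the command:<br />

```shell
# Assemble a qemu-system-riscv64 command line for the generic 'virt' machine.
# KERNEL and DISK are placeholders for a built RISC-V kernel and disk image.
KERNEL=./Image          # or an OpenSBI/bbl fw_payload
DISK=./riscv-disk.img
set -- qemu-system-riscv64 -machine virt -m 2G -smp 2 -nographic \
    -kernel "$KERNEL" \
    -append "root=/dev/vda rw console=ttyS0" \
    -drive "file=$DISK,format=raw,id=hd0" \
    -device virtio-blk-device,drive=hd0 \
    -netdev user,id=net0 -device virtio-net-device,netdev=net0
cmd="$*"
echo "$cmd"             # print only; run "$@" instead to actually boot
```

The riscv64 'virt' machine exposes MMIO virtio devices, hence {{ic|virtio-blk-device}}/{{ic|virtio-net-device}} rather than the PCI variants used on x86.<br />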
<br />
[[#RISC-V-QEMU|See links below]] for documentation and instructions.<br />
<br />
----<br />
<br />
'''<u>Original text</u>:'''<br />
<br />
According to the initial press release on the above '''<u>α</u>''' hardware target/SoC board, this "Allwinner RISC-V processor will run the Debian Linux operating system (Tina OS)." Provided that "Tina OS" licensing is "clean," one should be able to boot the supplied Debian OS and use that to develop an Arch bootstrap, or perhaps an entire port?<br />
<br />
IIUC, that should include the bootstrapping and device tree stuff along with device drivers for on-board equipment. (Drivers for optional devices and for devices on available modules/daughter cards will need to be developed, too.)<br />
<br />
After that, base and base-devel packages need to be built. Then building packages can be done on bare metal. However, it would obviously be faster and easier to build them using the available RISC-V toolchain in Arch. Right? (I wonder how [https://archlinuxarm.org/ ArchLinuxArm] handles building packages for and maintaining their repositories, as well as their AUR? Note to self: Research ALARM infrastructure....)<br />
<br />
Topics to be researched and written up or references pointed to:<br />
<br />
* HOWTO bootstrap a new port from ground zero -- See references below<br />
* How the bootstrapping process is booted/initiated<br />
* How device trees work<br />
* How device drivers work in bootstrapping, e.g., via initrd image<br />
* Cross-building Arch packages for RISC-V<br />
* Infra for a new port's repositories (CB/CI tools, repository hosting, front end w/hosting, etc.)<br />
* Infra for a new port's AUR<br />
* Possibility of free/open (non-binary blob) bootstrapping image/BIOS (similar to [https://www.coreboot.org/ coreboot])<br />
<br />
== Necessary tools for porting ==<br />
<br />
=== <span id="RISC-V-QEMU"></span>KVM/QEMU RISC-V emulation documentation, information and instructions ===<br />
<br />
[https://www.archlinux.org/packages/extra/x86_64/qemu-arch-extra/ qemu-arch-extra pkg] required for RISC-V (and other architecture) full-system emulation<br />
<br />
[https://aur.archlinux.org/packages/qemu-user-static/ qemu-user-static] (or [https://aur.archlinux.org/packages/qemu-user-static-bin/ qemu-user-static-bin]) for running QEMU user-mode emulation inside a chroot; [https://aur.archlinux.org/packages/binfmt-qemu-static/ binfmt-qemu-static] (or [https://aur.archlinux.org/packages/binfmt-qemu-static-all-arch/ binfmt-qemu-static-all-arch]) to register the matching binfmt_misc handlers<br />
<br />
[[QEMU#Installation|Arch QEMU pkgs and variants from repository and AUR]]<br />
<br />
[https://risc-v-getting-started-guide.readthedocs.io/en/latest/linux-qemu.html RISC-V Linux on QEMU Getting Started Guide]<br />
<br />
[https://wiki.qemu.org/Documentation/Platforms/RISCV QEMU RISC-V Documentation]<br />
<br />
(See also: [https://www.sifive.com/blog/risc-v-qemu-part-2-the-risc-v-qemu-port-is-upstream SiFive RISC-V QEMU Upstream Announcement])<br />
<br />
[https://www.cnx-software.com/2018/03/16/how-to-run-linux-on-risc-v-with-qemu-emulator/ How To Run Linux on RISC-V with QEMU Emulator]<br />
<br />
=== <span id="ArchTools"></span>Tools available under Arch ===<br />
<br />
==== Tools from repositories ====<br />
<br />
[https://www.archlinux.org/groups/x86_64/risc-v/ RISC-V group]<br />
<br />
[https://www.archlinux.org/packages/?sort=&q=riscv&maintainer=&flagged= Results of "riscv" keyword search]<br />
<br />
==== Tools from AUR ====<br />
<br />
[https://aur.archlinux.org/packages/?K=RISCV&SB=p Results of AUR "RISCV" keyword search]<br />
<br />
[https://aur.archlinux.org/packages/?O=0&SeB=nd&K=risc-v&outdated=&SB=n&SO=a&PP=50&do_Search=Go Results of AUR "risc-v" keyword search]<br />
<br />
=== Tools from other dists ===<br />
<br />
Obsolete: [https://dev.gentoo.org/~dilfridge/stages/ Experimental Gentoo stages]<br />
<br />
[https://www.gentoo.org/downloads/#riscv Gentoo RISC-V stages]<br />
<br />
[https://wiki.debian.org/RISC-V#OS_.2F_filesystem_images Debian images]<br />
<br />
[https://dl.fedoraproject.org/pub/alt/risc-v/repo/virt-builder-images/images/ Fedora images]<br />
<br />
[http://fedora.riscv.rocks/koji/tasks?order=-completion_time&state=closed&view=flat&method=createAppliance Additional Fedora images]<br />
<br />
[https://en.altlinux.org/Regular/riscv64 AltLinux RISC-V64 port]<br />
<br />
=== Non-Arch tools ===<br />
<br />
[http://crosstool-ng.github.io/ Crosstool-ng]<br />
<br />
[https://buildroot.org/ Buildroot]<br />
<br />
[https://trac.clfs.org/ Cross-Linux From Scratch] (this project seems to be dormant...)<br />
<br />
== As-is experimental RISC-V pkgs from work done by FelixOnMars ==<br />
<br />
[https://github.com/felixonmars/archriscv-packages Arch RISC-V Packages]<br />
<br />
[https://archriscv.felixc.at/repo/ Arch RISC-V Repo]<br />
<br />
== References -- '''PLEASE''' add to and curate this list! ==<br />
<br />
[https://readthedocs.org/projects/risc-v-getting-started-guide/downloads/pdf/latest/ RISC-V Getting Started Guide]<br />
<br />
https://github.com/archlinux-riscv/archlinux-cross-bootstrap<br />
<br />
https://bbs.archlinux.org/viewtopic.php?id=237370<br />
<br />
https://bbs.archlinux.org/viewtopic.php?id=260639<br />
<br />
https://five-embeddev.com/toolchain/2019/06/26/gcc-targets/<br />
<br />
https://wiki.qemu.org/Documentation/Platforms/RISCV<br />
<br />
https://wiki.gentoo.org/wiki/Project:RISC-V<br />
<br />
https://wiki.gentoo.org/wiki/Raspberry_Pi (for embedded build and cross-compiling info)<br />
<br />
https://wiki.gentoo.org/wiki/Raspberry_Pi_3_64_bit_Install<br />
<br />
https://wiki.archlinux.org/index.php/Cross-compiling_tools_package_guidelines<br />
<br />
https://github.com/crosstool-ng/crosstool-ng<br />
<br />
https://git.busybox.net/buildroot<br />
<br />
https://github.com/cross-lfs<br />
<br />
https://github.com/michaeljclark/busybear-linux<br />
<br />
https://github.com/janvrany/riscv-debian<br />
<br />
(http://dl.sipeed.com/shareURL/MAIX/K210_Linux/Firmware)<br />
<br />
== Step-by-step "HOWTO" for porting ==<br />
<br />
== Detailed instructions and explanations ==<br />
<br />
== Issues addressed, with work-arounds ==<br />
<br />
== Bug report list -- Please make notes during triage, and move to above "Issues addressed" list when "solved" ==</div>Cmsiglerhttps://wiki.archlinux.org/index.php?title=User:Cmsigler/RISC-V&diff=651918User:Cmsigler/RISC-V2021-02-11T11:06:50Z<p>Cmsigler: Add instructions to test compiling, linking under RISC-V chroot, and test pre-built programs</p>
<hr />
<div>'''Working Document: Planning to Port Arch to RISC-V -- please edit :)'''<br />
<br />
Please edit this to your heart's content, without destroying prior work of value, or of historical or reference significance. I would like to construct a plan for porting Arch to the newly emerging RISC-V hardware.<br />
<br />
I am very much a novice to this. I've had success running ARM SBCs via [https://archlinuxarm.org/ ArchLinuxArm], but I've never ported anything :\<br />
<br />
== A simple RISC-V logo ==<br />
<br />
I quickly created a personal, non-professional, simple text logo for RISC-V. I don't have sysop upload privilege, so here's a link to view:<br />
<br />
https://drive.google.com/file/d/1WKZLl0G_BriPsmRQaMz6yGtIcjJ7pqmd/view?usp=sharing<br />
<br />
I license this logo under CC BY-SA 2.0. If it is of any interest, feel free to use it with attribution.<br />
<br />
== Target hardware ==<br />
<br />
# '''<u>α</u>''' -- [https://www.cnx-software.com/2020/11/09/xuantie-c906-based-allwinner-risc-v-processor-to-power-12-linux-sbcs/ Unnamed Allwinner single-core XuanTie C906 64-bit RISC-V (RV64GCV) processor] @ up to 1 GHz; 22nm manufacturing process. See also [http://linuxgizmos.com/risc-v-based-allwinner-chip-to-debut-on-13-linux-hacker-board/ this LinuxGizmos article].<br />
#* This first one is nice because it's cheap (US$12.50 IIUC), albeit underpowered compared to ARM SoC boards available at this time (November, 2020).<br />
# '''<u>β</u>''' -- RISC-V International Open Source (RIOS) Laboratory is collaborating with Imagination technologies to bring [https://www.cnx-software.com/2020/09/04/picorio-linux-risc-v-sbc-is-an-open-source-alternative-to-raspberry-pi-board/ PicoRio RISC-V SBC] to market at a price point similar to Raspberry Pi. See also [https://riscv.org/blog/2020/11/picorio-the-raspberry-pi-like-small-board-computer-for-risc-v/ this blog article from RISC-V International].<br />
#* This second one is also projected to cost close to the RPi.<br />
# '''<u>γ</u>''' -- BeagleBoard.org and Seeed unveiled an open-spec, $119-and-up [https://beagleboard.org/beaglev “BeagleV” SBC] with a StarFive JH7100 SoC with dual SiFive U74 RISC-V cores, 1-TOPS NPU, DSP, and VPU. The SBC ditches the Cape expansion for a Pi-like 40-pin GPIO. See also [http://linuxgizmos.com/beaglev-sbc-runs-linux-on-ai-enabled-risc-v-soc/ this LinuxGizmos article], and [https://www.extremetech.com/computing/319187-new-beagle-board-offers-dual-core-risc-v-targets-ai-applications this ExtremeTech article].<br />
#* Initial prices are given as US$119 and US$149, which is much more affordable than the original system mobo.<br />
<br />
(Any others? I don't plan on spending US$1,000 on a development system/motherboard.)<br />
<br />
== Experimental development: Procedures and steps used ==<br />
<br />
* Initial setup, testing for further work<br />
*# Git repository setup -- Push commits to gitlab repos, mirror gitlab to forked github repos<br />
*#* Clone RISC-V repositories on github to local repos<br />
*#* Fork RISC-V repositories on github<br />
*#* Create gitlab repositories<br />
*#* Rename github origins for cloned repos, then add gitlab repos as origins<br />
*#* Push local repos to gitlab<br />
*#* Configure gitlab repos to mirror to github forks using a personal access token generated on github<br />
*# Test running x86_64/amd64 installations via chroot (or systemd-nspawn)<br />
*#* Gentoo:<br />
*#** Create a subdirectory for the chroot installation; untar Gentoo amd64 stage3 tarball into subdirectory {{bc|$ cd ./subdir/ && sudo tar xpvf stage3-*.tar.xz --xattrs-include<nowiki>=</nowiki>'*.*' --numeric-owner && cd ..}}<br />
*#** Configure and emerge:<br />
*#**# Configure Gentoo compilation options in {{ic|./subdir/etc/portage/make.conf}} to set GENTOO_MIRRORS and edit other env vars; copy DNS info into {{ic|./subdir/etc/resolv.conf}}<br />
*#**# Enter chroot environment {{bc|$ sudo arch-chroot ./subdir/}}<br />
*#**# {{bc|# source /etc/profile && source $HOME/.bashrc}}<br />
*#**# Optionally, update default prompt {{bc|# export PS1<nowiki>=</nowiki>"(chroot) $<nowiki>{</nowiki>PS1<nowiki>}</nowiki>"}}<br />
*#**# {{bc|# emerge --sync}}<br />
*#**# Double-check system profile (e.g., amd64 systemd, rv64_lp64d systemd) is correct {{bc|# eselect profile list}}<br />
*#**# {{bc|# emerge --ask --verbose --update --deep --newuse @world}}<br />
*#**# Optionally, configure USE variable {{bc|# emerge --info <nowiki>|</nowiki> grep ^USE}} {{bc|# nano -w /etc/portage/make.conf}}<br />
*#**# Configure timezone {{bc|# echo "Region/Location" > /etc/timezone}} {{bc|# emerge --config sys-libs/timezone-data}}<br />
*#**# Configure locales {{bc|# nano -w /etc/locale.gen}} Add desired locales, e.g., 'en_US.UTF-8 UTF-8', 'en_US ISO-8859-1'; remove unnecessary entries {{bc|# locale-gen}}<br />
*#**# Select locale {{bc|# nano -w /etc/env.d/02locale}} Add, e.g., 'LANG<nowiki>=</nowiki>"en_US.UTF-8"' and 'LC_COLLATE<nowiki>=</nowiki>"C"' {{bc|# env-update && source /etc/profile}}<br />
*#**# Optionally, set {{ic|/etc/conf.d/hostname}}; add 'hostname<nowiki>=</nowiki>"system_name"'<br />
*#**# Exit chroot environment {{bc|# exit}}<br />
*#** Chroot (or systemd-nspawn) into subdirectory and test normal administrative operations, e.g., {{bc|# emerge --sync}} {{bc|# emerge --ask --verbose --update --deep --newuse @world}}<br />
*#* Arch:<br />
*#** Create a subdirectory for the chroot installation; install base, base-devel and a basic editor {{bc|$ sudo pacstrap -c ./subdir/ base base-devel nano}}<br />
*#** Configure:<br />
*#**# Copy DNS info into {{ic|./subdir/etc/resolv.conf}}<br />
*#**# Create bind mountpoint for subdir {{bc|$ sudo mount --bind ./subdir/ ./subdir/}}<br />
*#**# Enter chroot environment {{bc|$ sudo arch-chroot ./subdir/}}<br />
*#**# {{bc|# source /etc/profile}}<br />
*#**# Optionally, update default prompt {{bc|# export PS1<nowiki>=</nowiki>"(chroot) $<nowiki>{</nowiki>PS1<nowiki>}</nowiki>"}}<br />
*#**# Configure timezone {{bc|# ln -sf /usr/share/zoneinfo/Region/Location /etc/localtime}}<br />
*#**# Configure locales {{bc|# nano -w /etc/locale.gen}} Uncomment desired locales, e.g., 'en_US.UTF-8 UTF-8', 'en_US ISO-8859-1' {{bc|# locale-gen}} {{bc|# nano -w /etc/locale.conf}} Add, e.g., 'LANG<nowiki>=</nowiki>"en_US.UTF-8"' and 'LC_COLLATE<nowiki>=</nowiki>"C"'<br />
*#**# Optionally, update system {{bc|# pacman -Syu}}<br />
*#**# Optionally, set {{ic|/etc/hostname}}; add 'hostname<nowiki>=</nowiki>"system_name"'<br />
*#**# Exit chroot environment {{bc|# exit}}<br />
*#** Chroot (or systemd-nspawn) into subdirectory and test normal administrative operations, e.g., {{bc|# pacman -Syu}}<br />
*# Test running rv64_lp64d/riscv64 installations via chroot (or systemd-nspawn) using same installation and configuration procedures<br />
<br />
* RISC-V binary tests and demos<br />
*# Test cross-compiling, linking RISC-V (riscv64-lp64d) C programs<br />
*## Test building, executing on host system<br />
*##* Install [https://www.archlinux.org/packages/community/x86_64/riscv64-linux-gnu-gcc/ riscv64-linux-gnu-gcc], [https://www.archlinux.org/packages/extra/x86_64/qemu-arch-extra/ qemu-arch-extra], [https://aur.archlinux.org/packages/qemu-user-static-bin/ qemu-user-static-bin], [https://aur.archlinux.org/packages/binfmt-qemu-static-all-arch/ binfmt-qemu-static-all-arch]<br />
*##* Write hello_world.c, other test programs<br />
*##* Compile: {{bc|$ riscv64-linux-gnu-gcc -c -Wall -o hello_world-riscv64.o hello_world.c}}<br />
*##* Link: {{bc|$ riscv64-linux-gnu-gcc -o hello_world-riscv64 hello_world-riscv64.o}}<br />
*##* Run: {{bc|$ qemu-riscv64 -L /usr/riscv64-linux-gnu/ ./hello_world-riscv64}}<br />
*## Test building, executing under RISC-V system chroot<br />
*##* Install, configure Gentoo (later, Arch) RISC-V system under chroot subdirectory<br />
*##* Copy source files and cross-compiled RISC-V binaries from host system into chroot<br />
*##* Enter RISC-V chroot<br />
*##* Compile, link source files inside chroot using native gcc<br />
*##* Test running both host cross-compiled and chroot native-compiled binaries inside chroot<br />
*## Any additional tests, experiments<br />
*# [https://archriscv.felixc.at/repo/ Repository of pre-built RISC-V pkgs from FelixOnMars]<br />
*#* Configure binfmt-qemu-static to run RISC-V binaries as native programs<br />
*#* Following the Arch installation guide, use pacstrap to install base pkgs into risc-v subtree<br />
*#* Run arch-chroot to run subtree as a RISC-V container<br />
*#* Test running RISC-V binaries from coreutils<br />
*#* Test running other binaries, and installing other RISC-V pkgs from this repository<br />
*#* Test using systemd-nspawn to run subtree as a RISC-V container<br />
<br />
* Cross-bootstrapping to build basic RISC-V system<br />
*# [https://github.com/felixonmars/archriscv-packages RISC-V PKGBUILD repository pkgs from FelixOnMars]<br />
*#* Experiment with cross-compiling pkgs from RISC-V PKGBUILDs<br />
*#* Test building cross-compiled pkgs from all PKGBUILDs<br />
*#* Fix PKGBUILDs for which building fails<br />
*#* Add and commit patches for RISC-V PKGBUILDs to local repo<br />
*#* Push to origin (personal gitlab repo)<br />
*#* Send merge request upstream<br />
*# [https://github.com/archlinux-riscv/archlinux-cross-bootstrap archlinux-cross-bootstrap]<br />
*#* Use a build tree separate from the git source tree (is this the default?)<br />
*#* Initially, test build of all 4 stages '''without''' creating build/.KEEP_GOING file to ignore build errors; read .MAKEPKGLOG in each build directory, as well as teeing output of overall build process into a log file<br />
*#* Identify pkgs with build errors; debug and patch<br />
*#* Individually test rebuilding pkgs which failed with errors; correct patches<br />
*#* Add and commit patches to local git repository<br />
*#* Push to origin (personal gitlab repo)<br />
*#* Send merge request upstream<br />
<br />
== Basic thoughts and ideas for porting ==<br />
<br />
'''<u>Notes 2021/01/31</u>:'''<br />
<br />
Finally making some more progress:<br />
<br />
* Tested creating a Gentoo amd64/systemd chroot by unpacking a stage3 tarball into a target subdir, then pre-configuring, chrooting (via arch-chroot) into the target, and updating and configuring as needed the chroot system. This seems to work fine. Also installed a minimal X windows program, qemacs, inside the chroot and ran it after [[Chroot#Run_graphical_applications_from_chroot|allowing X windows programs access to the parent X server]].<br />
* Installed Arch base and base-devel into a target subdir via {{ic|pacstrap -c}}, then chrooted (via arch-chroot) into the target and configured the installation for timezone, locale and hostname.<br />
* Wrote, tested and debugged a few simple C programs on x86_64 in order to test compilation for the RISC-V riscv64-lp64d target. Then, after installing [https://www.archlinux.org/packages/community/x86_64/riscv64-linux-gnu-gcc/ riscv64-linux-gnu-gcc] to cross-compile, compiled and linked these to produce riscv64-lp64d binaries. After installing [https://www.archlinux.org/packages/extra/x86_64/qemu-arch-extra/ qemu-arch-extra], tested running them with {{ic|qemu-riscv64 -L /usr/riscv64-linux-gnu/}}.<br />
<br />
----<br />
<br />
'''<u>Additional info 2020/11/14</u>:'''<br />
<br />
Thanks to [https://bbs.archlinux.org/profile.php?id=84187 eschwartz], here are links to resources produced by [https://bbs.archlinux.org/profile.php?id=47848 FelixOnMars], who has tackled some RISC-V porting in his spare time:<br />
<br />
* [https://github.com/felixonmars/archriscv-packages Arch RISC-V Packages]<br />
* [https://archriscv.felixc.at/repo/ Arch RISC-V Repo]<br />
<br />
He has already finished a lot of work -- good stuff :) -- and one shouldn't spend most of one's effort reinventing the wheel ;)<br />
<br />
----<br />
<br />
'''<u>Revised 2020/11/13</u>:'''<br />
<br />
In essence, what I'm looking to do is:<br />
<br />
# Build RISC-V pkgs for Arch.<br />
# Build bare-metal RISC-V Arch disk images that can potentially be booted on physical hardware. Of course, device tree/drivers will have to be individually customized for each mainboard/SBC. Getting a disk image that works in KVM/QEMU would scratch my itch of getting most of the work done. Someone with more bare-metal hardware/driver experience may have to put the finishing touches on it.<br />
<br />
Hoping my understanding and itemized steps are more-or-less correct.... Methods to make porting progress:<br />
<br />
# So, if we want to build Arch RISC-V pkgs, we can simply cross-compile them. When a pkg won't build, we patch it until it does (assuming we can eventually get it working).<br />
#* Note that Vadim Kaushan (Disasm on github) has patched upstream [https://github.com/oaken-source/parabola-cross-bootstrap parabola-cross-bootstrap] from Andreas Grapentin (oaken-source on github) to create [https://github.com/archlinux-riscv/archlinux-cross-bootstrap archlinux-cross-bootstrap]. The last commit was January 2019. Some things may have broken since then, but it's worth trying (and patching) this to cross-compile base-devel pkgs for RISC-V. Then we can perhaps extend its coverage to more repository pkgs.<br />
# We can unpack a RISC-V stage tarball (or image). Then we can chroot into that tree -- or use something fancier like systemd-nspawn -- and use QEMU user-mode emulation to run RISC-V binaries.<br />
#* Question: Will this even work? Answer: Yes! The host needs the extra pkg [https://www.archlinux.org/packages/extra/x86_64/qemu-arch-extra/ qemu-arch-extra] and/or the AUR pkgs [https://aur.archlinux.org/packages/qemu-user-static/ qemu-user-static] (or [https://aur.archlinux.org/packages/qemu-user-static-bin/ qemu-user-static-bin]), plus [https://aur.archlinux.org/packages/binfmt-qemu-static/ binfmt-qemu-static] (or [https://aur.archlinux.org/packages/binfmt-qemu-static-all-arch/ binfmt-qemu-static-all-arch]). Copy the required binaries down into the chroot tree, and/or [[Binfmt_misc_for_Java#Registering_file_type_with_binfmt_misc|take care of binfmt_misc support]].<br />
#* The AUR pkg [https://aur.archlinux.org/packages/proot/ proot], [[PRoot|PRoot wiki page]], may be useful.<br />
#* Note that to run dynamically linked binaries the linker path must be passed; see [https://gist.github.com/Liryna/10710751 Running ARM Programs under Linux] (also listed below). Quote:<br />
#** If you want a dynamically-linked executable, you've to pass the linker path too:<br>arm-linux-gnueabihf-gcc -ohello hello.c<br>qemu-arm -L /usr/arm-linux-gnueabihf/ ./hello # or qemu-arm-static<br />
#* Some references:<br />
#** [[QEMU#Chrooting_into_arm/arm64_environment_from_x86_64|Chrooting into arm/arm64 environment from x86_64]]<br />
#** [https://ownyourbits.com/2018/06/13/transparently-running-binaries-from-any-architecture-in-linux-with-qemu-and-binfmt_misc/ Transparently running binaries from any architecture in Linux with QEMU and binfmt_misc]<br />
#** [https://unix.stackexchange.com/questions/41889/how-can-i-chroot-into-a-filesystem-with-a-different-architechture Stackexchange: Chroot into a filesystem with a different architecture]<br />
#** [https://dev.to/asacasa/how-to-set-up-binfmtmisc-for-qemu-the-hard-way-3bl4 How to set up binfmt_misc for qemu the hard way]<br />
#** [https://wiki.gentoo.org/wiki/Embedded_Handbook/General/Compiling_with_qemu_user_chroot Gentoo Embedded Handbook/Compiling with qemu user chroot]<br />
#** [https://wiki.debian.org/QemuUserEmulation Debian QEMU User Emulation]<br />
#** [https://gist.github.com/Liryna/10710751 Running ARM Programs under Linux]<br />
#** [[Creating_packages_for_other_distributions#Creating_Arch_packages_in_OBS_with_OSC|Creating Arch pkgs in openSUSE Open Build Service]]; hmmm, pkgs could be built automatically and remotely...<br />
#* I'm not an lxc/Docker/Kubernetes guy, but IIUC this would be roughly similar to running a RISC-V container (no?). This may come in handy to make sure binary packages work as expected, I suppose. But it doesn't seem like a better way to build RISC-V pkgs than cross-compiling, although we could build dev infrastructure then pkgs inside the container.<br />
# We can create a VM disk image (GPT, of course) and unpack inside it, e.g., [https://dev.gentoo.org/~dilfridge/stages/ a RISC-V Gentoo stage3] and complete installation and configuration. We also need to compile a RISC-V kernel and bbl (Berkeley Boot Loader from [https://github.com/riscv/riscv-pk the riscv-pk github project]) (some instructions [https://www.cnx-software.com/2018/03/16/how-to-run-linux-on-risc-v-with-qemu-emulator/ here] and [https://risc-v-getting-started-guide.readthedocs.io/en/latest/linux-qemu.html here]), or we may be able to download pre-built kernel/binaries/utilities. Then we can boot the image with KVM/QEMU.<br />
#* Also available are [https://wiki.debian.org/RISC-V Debian RISC-V], [https://fedoraproject.org/wiki/Architectures/RISC-V Fedora RISC-V] and [https://en.opensuse.org/openSUSE:RISC-V openSUSE RISC-V] images. In those wiki pages are some instructions for putting together bare-metal disk images.<br />
#* Also, there are ready-made QEMU images that one can download and run. See [https://wiki.debian.org/RISC-V#OS_.2F_filesystem_images Debian], [https://dl.fedoraproject.org/pub/alt/risc-v/repo/virt-builder-images/images/ Fedora], [http://fedora.riscv.rocks/koji/tasks?order=-completion_time&state=closed&view=flat&method=createAppliance additional Fedora], [https://en.altlinux.org/Regular/riscv64 AltLinux]. I guess this ready-made solution would be the ultimate in ease of use, but it doesn't solve the problem of building from scratch. But at least inside a running VM I could download and build dev infrastructure (pacman, et al) and then build pkgs as if I were running on bare metal. It might be only a little slower than running on an SBC?...<br />
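<br />
The user-mode-emulation chroot workflow from item 2 can be sketched as a dry-run script -- it only prints the commands (paths and pkg names are assumptions for a typical setup), so the sequence can be reviewed before running it for real as root:<br />
```shell
#!/bin/sh
# Dry-run sketch of the QEMU user-mode chroot workflow (item 2 above).
# Paths/pkg names are assumptions; nothing is executed for real -- each
# command is only printed so the order can be sanity-checked.
CHROOT=./riscv-root                    # unpacked RISC-V stage tarball or pacstrap tree
QEMU_BIN=/usr/bin/qemu-riscv64-static  # from qemu-user-static(-bin)
run() { echo "+ $*"; }                 # change to eval "$*" to execute for real

run mkdir -p "$CHROOT/usr/bin"
run cp "$QEMU_BIN" "$CHROOT/usr/bin/"        # interpreter must be reachable inside the chroot
run systemctl restart systemd-binfmt.service # (re)register binfmt_misc handlers
run arch-chroot "$CHROOT" uname -m           # should report riscv64
```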
<br />
So, in the end, the best solutions seem to be:<br />
* Cross-compiling using existing scripting/tooling to build Arch pkgs. [https://github.com/archlinux-riscv/archlinux-cross-bootstrap archlinux-cross-bootstrap] should build base-devel pkgs. Using its scripts with modified PKGBUILDs, it should be possible to build other repository pkgs.<br />
* To test cross-compiled pkgs, create a chroot to use via QEMU user-mode emulation (after setting up [https://aur.archlinux.org/packages/qemu-user-static/ qemu-user-static] and [[Binfmt_misc_for_Java#Registering_the_file_type_with_binfmt_misc|binfmt_misc]]; note [https://aur.archlinux.org/packages/proot/ proot]). Follow the usual installation instructions using pacstrap, etc., to install RISC-V pkgs inside the chroot, then complete installation and configuration.<br />
** Note that this creates a running RISC-V Arch system.... Tinker with it to your heart's content. Then you could make an archive (tar) snapshot of it and use this as the basis of a bare-metal VM image :)<br />
** However, since these cross-compiled pkgs will contain dynamically linked binaries, I'm not sure ATM how binfmt_misc can supply the "-L" flag to reference the RISC-V linker path. Then again, once inside the chroot, will user-mode QEMU simply use the .so libraries present in the chroot?....<br />
* For final image building, create a VM disk image, follow the usual installation instructions to install an Arch RISC-V system inside the image, configure it just like a bare-metal installation, and see if it works :)<br />
* When running the VM disk image like bare metal, the boot chain inside the image will need to work to load the kernel. Otherwise, the bootloader and kernel used to boot and run the VM will have to live outside the disk image. We want to create a turnkey disk image that includes bootloading....<br />
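<br />
For the full-system path, the eventual boot test might look like the command assembled below. File names and most flags here are assumptions for illustration; {{ic|-bios default}} selects QEMU's bundled OpenSBI firmware, a newer alternative to the bbl approach mentioned above.<br />
```shell
#!/bin/sh
# Assembles (but does not run) a qemu-system-riscv64 invocation for booting
# a finished disk image.  IMG/KERNEL names are placeholders; -bios default
# selects QEMU's bundled OpenSBI firmware (a newer alternative to bbl).
IMG=archriscv.img
KERNEL=Image   # RISC-V kernel built or downloaded separately
boot_cmd="qemu-system-riscv64 -machine virt -m 2G -smp 2 \
 -bios default -kernel $KERNEL \
 -append 'root=/dev/vda1 rw console=ttyS0' \
 -drive file=$IMG,format=raw,if=virtio \
 -device virtio-net-device,netdev=net0 -netdev user,id=net0 \
 -nographic"
echo "$boot_cmd"   # run manually once the image is ready
```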
<br />
----<br />
<br />
'''<u>Revised 2020/11/11</u>:'''<br />
<br />
Thanks to [https://bbs.archlinux.org/profile.php?id=36741 Awebb] for helping to distill disorganized ideas.<br />
<br />
Perhaps the best way to make progress on porting to RISC-V (mainboard and embedded/SBC) is simply to build Arch packages. The bootstrapping and embedded intricacies can be dealt with later. Plus, building packages can be done bit by bit when time is available. So....<br />
<br />
There are two ways to build packages, I believe:<br />
<br />
# Cross-compile using available [[#ArchTools|Arch RISC-V tooling]] (see below)<br />
# Install an existing RISC-V system in a KVM/QEMU VM<br />
<br />
Frankly, the VM method seems the easiest (and laziest) way. (Update: Probably not as easy/lazy as I was thinking.) There are [https://dev.gentoo.org/~dilfridge/stages/ Gentoo stage3 tarballs] so, assuming RISC-V support in KVM/QEMU is generally bug-free, setting up a RISC-V Gentoo VM should be a mechanical process. After this, Arch tools would be compiled inside the VM, and then base, base-devel and then core pkgs would be built. (For related information, see [https://wiki.gentoo.org/wiki/Raspberry_Pi the Gentoo 32-bit RaspberryPi page] and [https://wiki.gentoo.org/wiki/Raspberry_Pi_3_64_bit_Install the Gentoo 64-bit RaspberryPi page].)<br />
<br />
[[#RISC-V-QEMU|See links below]] for documentation and instructions.<br />
<br />
----<br />
<br />
'''<u>Original text</u>:'''<br />
<br />
According to the initial press release on the above '''<u>α</u>''' hardware target/SoC board, this "Allwinner RISC-V processor will run the Debian Linux operating system (Tina OS)." Provided that "Tina OS" licensing is "clean," one should be able to boot the supplied Debian OS and use that to develop an Arch bootstrap, or perhaps an entire port?<br />
<br />
IIUC, that should include the bootstrapping and device tree stuff along with device drivers for on-board equipment. (Drivers for optional devices and for devices on available modules/daughter cards will need to be developed, too.)<br />
<br />
After that, base and base-devel packages need to be built. Then building packages can be done on bare metal. However, it would obviously be faster and easier to build them using the available RISC-V toolchain in Arch. Right? (I wonder how [https://archlinuxarm.org/ ArchLinuxArm] handles building packages for and maintaining their repositories, as well as their AUR? Note to self: Research ALARM infrastructure....)<br />
<br />
Topics to be researched and written up or references pointed to:<br />
<br />
* HOWTO bootstrap a new port from ground zero -- See references below<br />
* How the bootstrapping process is booted/initiated<br />
* How device trees work<br />
* How device drivers work in bootstrapping, e.g., via initrd image<br />
* Cross-building Arch packages for RISC-V<br />
* Infra for a new port's repositories (CB/CI tools, repository hosting, front end w/hosting, etc.)<br />
* Infra for a new port's AUR<br />
* Possibility of free/open (non-binary blob) bootstrapping image/BIOS (similar to [https://www.coreboot.org/ coreboot])<br />
<br />
== Necessary tools for porting ==<br />
<br />
=== <span id="RISC-V-QEMU"></span>KVM/QEMU RISC-V emulation documentation, information and instructions ===<br />
<br />
[https://www.archlinux.org/packages/extra/x86_64/qemu-arch-extra/ qemu-arch-extra pkg] required for RISC-V (and other architecture) full-system emulation<br />
<br />
[https://aur.archlinux.org/packages/qemu-user-static/ qemu-user-static] (or [https://aur.archlinux.org/packages/qemu-user-static-bin/ qemu-user-static-bin]) for running QEMU user-mode emulation inside a chroot; [https://aur.archlinux.org/packages/binfmt-qemu-static/ binfmt-qemu-static] (or [https://aur.archlinux.org/packages/binfmt-qemu-static-all-arch/ binfmt-qemu-static-all-arch]) for these<br />
<br />
[[QEMU#Installation|Arch QEMU pkgs and variants from repository and AUR]]<br />
<br />
[https://risc-v-getting-started-guide.readthedocs.io/en/latest/linux-qemu.html RISC-V Linux on QEMU Getting Started Guide]<br />
<br />
[https://wiki.qemu.org/Documentation/Platforms/RISCV QEMU RISC-V Documentation]<br />
<br />
(See also: [https://www.sifive.com/blog/risc-v-qemu-part-2-the-risc-v-qemu-port-is-upstream SiFive RISC-V QEMU Upstream Announcement])<br />
<br />
[https://www.cnx-software.com/2018/03/16/how-to-run-linux-on-risc-v-with-qemu-emulator/ How To Run Linux on RISC-V with QEMU Emulator]<br />
<br />
=== <span id="ArchTools"></span>Tools available under Arch ===<br />
<br />
==== Tools from repositories ====<br />
<br />
[https://www.archlinux.org/groups/x86_64/risc-v/ RISC-V group]<br />
<br />
[https://www.archlinux.org/packages/?sort=&q=riscv&maintainer=&flagged= Results of "riscv" keyword search]<br />
<br />
==== Tools from AUR ====<br />
<br />
[https://aur.archlinux.org/packages/?K=RISCV&SB=p Results of AUR "RISCV" keyword search]<br />
<br />
[https://aur.archlinux.org/packages/?O=0&SeB=nd&K=risc-v&outdated=&SB=n&SO=a&PP=50&do_Search=Go Results of AUR "risc-v" keyword search]<br />
<br />
=== Tools from other dists ===<br />
<br />
[https://dev.gentoo.org/~dilfridge/stages/ Experimental Gentoo stages]<br />
<br />
[https://wiki.debian.org/RISC-V#OS_.2F_filesystem_images Debian images]<br />
<br />
[https://dl.fedoraproject.org/pub/alt/risc-v/repo/virt-builder-images/images/ Fedora images]<br />
<br />
[http://fedora.riscv.rocks/koji/tasks?order=-completion_time&state=closed&view=flat&method=createAppliance Additional Fedora images]<br />
<br />
[https://en.altlinux.org/Regular/riscv64 AltLinux RISC-V64 port]<br />
<br />
=== Non-Arch tools ===<br />
<br />
[http://crosstool-ng.github.io/ Crosstool-ng]<br />
<br />
[https://buildroot.org/ Buildroot]<br />
<br />
[https://trac.clfs.org/ Cross-Linux From Scratch] (this project seems to be dormant...)<br />
<br />
== As-is experimental RISC-V pkgs from work done by FelixOnMars ==<br />
<br />
[https://github.com/felixonmars/archriscv-packages Arch RISC-V Packages]<br />
<br />
[https://archriscv.felixc.at/repo/ Arch RISC-V Repo]<br />
<br />
== References -- '''PLEASE''' add to and curate this list! ==<br />
<br />
[https://readthedocs.org/projects/risc-v-getting-started-guide/downloads/pdf/latest/ RISC-V Getting Started Guide]<br />
<br />
https://github.com/archlinux-riscv/archlinux-cross-bootstrap<br />
<br />
https://bbs.archlinux.org/viewtopic.php?id=237370<br />
<br />
https://bbs.archlinux.org/viewtopic.php?id=260639<br />
<br />
https://five-embeddev.com/toolchain/2019/06/26/gcc-targets/<br />
<br />
https://wiki.qemu.org/Documentation/Platforms/RISCV<br />
<br />
https://wiki.gentoo.org/wiki/Project:RISC-V<br />
<br />
https://wiki.gentoo.org/wiki/Raspberry_Pi (for embedded build and cross-compiling info)<br />
<br />
https://wiki.gentoo.org/wiki/Raspberry_Pi_3_64_bit_Install<br />
<br />
https://wiki.archlinux.org/index.php/Cross-compiling_tools_package_guidelines<br />
<br />
https://github.com/crosstool-ng/crosstool-ng<br />
<br />
https://git.busybox.net/buildroot<br />
<br />
https://github.com/cross-lfs<br />
<br />
== Step-by-step "HOWTO" for porting ==<br />
<br />
== Detailed instructions and explanations ==<br />
<br />
== Issues addressed, with work-arounds ==<br />
<br />
== Bug report list -- Please make notes during triage, and move to above "Issues addressed" list when "solved" ==</div>Cmsiglerhttps://wiki.archlinux.org/index.php?title=User:Cmsigler/RISC-V&diff=651913User:Cmsigler/RISC-V2021-02-11T10:25:37Z<p>Cmsigler: Add recommended options for emerge @world</p>
<hr />
<div>'''Working Document: Planning to Port Arch to RISC-V -- please edit :)'''<br />
<br />
Please edit this to your heart's content, without destroying prior work of value, or of historical or reference significance. I would like to construct a plan for porting Arch to the newly emerging RISC-V hardware.<br />
<br />
I am very much a novice to this. I've had success running ARM SBCs via [https://archlinuxarm.org/ ArchLinuxArm], but I've never ported anything :\<br />
<br />
== A simple RISC-V logo ==<br />
<br />
I quickly created a personal, non-professional, simple text logo for RISC-V. I don't have sysop upload privilege, so here's a link to view:<br />
<br />
https://drive.google.com/file/d/1WKZLl0G_BriPsmRQaMz6yGtIcjJ7pqmd/view?usp=sharing<br />
<br />
I license this logo under CC BY-SA 2.0. If it is of any interest, feel free to use it with attribution.<br />
<br />
== Target hardware ==<br />
<br />
# '''<u>α</u>''' -- [https://www.cnx-software.com/2020/11/09/xuantie-c906-based-allwinner-risc-v-processor-to-power-12-linux-sbcs/ Unnamed Allwinner single-core XuanTie C906 64-bit RISC-V (RV64GCV) processor] @ up to 1 GHz; 22nm manufacturing process. See also [http://linuxgizmos.com/risc-v-based-allwinner-chip-to-debut-on-13-linux-hacker-board/ this LinuxGizmos article].<br />
#* This first one is nice because it's cheap (US$12.50 IIUC), albeit underpowered compared to ARM SoC boards available at this time (November, 2020).<br />
# '''<u>β</u>''' -- RISC-V International Open Source (RIOS) Laboratory is collaborating with Imagination Technologies to bring [https://www.cnx-software.com/2020/09/04/picorio-linux-risc-v-sbc-is-an-open-source-alternative-to-raspberry-pi-board/ PicoRio RISC-V SBC] to market at a price point similar to Raspberry Pi. See also [https://riscv.org/blog/2020/11/picorio-the-raspberry-pi-like-small-board-computer-for-risc-v/ this blog article from RISC-V International].<br />
#* This second one is also projected to cost close to the RPi.<br />
# '''<u>γ</u>''' -- BeagleBoard.org and Seeed unveiled an open-spec, $119-and-up [https://beagleboard.org/beaglev “BeagleV” SBC] with a StarFive JH7100 SoC with dual SiFive U74 RISC-V cores, 1-TOPS NPU, DSP, and VPU. The SBC ditches the Cape expansion for a Pi-like 40-pin GPIO. See also [http://linuxgizmos.com/beaglev-sbc-runs-linux-on-ai-enabled-risc-v-soc/ this LinuxGizmos article], and [https://www.extremetech.com/computing/319187-new-beagle-board-offers-dual-core-risc-v-targets-ai-applications this ExtremeTech article].<br />
#* Initial prices are given as US$119 and US$149, which is much more affordable than the original system mobo.<br />
<br />
(Any others? I don't plan on spending US$1,000 on a development system/motherboard.)<br />
<br />
== Experimental development: Procedures and steps used ==<br />
<br />
* Initial setup, testing for further work<br />
*# Git repository setup -- Push commits to gitlab repos, mirror gitlab to forked github repos<br />
*#* Clone RISC-V repositories on github to local repos<br />
*#* Fork RISC-V repositories on github<br />
*#* Create gitlab repositories<br />
*#* Rename github origins for cloned repos, then add gitlab repos as origins<br />
*#* Push local repos to gitlab<br />
*#* Configure gitlab repos to mirror to github forks using a personal access token generated on github<br />
*# Test running x86_64/amd64 installations via chroot (or systemd-nspawn)<br />
*#* Gentoo:<br />
*#** Create a subdirectory for the chroot installation; untar Gentoo amd64 stage3 tarball into subdirectory {{bc|$ cd ./subdir/ && sudo tar xpvf stage3-*.tar.xz --xattrs-include<nowiki>=</nowiki>'*.*' --numeric-owner && cd ..}}<br />
*#** Configure and emerge:<br />
*#**# Configure Gentoo compilation options in {{ic|./subdir/etc/portage/make.conf}} to set GENTOO_MIRRORS and edit other env vars; copy DNS info into {{ic|./subdir/etc/resolv.conf}}<br />
*#**# Enter chroot environment {{bc|$ sudo arch-chroot ./subdir/}}<br />
*#**# {{bc|# source /etc/profile && source $HOME/.bashrc}}<br />
*#**# Optionally, update default prompt {{bc|# export PS1<nowiki>=</nowiki>"(chroot) $<nowiki>{</nowiki>PS1<nowiki>}</nowiki>"}}<br />
*#**# {{bc|# emerge --sync}}<br />
*#**# Double-check system profile (e.g., amd64 systemd, rv64_lp64d systemd) is correct {{bc|# eselect profile list}}<br />
*#**# {{bc|# emerge --ask --verbose --update --deep --newuse @world}}<br />
*#**# Optionally, configure USE variable {{bc|# emerge --info <nowiki>|</nowiki> grep ^USE}} {{bc|# nano -w /etc/portage/make.conf}}<br />
*#**# Configure timezone {{bc|# echo "Region/Location" > /etc/timezone}} {{bc|# emerge --config sys-libs/timezone-data}}<br />
*#**# Configure locales {{bc|# nano -w /etc/locale.gen}} Add desired locales, e.g., 'en_US.UTF-8 UTF-8', 'en_US ISO-8859-1'; remove unnecessary entries {{bc|# locale-gen}}<br />
*#**# Select locale {{bc|# nano -w /etc/env.d/02locale}} Add, e.g., 'LANG<nowiki>=</nowiki>"en_US.UTF-8"' and 'LC_COLLATE<nowiki>=</nowiki>"C"' {{bc|# env-update && source /etc/profile}}<br />
*#**# Optionally, set {{ic|/etc/conf.d/hostname}}; add 'hostname<nowiki>=</nowiki>"system_name"'<br />
*#**# Exit chroot environment {{bc|# exit}}<br />
*#** Chroot (or systemd-nspawn) into subdirectory and test normal administrative operations, e.g., {{bc|# emerge --sync}} {{bc|# emerge --ask --verbose --update --deep --newuse @world}}<br />
*#* Arch:<br />
*#** Create a subdirectory for the chroot installation; install base, base-devel and a basic editor {{bc|$ sudo pacstrap -c ./subdir/ base base-devel nano}}<br />
*#** Configure:<br />
*#**# Copy DNS info into {{ic|./subdir/etc/resolv.conf}}<br />
*#**# Create bind mountpoint for subdir {{bc|$ sudo mount --bind ./subdir/ ./subdir/}}<br />
*#**# Enter chroot environment {{bc|$ sudo arch-chroot ./subdir/}}<br />
*#**# {{bc|# source /etc/profile}}<br />
*#**# Optionally, update default prompt {{bc|# export PS1<nowiki>=</nowiki>"(chroot) $<nowiki>{</nowiki>PS1<nowiki>}</nowiki>"}}<br />
*#**# Configure timezone {{bc|# ln -sf /usr/share/zoneinfo/Region/Location /etc/localtime}}<br />
*#**# Configure locales {{bc|# nano -w /etc/locale.gen}} Uncomment desired locales, e.g., 'en_US.UTF-8 UTF-8', 'en_US ISO-8859-1' {{bc|# locale-gen}} {{bc|# nano -w /etc/locale.conf}} Add, e.g., 'LANG<nowiki>=</nowiki>"en_US.UTF-8"' and 'LC_COLLATE<nowiki>=</nowiki>"C"'<br />
*#**# Optionally, update system {{bc|# pacman -Syu}}<br />
*#**# Optionally, set {{ic|/etc/hostname}}; this file should contain only the hostname itself, e.g., 'system_name'<br />
*#**# Exit chroot environment {{bc|# exit}}<br />
*#** Chroot (or systemd-nspawn) into subdirectory and test normal administrative operations, e.g., {{bc|# pacman -Syu}}<br />
*# Test running rv64_lp64d/riscv64 installations via chroot (or systemd-nspawn) using same installation and configuration procedures<br />
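<br />
The Arch chroot procedure above condenses to the following dry-run helper. This is a sketch only: the subdir path is a placeholder, each step is printed rather than executed, and every printed command needs root.<br />
```shell
#!/bin/sh
# Dry-run condensation of the Arch chroot test procedure above.  Each step
# is printed rather than executed; the subdir path is a placeholder.
SUBDIR=./subdir
say() { echo "+ $*"; }

say pacstrap -c "$SUBDIR" base base-devel nano    # install into subdir
say cp /etc/resolv.conf "$SUBDIR/etc/resolv.conf" # DNS inside the chroot
say mount --bind "$SUBDIR" "$SUBDIR"              # bind mountpoint for arch-chroot
say arch-chroot "$SUBDIR"
# ...inside the chroot:
say ln -sf /usr/share/zoneinfo/Region/Location /etc/localtime
say locale-gen
say pacman -Syu
```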
<br />
* RISC-V binary tests and demos<br />
*# Test compiling, linking RISC-V (riscv64-lp64d) C programs<br />
*#* Install [https://www.archlinux.org/packages/community/x86_64/riscv64-linux-gnu-gcc/ riscv64-linux-gnu-gcc], [https://www.archlinux.org/packages/extra/x86_64/qemu-arch-extra/ qemu-arch-extra], [https://aur.archlinux.org/packages/qemu-user-static-bin/ qemu-user-static-bin], [https://aur.archlinux.org/packages/binfmt-qemu-static-all-arch/ binfmt-qemu-static-all-arch]<br />
*#* Write hello_world.c, other test programs<br />
*#* Compile: {{bc|$ riscv64-linux-gnu-gcc -c -Wall -o hello_world-riscv64.o hello_world.c}}<br />
*#* Link: {{bc|$ riscv64-linux-gnu-gcc -o hello_world-riscv64 hello_world-riscv64.o}}<br />
*#* Run: {{bc|$ qemu-riscv64 -L /usr/riscv64-linux-gnu/ ./hello_world-riscv64}}<br />
*#* Any additional tests, experiments<br />
*# [https://archriscv.felixc.at/repo/ Repository of pre-built RISC-V pkgs from FelixOnMars]<br />
*#* Configure binfmt-qemu-static to run RISC-V binaries as native programs<br />
*#* Following the Arch installation guide, use pacstrap to install base pkgs into risc-v subtree<br />
*#* Run arch-chroot to run subtree as a RISC-V container<br />
*#* Test running RISC-V binaries from coreutils<br />
*#* Test running other binaries, and installing other RISC-V pkgs from this repository<br />
*#* Test using systemd-nspawn to run subtree as a RISC-V container<br />
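<br />
For reference, the binfmt_misc registration that binfmt-qemu-static sets up for riscv64 ELF binaries looks roughly like the string built below. The magic/mask bytes follow qemu's binfmt conventions and the interpreter path is an assumption -- verify both against the installed pkg before relying on them.<br />
```shell
#!/bin/sh
# Builds a binfmt_misc registration line for riscv64 ELF binaries in the
# format name:type:offset:magic:mask:interpreter:flags.  Magic/mask bytes
# follow qemu's binfmt conventions; the interpreter path is an assumption.
magic='\x7fELF\x02\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\xf3\x00'
mask='\xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff'
reg=":qemu-riscv64:M::${magic}:${mask}:/usr/bin/qemu-riscv64-static:CF"
echo "$reg"
# To register by hand (as root) instead of via systemd-binfmt:
#   echo "$reg" > /proc/sys/fs/binfmt_misc/register
```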
<br />
* Cross-bootstrapping to build basic RISC-V system<br />
*# [https://github.com/felixonmars/archriscv-packages RISC-V PKGBUILD repository pkgs from FelixOnMars]<br />
*#* Experiment with cross-compiling pkgs from RISC-V PKGBUILDs<br />
*#* Test building cross-compiled pkgs from all PKGBUILDs<br />
*#* Fix PKGBUILDs for which building fails<br />
*#* Add and commit patches for RISC-V PKGBUILDs to local repo<br />
*#* Push to origin (personal gitlab repo)<br />
*#* Send merge request upstream<br />
*# [https://github.com/archlinux-riscv/archlinux-cross-bootstrap archlinux-cross-bootstrap]<br />
*#* Use a build tree separate from the git source tree (is this the default?)<br />
*#* Initially, test build of all 4 stages '''without''' creating build/.KEEP_GOING file to ignore build errors; read .MAKEPKGLOG in each build directory, as well as teeing output of overall build process into a log file<br />
*#* Identify pkgs with build errors; debug and patch<br />
*#* Individually test rebuilding pkgs which failed with errors; correct patches<br />
*#* Add and commit patches to local git repository<br />
*#* Push to origin (personal gitlab repo)<br />
*#* Send merge request upstream<br />
<br />
** However, since these cross-compiled pkgs will contain dynamically linked binaries, I'm not sure ATM how binfmt_misc will work to run them with the "-L" flag to reference the RISC-V linker path? Although, once inside the chroot will user-mode QEMU just use the .so libraries inside the chroot?....<br />
* For final image building, create a VM disk image, follow the usual installation instructions to install an Arch RISC-V system inside the image, configure it just like a bare-metal installation, and see if it works :)<br />
* When running the VM disk image like bare metal, bootloading to boot the kernel will need to work. Otherwise, the bootloader and kernel used to boot and run the VM will have to be exterior to the disk image. We want to create a turnkey disk image including bootloading....<br />
<br />
----<br />
<br />
'''<u>Revised 2020/11/11</u>:'''<br />
<br />
Thanks to [https://bbs.archlinux.org/profile.php?id=36741 Awebb] for helping to distill disorganized ideas.<br />
<br />
Perhaps the best way to make progress on porting to RISC-V (mainboard and embedded/SBC) is simply to build Arch packages. The bootstrapping and embedded intricacies can be dealt with later. Plus, building packages can be done bit by by when time is available. So....<br />
<br />
There are two ways to build packages, I believe:<br />
<br />
# Cross-compile using available [[#ArchTools|Arch RISC-V tooling]] (see below)<br />
# Install an existing RISC-V system in a KVM/QEMU VM<br />
<br />
Frankly, the VM method seems the easiest (and laziest) way. (Update: Probably not as easy/lazy as I was thinking.) There are [https://dev.gentoo.org/~dilfridge/stages/ Gentoo stage3 tarballs] so, assuming RISC-V support in KVM/QEMU is generally bug-free, setting up a RISC-V Gentoo VM should be a mechanical process. After this, Arch tools would be compiled inside the VM, and then base, base-devel and then core pkgs would be built. (For related information, see [https://wiki.gentoo.org/wiki/Raspberry_Pi the Gentoo 32-bit RaspberryPi page] and [https://wiki.gentoo.org/wiki/Raspberry_Pi_3_64_bit_Install the Gentoo 64-bit RaspberryPi page].)<br />
<br />
[[#RISC-V-QEMU|See links below]] for documentation and instructions.<br />
<br />
----<br />
<br />
'''<u>Original text</u>:'''<br />
<br />
According to the initial press release on the above '''<u>α</u>''' hardware target/SoC board, this "Allwinner RISC-V processor will run the Debian Linux operating system (Tina OS)." Provided that "Tina OS" licensing is "clean," one should be able to boot the supplied Debian OS and use that to develop an Arch bootstrap, or perhaps an entire port?<br />
<br />
IIUC, that should include the bootstrapping and device tree stuff along with device drivers for on-board equipment. (Drivers for optional devices and for devices on available modules/daughter cards will need to be developed, too.)<br />
<br />
After that, base and base-devel packages need to be built. Then building packages can be done on bare metal. However, it would obviously be faster and easier to build them using the available RISC-V toolchain in Arch. Right? (I wonder how [https://archlinuxarm.org/ ArchLinuxArm] handles building packages for and maintaining their repositories, as well as their AUR? Note to self: Research ALARM infrastructure....)<br />
<br />
Topics to be researched and written up or references pointed to:<br />
<br />
* HOWTO bootstrap a new port from ground zero -- See references below<br />
* How the bootstrapping process is booted/initiated<br />
* How device trees work<br />
* How device drivers work in bootstrapping, e.g., via initrd image<br />
* Cross-building Arch packages for RISC-V<br />
* Infra for a new port's repositories (CB/CI tools, repository hosting, front end w/hosting, etc.)<br />
* Infra for a new port's AUR<br />
* Possibility of free/open (non-binary blob) bootstrapping image/BIOS (similar to [https://www.coreboot.org/ coreboot])<br />
<br />
== Necessary tools for porting ==<br />
<br />
=== <span id="RISC-V-QEMU"></span>KVM/QEMU RISC-V emulation documentation, information and instructions ===<br />
<br />
[https://www.archlinux.org/packages/extra/x86_64/qemu-arch-extra/ qemu-arch-extra pkg] required for RISC-V (and other architecture) full-system emulation<br />
<br />
[https://aur.archlinux.org/packages/qemu-user-static/ qemu-user-static] (or <br />
[https://aur.archlinux.org/packages/qemu-user-static-bin/ qemu-user-static-bin]) for running QEMU user-mode emulation inside a chroot; [https://aur.archlinux.org/packages/binfmt-qemu-static/ binfmt-qemu-static] (or [https://aur.archlinux.org/packages/binfmt-qemu-static-all-arch/ binfmt-qemu-static-all-arch]) for these<br />
<br />
[[QEMU#Installation|Arch QEMU pkgs and variants from repository and AUR]]<br />
<br />
[https://risc-v-getting-started-guide.readthedocs.io/en/latest/linux-qemu.html RISC-V Linux on QEMU Getting Started Guide]<br />
<br />
[https://wiki.qemu.org/Documentation/Platforms/RISCV QEMU RISC-V Documentation]<br />
<br />
(See also: [https://www.sifive.com/blog/risc-v-qemu-part-2-the-risc-v-qemu-port-is-upstream SiFive RISC-V QEMU Upstream Announcement])<br />
<br />
[https://www.cnx-software.com/2018/03/16/how-to-run-linux-on-risc-v-with-qemu-emulator/ How To Run Linux on RISC-V with QEMU Emulator]<br />
<br />
=== <span id="ArchTools"></span>Tools available under Arch ===<br />
<br />
==== Tools from repositories ====<br />
<br />
[https://www.archlinux.org/groups/x86_64/risc-v/ RISC-V group]<br />
<br />
[https://www.archlinux.org/packages/?sort=&q=riscv&maintainer=&flagged= Results of "riscv" keyword search]<br />
<br />
==== Tools from AUR ====<br />
<br />
[https://aur.archlinux.org/packages/?K=RISCV&SB=p Results of AUR "RISCV" keyword search]<br />
<br />
[https://aur.archlinux.org/packages/?O=0&SeB=nd&K=risc-v&outdated=&SB=n&SO=a&PP=50&do_Search=Go Results of AUR "risc-v" keyword search]<br />
<br />
=== Tools from other dists ===<br />
<br />
[https://dev.gentoo.org/~dilfridge/stages/ Experimental Gentoo stages]<br />
<br />
[https://wiki.debian.org/RISC-V#OS_.2F_filesystem_images Debian images]<br />
<br />
[https://dl.fedoraproject.org/pub/alt/risc-v/repo/virt-builder-images/images/ Fedora images]<br />
<br />
[http://fedora.riscv.rocks/koji/tasks?order=-completion_time&state=closed&view=flat&method=createAppliance Additional Fedora images]<br />
<br />
[https://en.altlinux.org/Regular/riscv64 AltLinux RISC-V64 port]<br />
<br />
=== Non-Arch tools ===<br />
<br />
[http://crosstool-ng.github.io/ Crosstool-ng]<br />
<br />
[https://buildroot.org/ Buildroot]<br />
<br />
[https://trac.clfs.org/ Cross-Linux From Scratch] (this project seems to be dormant...)<br />
<br />
== As-is experimental RISC-V pkgs from work done by FelixOnMars ==<br />
<br />
[https://github.com/felixonmars/archriscv-packages Arch RISC-V Packages]<br />
<br />
[https://archriscv.felixc.at/repo/ Arch RISC-V Repo]<br />
<br />
== References -- '''PLEASE''' add to and curate this list! ==<br />
<br />
[https://readthedocs.org/projects/risc-v-getting-started-guide/downloads/pdf/latest/ RISC-V Getting Started Guide]<br />
<br />
https://github.com/archlinux-riscv/archlinux-cross-bootstrap<br />
<br />
https://bbs.archlinux.org/viewtopic.php?id=237370<br />
<br />
https://bbs.archlinux.org/viewtopic.php?id=260639<br />
<br />
https://five-embeddev.com/toolchain/2019/06/26/gcc-targets/<br />
<br />
https://wiki.qemu.org/Documentation/Platforms/RISCV<br />
<br />
https://wiki.gentoo.org/wiki/Project:RISC-V<br />
<br />
https://wiki.gentoo.org/wiki/Raspberry_Pi (for embedded build and cross-compiling info)<br />
<br />
https://wiki.gentoo.org/wiki/Raspberry_Pi_3_64_bit_Install<br />
<br />
https://wiki.archlinux.org/index.php/Cross-compiling_tools_package_guidelines<br />
<br />
https://github.com/crosstool-ng/crosstool-ng<br />
<br />
https://git.busybox.net/buildroot<br />
<br />
https://github.com/cross-lfs<br />
<br />
== Step-by-step "HOWTO" for porting ==<br />
<br />
== Detailed instructions and explanations ==<br />
<br />
== Issues addressed, with work-arounds ==<br />
<br />
== Bug report list -- Please make notes during triage, and move to above "Issues addressed" list when "solved" ==</div>Cmsiglerhttps://wiki.archlinux.org/index.php?title=User:Cmsigler/RISC-V&diff=651862User:Cmsigler/RISC-V2021-02-10T21:42:46Z<p>Cmsigler: More updates and improvements for work done</p>
<hr />
<div>'''Working Document: Planning to Port Arch to RISC-V -- please edit :)'''<br />
<br />
Please edit this to your heart's content, without destroying prior work of value, or of historical or reference significance. I would like to construct a plan for porting Arch to the newly emerging RISC-V hardware.<br />
<br />
I am very much a novice to this. I've had success running ARM SBCs via [https://archlinuxarm.org/ ArchLinuxArm], but I've never ported anything :\<br />
<br />
== A simple RISC-V logo ==<br />
<br />
I quickly created a personal, non-professional, simple text logo for RISC-V. I don't have sysop upload privilege, so here's a link to view:<br />
<br />
https://drive.google.com/file/d/1WKZLl0G_BriPsmRQaMz6yGtIcjJ7pqmd/view?usp=sharing<br />
<br />
I license this logo under CC BY-SA 2.0. If it is of any interest, feel free to use it with attribution.<br />
<br />
== Target hardware ==<br />
<br />
# '''<u>α</u>''' -- [https://www.cnx-software.com/2020/11/09/xuantie-c906-based-allwinner-risc-v-processor-to-power-12-linux-sbcs/ Unnamed Allwinner single-core XuanTie C906 64-bit RISC-V (RV64GCV) processor] @ up to 1 GHz; 22nm manufacturing process. See also [http://linuxgizmos.com/risc-v-based-allwinner-chip-to-debut-on-13-linux-hacker-board/ this LinuxGizmos article].<br />
#* This first one is nice because it's cheap (US$12.50 IIUC), albeit underpowered compared to ARM SoC boards available at this time (November, 2020).<br />
# '''<u>β</u>''' -- RISC-V International Open Source (RIOS) Laboratory is collaborating with Imagination Technologies to bring [https://www.cnx-software.com/2020/09/04/picorio-linux-risc-v-sbc-is-an-open-source-alternative-to-raspberry-pi-board/ PicoRio RISC-V SBC] to market at a price point similar to Raspberry Pi. See also [https://riscv.org/blog/2020/11/picorio-the-raspberry-pi-like-small-board-computer-for-risc-v/ this blog article from RISC-V International].<br />
#* This second one is also projected to cost close to the RPi.<br />
# '''<u>γ</u>''' -- BeagleBoard.org and Seeed unveiled an open-spec, $119-and-up [https://beagleboard.org/beaglev “BeagleV” SBC] with a StarFive JH7100 SoC with dual SiFive U74 RISC-V cores, 1-TOPS NPU, DSP, and VPU. The SBC ditches the Cape expansion for a Pi-like 40-pin GPIO. See also [http://linuxgizmos.com/beaglev-sbc-runs-linux-on-ai-enabled-risc-v-soc/ this LinuxGizmos article], and [https://www.extremetech.com/computing/319187-new-beagle-board-offers-dual-core-risc-v-targets-ai-applications this ExtremeTech article].<br />
#* Initial prices are given as US$119 and US$149, which is much more affordable than the original system mobo.<br />
<br />
(Any others? I don't plan on spending US$1,000 on a development system/motherboard.)<br />
<br />
== Experimental development: Procedures and steps used ==<br />
<br />
* Initial setup, testing for further work<br />
*# Git repository setup -- Push commits to gitlab repos, mirror gitlab to forked github repos<br />
*#* Clone RISC-V repositories on github to local repos<br />
*#* Fork RISC-V repositories on github<br />
*#* Create gitlab repositories<br />
*#* Rename github origins for cloned repos, then add gitlab repos as origins<br />
*#* Push local repos to gitlab<br />
*#* Configure gitlab repos to mirror to github forks using a personal access token generated on github<br />
*# Test running x86_64/amd64 installations via chroot (or systemd-nspawn)<br />
*#* Gentoo:<br />
*#** Create a subdirectory for the chroot installation; untar Gentoo amd64 stage3 tarball into subdirectory {{bc|$ cd ./subdir/ && sudo tar xpvf stage3-*.tar.xz --xattrs-include<nowiki>=</nowiki>'*.*' --numeric-owner && cd ..}}<br />
*#** Configure and emerge:<br />
*#**# Configure Gentoo compilation options in {{ic|./subdir/etc/portage/make.conf}} to set GENTOO_MIRRORS and edit other env vars; copy DNS info into {{ic|./subdir/etc/resolv.conf}}<br />
*#**# Enter chroot environment {{bc|$ sudo arch-chroot ./subdir/}}<br />
*#**# {{bc|# source /etc/profile && source $HOME/.bashrc}}<br />
*#**# Optionally, update default prompt {{bc|# export PS1<nowiki>=</nowiki>"(chroot) $<nowiki>{</nowiki>PS1<nowiki>}</nowiki>"}}<br />
*#**# {{bc|# emerge --sync}}<br />
*#**# Double-check system profile (e.g., amd64 systemd, rv64_lp64d systemd) is correct {{bc|# eselect profile list}}<br />
*#**# {{bc|# emerge @world}}<br />
*#**# Optionally, configure USE variable {{bc|# emerge --info <nowiki>|</nowiki> grep ^USE}} {{bc|# nano -w /etc/portage/make.conf}}<br />
*#**# Configure timezone {{bc|# echo "Region/Location" > /etc/timezone}} {{bc|# emerge --config sys-libs/timezone-data}}<br />
*#**# Configure locales {{bc|# nano -w /etc/locale.gen}} Add desired locales, e.g., 'en_US.UTF-8 UTF-8', 'en_US ISO-8859-1'; remove unnecessary entries {{bc|# locale-gen}}<br />
*#**# Select locale {{bc|# nano -w /etc/env.d/02locale}} Add, e.g., 'LANG<nowiki>=</nowiki>"en_US.UTF-8"' and 'LC_COLLATE<nowiki>=</nowiki>"C"' {{bc|# env-update && source /etc/profile}}<br />
*#**# Optionally, set the host name (with OpenRC, add 'hostname<nowiki>=</nowiki>"system_name"' to {{ic|/etc/conf.d/hostname}}; with the systemd profile, put just the name in {{ic|/etc/hostname}})<br />
*#**# Exit chroot environment {{bc|# exit}}<br />
*#** Chroot (or systemd-nspawn) into subdirectory and test normal administrative operations, e.g., {{bc|# emerge --sync}} {{bc|# emerge @world}}<br />
*#* Arch:<br />
*#** Create a subdirectory for the chroot installation; install base, base-devel and a basic editor {{bc|$ sudo pacstrap -c ./subdir/ base base-devel nano}}<br />
*#** Configure:<br />
*#**# Copy DNS info into {{ic|./subdir/etc/resolv.conf}}<br />
*#**# Create bind mountpoint for subdir {{bc|$ sudo mount --bind ./subdir/ ./subdir/}}<br />
*#**# Enter chroot environment {{bc|$ sudo arch-chroot ./subdir/}}<br />
*#**# {{bc|# source /etc/profile}}<br />
*#**# Optionally, update default prompt {{bc|# export PS1<nowiki>=</nowiki>"(chroot) $<nowiki>{</nowiki>PS1<nowiki>}</nowiki>"}}<br />
*#**# Configure timezone {{bc|# ln -sf /usr/share/zoneinfo/Region/Location /etc/localtime}}<br />
*#**# Configure locales {{bc|# nano -w /etc/locale.gen}} Uncomment desired locales, e.g., 'en_US.UTF-8 UTF-8', 'en_US ISO-8859-1' {{bc|# locale-gen}} {{bc|# nano -w /etc/locale.conf}} Add, e.g., 'LANG<nowiki>=</nowiki>"en_US.UTF-8"' and 'LC_COLLATE<nowiki>=</nowiki>"C"'<br />
*#**# Optionally, update system {{bc|# pacman -Syu}}<br />
*#**# Optionally, set {{ic|/etc/hostname}}; the file should contain only the host name itself, e.g., 'system_name'<br />
*#**# Exit chroot environment {{bc|# exit}}<br />
*#** Chroot (or systemd-nspawn) into subdirectory and test normal administrative operations, e.g., {{bc|# pacman -Syu}}<br />
*# Test running rv64_lp64d/riscv64 installations via chroot (or systemd-nspawn) using same installation and configuration procedures<br />
<br />
* RISC-V binary tests and demos<br />
*# Test compiling, linking RISC-V (riscv64-lp64d) C programs<br />
*#* Install [https://www.archlinux.org/packages/community/x86_64/riscv64-linux-gnu-gcc/ riscv64-linux-gnu-gcc], [https://www.archlinux.org/packages/extra/x86_64/qemu-arch-extra/ qemu-arch-extra], [https://aur.archlinux.org/packages/qemu-user-static-bin/ qemu-user-static-bin], [https://aur.archlinux.org/packages/binfmt-qemu-static-all-arch/ binfmt-qemu-static-all-arch]<br />
*#* Write hello_world.c, other test programs<br />
*#* Compile: {{bc|$ riscv64-linux-gnu-gcc -c -Wall -o hello_world-riscv64.o hello_world.c}}<br />
*#* Link: {{bc|$ riscv64-linux-gnu-gcc -o hello_world-riscv64 hello_world-riscv64.o}}<br />
*#* Run: {{bc|$ qemu-riscv64 -L /usr/riscv64-linux-gnu/ ./hello_world-riscv64}}<br />
*#* Any additional tests, experiments<br />
*# [https://archriscv.felixc.at/repo/ Repository of pre-built RISC-V pkgs from FelixOnMars]<br />
*#* Configure binfmt-qemu-static to run RISC-V binaries as native programs<br />
*#* Following the Arch installation guide, use pacstrap to install base pkgs into risc-v subtree<br />
*#* Run arch-chroot to run subtree as a RISC-V container<br />
*#* Test running RISC-V binaries from coreutils<br />
*#* Test running other binaries, and installing other RISC-V pkgs from this repository<br />
*#* Test using systemd-nspawn to run subtree as a RISC-V container<br />
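To point pacstrap at that repository, pacman needs a config stanza for it. A minimal sketch follows -- the repo names and the path layout under https://archriscv.felixc.at/repo/ are assumptions to verify, and signature checking is disabled only because these are unsigned experimental pkgs:<br />

```ini
# Hypothetical /etc/pacman.conf fragment for pacstrap'ing from the
# FelixOnMars RISC-V repo; verify repo names and URL layout first.
[options]
Architecture = riscv64

[core]
SigLevel = Never
Server = https://archriscv.felixc.at/repo/$repo
```

Something along these lines would be passed via {{ic|pacstrap -C}} so the host's own pacman.conf stays untouched.<br />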
<br />
* Cross-bootstrapping to build basic RISC-V system<br />
*# [https://github.com/felixonmars/archriscv-packages RISC-V PKGBUILD repository pkgs from FelixOnMars]<br />
*#* Experiment with cross-compiling pkgs from RISC-V PKGBUILDs<br />
*#* Test building cross-compiled pkgs from all PKGBUILDs<br />
*#* Fix PKGBUILDs for which building fails<br />
*#* Add and commit patches for RISC-V PKGBUILDs to local repo<br />
*#* Push to origin (personal gitlab repo)<br />
*#* Send merge request upstream<br />
*# [https://github.com/archlinux-riscv/archlinux-cross-bootstrap archlinux-cross-bootstrap]<br />
*#* Use a build tree separate from the git source tree (is this the default?)<br />
*#* Initially, test building all 4 stages '''without''' creating the build/.KEEP_GOING file that ignores build errors; read .MAKEPKGLOG in each build directory, and tee the output of the overall build process into a log file<br />
*#* Identify pkgs with build errors; debug and patch<br />
*#* Individually test rebuilding pkgs which failed with errors; correct patches<br />
*#* Add and commit patches to local git repository<br />
*#* Push to origin (personal gitlab repo)<br />
*#* Send merge request upstream<br />
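For the cross-compiling experiments above, makepkg has to be told the target architecture. A rough sketch of the relevant makepkg.conf overrides follows -- the CHOST triplet matches the [https://www.archlinux.org/packages/community/x86_64/riscv64-linux-gnu-gcc/ riscv64-linux-gnu-gcc] pkg, but treat all of these values as assumptions to check against the archlinux-cross-bootstrap scripts:<br />

```shell
# Hypothetical makepkg.conf overrides for riscv64 cross builds; the triplet
# follows the riscv64-linux-gnu-gcc toolchain pkg. Verify against the
# archlinux-cross-bootstrap scripts before relying on these values.
CARCH="riscv64"
CHOST="riscv64-linux-gnu"
export CC="riscv64-linux-gnu-gcc"
export CXX="riscv64-linux-gnu-g++"
# Assumed sysroot location of the Arch cross toolchain:
export PKG_CONFIG_SYSROOT_DIR="/usr/riscv64-linux-gnu"
```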
<br />
== Basic thoughts and ideas for porting ==<br />
<br />
'''<u>Notes 2021/01/31</u>:'''<br />
<br />
Finally making some more progress:<br />
<br />
* Tested creating a Gentoo amd64/systemd chroot by unpacking a stage3 tarball into a target subdir, then pre-configuring, chrooting (via arch-chroot) into the target, and updating and configuring as needed the chroot system. This seems to work fine. Also installed a minimal X windows program, qemacs, inside the chroot and ran it after [[Chroot#Run_graphical_applications_from_chroot|allowing X windows programs access to the parent X server]].<br />
* Installed Arch base and base-devel into a target subdir via {{ic|pacstrap -c}}, then chrooted (via arch-chroot) into the target and configured the installation for timezone, locale and hostname.<br />
* Wrote, tested and debugged a few simple C programs on x86_64 in order to test compilation for the RISC-V riscv64-lp64d target. Then, after installing [https://www.archlinux.org/packages/community/x86_64/riscv64-linux-gnu-gcc/ riscv64-linux-gnu-gcc] to cross-compile, compiled and linked these to produce riscv64-lp64d binaries. After installing [https://www.archlinux.org/packages/extra/x86_64/qemu-arch-extra/ qemu-arch-extra], tested running them with {{ic|qemu-riscv64 -L /usr/riscv64-linux-gnu/}}.<br />
<br />
----<br />
<br />
'''<u>Additional info 2020/11/14</u>:'''<br />
<br />
Thanks to [https://bbs.archlinux.org/profile.php?id=84187 eschwartz], here are links to resources produced by [https://bbs.archlinux.org/profile.php?id=47848 FelixOnMars], who has tackled some RISC-V porting in his spare time:<br />
<br />
* [https://github.com/felixonmars/archriscv-packages Arch RISC-V Packages]<br />
* [https://archriscv.felixc.at/repo/ Arch RISC-V Repo]<br />
<br />
He has already finished a lot of work -- good stuff :) and one shouldn't spend most of one's effort reinventing the wheel ;)<br />
<br />
----<br />
<br />
'''<u>Revised 2020/11/13</u>:'''<br />
<br />
In essence, what I'm looking to do is:<br />
<br />
# Build RISC-V pkgs for Arch.<br />
# Build bare-metal RISC-V Arch disk images that can potentially be booted on physical hardware. Of course, device tree/drivers will have to be individually customized for each mainboard/SBC. Getting a disk image that works in KVM/QEMU would scratch my itch of getting most of the work done. Someone with more bare-metal hardware/driver experience may have to put the finishing touches on it.<br />
<br />
Hoping my understanding and itemized steps are more-or-less correct.... Methods to make porting progress:<br />
<br />
# So, if we want to build Arch RISC-V pkgs, we can simply cross-compile them. When a pkg won't build, we patch it until it does (assuming we can eventually get it working).<br />
#* Note that Vadim Kaushan (Disasm on github) has patched upstream [https://github.com/oaken-source/parabola-cross-bootstrap parabola-cross-bootstrap] from Andreas Grapentin (oaken-source on github) to create [https://github.com/archlinux-riscv/archlinux-cross-bootstrap archlinux-cross-bootstrap]. The last commit was January, 2019. Some things may have broken since, but it's worth trying (and patching) this to cross-compile base-devel pkgs for RISC-V. Then we can perhaps extend its coverage to more repository pkgs.<br />
# We can unpack a RISC-V stage tarball (or image). Then we can chroot into that tree (or use something fancier like systemd-nspawn) and use QEMU user-mode emulation to run RISC-V binaries.<br />
#* Question: Will this even work? Answer: Yes! The host needs [https://www.archlinux.org/packages/extra/x86_64/qemu-arch-extra/ qemu-arch-extra] from extra and/or, from the AUR, [https://aur.archlinux.org/packages/qemu-user-static/ qemu-user-static] (or [https://aur.archlinux.org/packages/qemu-user-static-bin/ qemu-user-static-bin]), plus [https://aur.archlinux.org/packages/binfmt-qemu-static/ binfmt-qemu-static] (or [https://aur.archlinux.org/packages/binfmt-qemu-static-all-arch/ binfmt-qemu-static-all-arch]). Copy the required binaries down into the chroot tree, and/or [[Binfmt_misc_for_Java#Registering_file_type_with_binfmt_misc|take care of binfmt_misc support]].<br />
#* The AUR pkg [https://aur.archlinux.org/packages/proot/ proot], [[PRoot|PRoot wiki page]], may be useful.<br />
#* Note that to run dynamically linked binaries the linker path must be passed; see [https://gist.github.com/Liryna/10710751 Running ARM Programs under Linux] (also listed below). Quote:<br />
#** If you want a dynamically-linked executable, you've to pass the linker path too:<br>arm-linux-gnueabihf-gcc -ohello hello.c<br>qemu-arm -L /usr/arm-linux-gnueabihf/ ./hello # or qemu-arm-static<br />
#* Some references:<br />
#** [[QEMU#Chrooting_into_arm/arm64_environment_from_x86_64|Chrooting into arm/arm64 environment from x86_64]]<br />
#** [https://ownyourbits.com/2018/06/13/transparently-running-binaries-from-any-architecture-in-linux-with-qemu-and-binfmt_misc/ Transparently running binaries from any architecture in Linux with QEMU and binfmt_misc]<br />
#** [https://unix.stackexchange.com/questions/41889/how-can-i-chroot-into-a-filesystem-with-a-different-architechture Stackexchange: Chroot into a filesystem with a different architecture]<br />
#** [https://dev.to/asacasa/how-to-set-up-binfmtmisc-for-qemu-the-hard-way-3bl4 How to set up binfmt_misc for qemu the hard way]<br />
#** [https://wiki.gentoo.org/wiki/Embedded_Handbook/General/Compiling_with_qemu_user_chroot Gentoo Embedded Handbook/Compiling with qemu user chroot]<br />
#** [https://wiki.debian.org/QemuUserEmulation Debian QEMU User Emulation]<br />
#** [https://gist.github.com/Liryna/10710751 Running ARM Programs under Linux]<br />
#** [[Creating_packages_for_other_distributions#Creating_Arch_packages_in_OBS_with_OSC|Creating Arch pkgs in openSUSE Open Build Service]]; hmmm, pkgs could be built automatically and remotely...<br />
#* I'm not an lxc/Docker/Kubernetes guy, but IIUC this would be roughly similar to running a RISC-V container (no?). This may come in handy to make sure binary packages work as expected, I suppose. But it doesn't seem like a better way to build RISC-V pkgs than cross-compiling, although we could build dev infrastructure then pkgs inside the container.<br />
# We can create a VM disk image (GPT, of course) and unpack inside it, e.g., [https://dev.gentoo.org/~dilfridge/stages/ a RISC-V Gentoo stage3] and complete installation and configuration. We also need to compile a RISC-V kernel and bbl (Berkeley Boot Loader from [https://github.com/riscv/riscv-pk the riscv-pk github project]) (some instructions [https://www.cnx-software.com/2018/03/16/how-to-run-linux-on-risc-v-with-qemu-emulator/ here] and [https://risc-v-getting-started-guide.readthedocs.io/en/latest/linux-qemu.html here]), or we may be able to download pre-built kernel/binaries/utilities. Then we can boot the image with KVM/QEMU.<br />
#* Also available are [https://wiki.debian.org/RISC-V Debian RISC-V], [https://fedoraproject.org/wiki/Architectures/RISC-V Fedora RISC-V] and [https://en.opensuse.org/openSUSE:RISC-V openSUSE RISC-V] images. In those wiki pages are some instructions for putting together bare-metal disk images.<br />
#* Also, there are ready-made QEMU images that one can download and run. See [https://wiki.debian.org/RISC-V#OS_.2F_filesystem_images Debian], [https://dl.fedoraproject.org/pub/alt/risc-v/repo/virt-builder-images/images/ Fedora], [http://fedora.riscv.rocks/koji/tasks?order=-completion_time&state=closed&view=flat&method=createAppliance additional Fedora], [https://en.altlinux.org/Regular/riscv64 AltLinux]. I guess this ready-made solution would be the ultimate in ease of use, but it doesn't solve the problem of building from scratch. But at least inside a running VM I could download and build dev infrastructure (pacman, et al.) and then build pkgs as if I were running on bare metal. It might be only a little slower than running on an SBC...<br />
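Booting such an image with full-system QEMU might look roughly like this, following the getting-started guides linked above. The 'virt' machine flags are real QEMU options, but the fw_payload.elf (bbl/kernel payload) and disk image file names are placeholders:<br />

```shell
# Sketch: boot a RISC-V disk image under full-system QEMU (generic 'virt'
# board, virtio-mmio block device). fw_payload.elf and riscv64-disk.img are
# placeholder names. The block only prints the command unless RUN_QEMU=1,
# since the qemu-system-riscv64 binary and images may not be present.
cmd="qemu-system-riscv64 -nographic -machine virt -smp 4 -m 2G \
 -kernel fw_payload.elf \
 -append 'root=/dev/vda rw console=ttyS0' \
 -drive file=riscv64-disk.img,format=raw,id=hd0 \
 -device virtio-blk-device,drive=hd0"
if [ "${RUN_QEMU:-0}" = "1" ]; then eval "$cmd"; else echo "$cmd"; fi
```

Set RUN_QEMU=1 to actually launch the VM once a payload and image exist.<br />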
<br />
So, in the end, the best solutions seem to be:<br />
* Cross-compiling using existing scripting/tooling to build Arch pkgs. [https://github.com/archlinux-riscv/archlinux-cross-bootstrap archlinux-cross-bootstrap] should build base-devel pkgs. Using its scripts with modified PKGBUILDs, it should be possible to build other repository pkgs.<br />
* To test cross-compiled pkgs, create a chroot to use via QEMU user-mode emulation (after setting up [https://aur.archlinux.org/packages/qemu-user-static/ qemu-user-static] and [[Binfmt_misc_for_Java#Registering_the_file_type_with_binfmt_misc|binfmt_misc]]; note [https://aur.archlinux.org/packages/proot/ proot]). Follow the usual installation instructions using pacstrap, etc., to install RISC-V pkgs inside the chroot, then complete installation and configuration.<br />
** Note that this creates a running RISC-V Arch system.... Tinker with it to your heart's content. Then you could make an archive (tar) snapshot of it and use this as the basis of a bare-metal VM image :)<br />
** However, since these cross-compiled pkgs will contain dynamically linked binaries, I'm not sure at the moment how binfmt_misc will handle the "-L" flag needed to reference the RISC-V linker path. Then again, once inside the chroot, the RISC-V ld.so and .so libraries sit at their standard paths relative to the chroot's root, so user-mode QEMU should find them without "-L".<br />
* For final image building, create a VM disk image, follow the usual installation instructions to install an Arch RISC-V system inside the image, configure it just like a bare-metal installation, and see if it works :)<br />
* When running the VM disk image like bare metal, the boot loader inside the image will need to work to load the kernel. Otherwise, the boot loader and kernel used to boot and run the VM will have to live outside the disk image. We want to create a turnkey disk image that includes the boot loader....<br />
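The binfmt_misc setup mentioned above can be sketched concretely. The magic/mask below are the standard match for a 64-bit little-endian ELF header with e_machine 0xf3 (EM_RISCV); the interpreter path is an assumption for illustration -- the binfmt-qemu-static pkgs install the real rule for you:<br />

```shell
# Sketch: build the binfmt_misc rule that makes the kernel hand riscv64 ELF
# binaries to qemu-riscv64-static transparently (the binfmt-qemu-static pkgs
# do this for you; the interpreter path here is an assumption).
# The \xfe in the mask accepts both ET_EXEC (02) and ET_DYN (03) binaries.
magic='\x7fELF\x02\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\xf3\x00'
mask='\xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff'
rule=":riscv64:M:0:${magic}:${mask}:/usr/bin/qemu-riscv64-static:"
printf '%s\n' "$rule" > riscv64.binfmt
cat riscv64.binfmt
# On a real host (as root): cat riscv64.binfmt > /proc/sys/fs/binfmt_misc/register
```

The kernel decodes the \x hex escapes itself when the rule is written to the register file, so the string can be stored literally as above.<br />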
<br />
----<br />
<br />
'''<u>Revised 2020/11/11</u>:'''<br />
<br />
Thanks to [https://bbs.archlinux.org/profile.php?id=36741 Awebb] for helping to distill disorganized ideas.<br />
<br />
Perhaps the best way to make progress on porting to RISC-V (mainboard and embedded/SBC) is simply to build Arch packages. The bootstrapping and embedded intricacies can be dealt with later. Plus, building packages can be done bit by bit when time is available. So....<br />
<br />
There are two ways to build packages, I believe:<br />
<br />
# Cross-compile using available [[#ArchTools|Arch RISC-V tooling]] (see below)<br />
# Install an existing RISC-V system in a KVM/QEMU VM<br />
<br />
Frankly, the VM method seems the easiest (and laziest) way. (Update: Probably not as easy/lazy as I was thinking.) There are [https://dev.gentoo.org/~dilfridge/stages/ Gentoo stage3 tarballs] so, assuming RISC-V support in KVM/QEMU is generally bug-free, setting up a RISC-V Gentoo VM should be a mechanical process. After this, Arch tools would be compiled inside the VM, and then base, base-devel, and core pkgs would be built. (For related information, see [https://wiki.gentoo.org/wiki/Raspberry_Pi the Gentoo 32-bit RaspberryPi page] and [https://wiki.gentoo.org/wiki/Raspberry_Pi_3_64_bit_Install the Gentoo 64-bit RaspberryPi page].)<br />
<br />
[[#RISC-V-QEMU|See links below]] for documentation and instructions.<br />
<br />
----<br />
<br />
'''<u>Original text</u>:'''<br />
<br />
According to the initial press release on the above '''<u>α</u>''' hardware target/SoC board, this "Allwinner RISC-V processor will run the Debian Linux operating system (Tina OS)." Provided that "Tina OS" licensing is "clean," one should be able to boot the supplied Debian OS and use that to develop an Arch bootstrap, or perhaps an entire port?<br />
<br />
IIUC, that should include the bootstrapping and device tree stuff along with device drivers for on-board equipment. (Drivers for optional devices and for devices on available modules/daughter cards will need to be developed, too.)<br />
<br />
After that, base and base-devel packages need to be built. Then building packages can be done on bare metal. However, it would obviously be faster and easier to build them using the available RISC-V toolchain in Arch. Right? (I wonder how [https://archlinuxarm.org/ ArchLinuxArm] handles building packages for and maintaining their repositories, as well as their AUR? Note to self: Research ALARM infrastructure....)<br />
<br />
Topics to be researched and written up or references pointed to:<br />
<br />
* HOWTO bootstrap a new port from ground zero -- See references below<br />
* How the bootstrapping process is booted/initiated<br />
* How device trees work<br />
* How device drivers work in bootstrapping, e.g., via initrd image<br />
* Cross-building Arch packages for RISC-V<br />
* Infra for a new port's repositories (CB/CI tools, repository hosting, front end w/hosting, etc.)<br />
* Infra for a new port's AUR<br />
* Possibility of free/open (non-binary blob) bootstrapping image/BIOS (similar to [https://www.coreboot.org/ coreboot])<br />
<br />
== Necessary tools for porting ==<br />
<br />
=== <span id="RISC-V-QEMU"></span>KVM/QEMU RISC-V emulation documentation, information and instructions ===<br />
<br />
[https://www.archlinux.org/packages/extra/x86_64/qemu-arch-extra/ qemu-arch-extra pkg] required for RISC-V (and other architectures) full-system emulation<br />
<br />
[https://aur.archlinux.org/packages/qemu-user-static/ qemu-user-static] (or [https://aur.archlinux.org/packages/qemu-user-static-bin/ qemu-user-static-bin]) for running QEMU user-mode emulation inside a chroot; [https://aur.archlinux.org/packages/binfmt-qemu-static/ binfmt-qemu-static] (or [https://aur.archlinux.org/packages/binfmt-qemu-static-all-arch/ binfmt-qemu-static-all-arch]) to register the binfmt handlers for them<br />
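<br />
As a minimal sketch of the user-mode approach: assuming qemu-user-static is installed, binfmt-qemu-static has registered the riscv64 handler, and {{ic|~/riscv-root}} (a placeholder path) holds an extracted RISC-V root filesystem, entering it works like any other chroot:<br />
<br />
```shell
# Enter a foreign-architecture (riscv64) root filesystem as a chroot.
# The binfmt handler makes the kernel run riscv64 ELF binaries through
# the statically linked qemu-riscv64 user-mode emulator transparently.
cd ~/riscv-root
mount -t proc /proc proc/
mount -t sysfs /sys sys/
mount --rbind /dev dev/
chroot . /bin/bash
```

From inside, package builds run as if on native hardware, only slower.<br />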
<br />
[[QEMU#Installation|Arch QEMU pkgs and variants from repository and AUR]]<br />
<br />
[https://risc-v-getting-started-guide.readthedocs.io/en/latest/linux-qemu.html RISC-V Linux on QEMU Getting Started Guide]<br />
<br />
[https://wiki.qemu.org/Documentation/Platforms/RISCV QEMU RISC-V Documentation]<br />
<br />
(See also: [https://www.sifive.com/blog/risc-v-qemu-part-2-the-risc-v-qemu-port-is-upstream SiFive RISC-V QEMU Upstream Announcement])<br />
<br />
[https://www.cnx-software.com/2018/03/16/how-to-run-linux-on-risc-v-with-qemu-emulator/ How To Run Linux on RISC-V with QEMU Emulator]<br />
<br />
=== <span id="ArchTools"></span>Tools available under Arch ===<br />
<br />
==== Tools from repositories ====<br />
<br />
[https://www.archlinux.org/groups/x86_64/risc-v/ RISC-V group]<br />
<br />
[https://www.archlinux.org/packages/?sort=&q=riscv&maintainer=&flagged= Results of "riscv" keyword search]<br />
<br />
==== Tools from AUR ====<br />
<br />
[https://aur.archlinux.org/packages/?K=RISCV&SB=p Results of AUR "RISCV" keyword search]<br />
<br />
[https://aur.archlinux.org/packages/?O=0&SeB=nd&K=risc-v&outdated=&SB=n&SO=a&PP=50&do_Search=Go Results of AUR "risc-v" keyword search]<br />
<br />
=== Tools from other dists ===<br />
<br />
[https://dev.gentoo.org/~dilfridge/stages/ Experimental Gentoo stages]<br />
<br />
[https://wiki.debian.org/RISC-V#OS_.2F_filesystem_images Debian images]<br />
<br />
[https://dl.fedoraproject.org/pub/alt/risc-v/repo/virt-builder-images/images/ Fedora images]<br />
<br />
[http://fedora.riscv.rocks/koji/tasks?order=-completion_time&state=closed&view=flat&method=createAppliance Additional Fedora images]<br />
<br />
[https://en.altlinux.org/Regular/riscv64 AltLinux RISC-V64 port]<br />
<br />
=== Non-Arch tools ===<br />
<br />
[http://crosstool-ng.github.io/ Crosstool-ng]<br />
<br />
[https://buildroot.org/ Buildroot]<br />
<br />
[https://trac.clfs.org/ Cross-Linux From Scratch] (this project seems to be dormant...)<br />
<br />
== As-is experimental RISC-V pkgs from work done by FelixOnMars ==<br />
<br />
[https://github.com/felixonmars/archriscv-packages Arch RISC-V Packages]<br />
<br />
[https://archriscv.felixc.at/repo/ Arch RISC-V Repo]<br />
<br />
== References -- '''PLEASE''' add to and curate this list! ==<br />
<br />
[https://readthedocs.org/projects/risc-v-getting-started-guide/downloads/pdf/latest/ RISC-V Getting Started Guide]<br />
<br />
https://github.com/archlinux-riscv/archlinux-cross-bootstrap<br />
<br />
https://bbs.archlinux.org/viewtopic.php?id=237370<br />
<br />
https://bbs.archlinux.org/viewtopic.php?id=260639<br />
<br />
https://five-embeddev.com/toolchain/2019/06/26/gcc-targets/<br />
<br />
https://wiki.qemu.org/Documentation/Platforms/RISCV<br />
<br />
https://wiki.gentoo.org/wiki/Project:RISC-V<br />
<br />
https://wiki.gentoo.org/wiki/Raspberry_Pi (for embedded build and cross-compiling info)<br />
<br />
https://wiki.gentoo.org/wiki/Raspberry_Pi_3_64_bit_Install<br />
<br />
https://wiki.archlinux.org/index.php/Cross-compiling_tools_package_guidelines<br />
<br />
https://github.com/crosstool-ng/crosstool-ng<br />
<br />
https://git.busybox.net/buildroot<br />
<br />
https://github.com/cross-lfs<br />
<br />
== Step-by-step "HOWTO" for porting ==<br />
<br />
== Detailed instructions and explanations ==<br />
<br />
== Issues addressed, with work-arounds ==<br />
<br />
== Bug report list -- Please make notes during triage, and move to above "Issues addressed" list when "solved" ==</div>
<hr />
<div>{{Lowercase title}}<br />
[[Category:System recovery]]<br />
[[Category:Sandboxing]]<br />
[[Category:Commands]]<br />
[[de:Chroot]]<br />
[[es:Chroot]]<br />
[[fa:تغییر ریشه]]<br />
[[fr:Chroot]]<br />
[[ja:Chroot]]<br />
[[pt:Chroot]]<br />
[[ru:Chroot]]<br />
[[zh-hans:Chroot]]<br />
{{Related articles start}}<br />
{{Related|PRoot}}<br />
{{Related|Linux Containers}}<br />
{{Related|systemd-nspawn}}<br />
{{Related articles end}}<br />
A [[Wikipedia:Chroot|chroot]] is an operation that changes the apparent root directory for the currently running process and its children. A program that is run in such a modified environment cannot access files and commands outside that directory tree. This modified environment is called a ''chroot jail''.<br />
<br />
== Reasoning ==<br />
<br />
Changing root is commonly done for performing system maintenance on systems where booting and/or logging in is no longer possible. Common examples are:<br />
<br />
* Reinstalling the [[bootloader]].<br />
* Rebuilding the [[mkinitcpio|initramfs image]].<br />
* Upgrading or [[downgrading packages]].<br />
* Resetting a [[Password recovery|forgotten password]].<br />
* Building packages in a clean chroot, see [[DeveloperWiki:Building in a clean chroot]].<br />
<br />
See also [[Wikipedia:Chroot#Limitations]].<br />
<br />
== Requirements ==<br />
<br />
* Root privilege.<br />
* Another Linux environment, e.g. a LiveCD or USB flash media, or from another existing Linux distribution.<br />
* Matching architectures for the environment you chroot from and the one you chroot to. The architecture of the current environment can be discovered with {{ic|uname -m}} (e.g. i686 or x86_64).<br />
* Kernel modules loaded that are needed in the chroot environment.<br />
* [[Swap]] enabled if needed: {{bc|# swapon /dev/sd''xY''}}<br />
* Internet connection established if needed.<br />
<br />
== Usage ==<br />
<br />
{{Note|<br />
* Some [[systemd]] tools such as ''hostnamectl'', ''localectl'' and ''timedatectl'' can not be used inside a chroot, as they require an active [[dbus]] connection. [https://github.com/systemd/systemd/issues/798#issuecomment-126568596]<br />
* The file system that will serve as the new root ({{ic|/}}) of your chroot must be accessible (i.e., decrypted, mounted).<br />
}}<br />
<br />
There are two main options for using chroot, described below.<br />
<br />
=== Using arch-chroot ===<br />
<br />
The bash script {{ic|arch-chroot}} is part of the {{Pkg|arch-install-scripts}} package. Before it runs {{ic|/usr/bin/chroot}}, the script mounts API filesystems like {{ic|/proc}} and makes {{ic|/etc/resolv.conf}} available from the chroot.<br />
<br />
==== Enter a chroot ====<br />
<br />
Run arch-chroot with the new root directory as first argument:<br />
<br />
# arch-chroot ''/location/of/new/root''<br />
<br />
For example, in the [[installation guide]] this directory would be {{ic|/mnt}}:<br />
<br />
# arch-chroot /mnt<br />
<br />
To exit the chroot simply use:<br />
<br />
# exit<br />
<br />
==== Run a single command and exit ====<br />
<br />
To run a command inside the chroot and then exit again, append the command to the end of the line:<br />
<br />
# arch-chroot ''/location/of/new/root'' ''mycommand''<br />
<br />
For example, to run {{ic|mkinitcpio -p linux}} for a chroot located at {{ic|/mnt/arch}} do:<br />
<br />
# arch-chroot /mnt/arch mkinitcpio -p linux<br />
<br />
=== Using chroot ===<br />
<br />
{{Warning|When using {{ic|--rbind}}, some subdirectories of {{ic|dev/}} and {{ic|sys/}} will not be unmountable. Attempting to unmount with {{ic|umount -l}} in this situation will break your session, requiring a reboot. If possible, use {{ic|-o bind}} instead.}}<br />
<br />
In the following example {{ic|''/location/of/new/root''}} is the directory where the new root resides.<br />
<br />
First, mount the temporary API filesystems:<br />
<br />
# cd ''/location/of/new/root''<br />
# mount -t proc /proc proc/<br />
# mount -t sysfs /sys sys/<br />
# mount --rbind /dev dev/<br />
<br />
And optionally:<br />
<br />
# mount --rbind /run run/<br />
<br />
If you are running a UEFI system, you will also need access to EFI variables; otherwise, when installing GRUB, you will receive a message similar to {{ic|UEFI variables not supported on this machine}}:<br />
<br />
# mount --rbind /sys/firmware/efi/efivars sys/firmware/efi/efivars/<br />
<br />
Next, in order to use an internet connection in the chroot environment, copy over the DNS details:<br />
<br />
# cp /etc/resolv.conf etc/resolv.conf<br />
<br />
Finally, to change root into {{ic|''/location/of/new/root''}} using a bash shell:<br />
<br />
# chroot ''/location/of/new/root'' /bin/bash<br />
<br />
{{Note|If you see the error:<br />
* {{ic|chroot: cannot run command '/usr/bin/bash': Exec format error}}, it is likely that the architectures of the host environment and chroot environment do not match.<br />
* {{ic|chroot: '/usr/bin/bash': permission denied}}, remount with the execute permission: {{ic|mount -o remount,exec ''/location/of/new/root''}}.<br />
** if checking this did not help, then [https://www.tldp.org/LDP/LG/issue52/okopnik.html make sure] the base components of the new environment are intact (if it is an Arch root, try {{ic|1=paccheck --root=''/location/of/new/root'' --files --file-properties --md5sum glibc filesystem}}, from {{Pkg|pacutils}})<br />
}}<br />
<br />
After chrooting it may be necessary to load the local bash configuration:<br />
<br />
# source /etc/profile<br />
# source ~/.bashrc<br />
<br />
{{Tip|Optionally, create a unique prompt to be able to differentiate your chroot environment:<br />
{{bc|1=# export PS1="(chroot) $PS1"}}<br />
}}<br />
<br />
When finished with the chroot, you can exit it via:<br />
<br />
# exit<br />
<br />
Then unmount the temporary file systems:<br />
<br />
# cd /<br />
# umount --recursive ''/location/of/new/root''<br />
<br />
{{Note|An error like {{ic|umount: /path: device is busy}} usually means that either a program (even a shell) was left running in the chroot, or that a sub-mount still exists. Quit the program and use {{ic|findmnt -R ''/location/of/new/root''}} to find and then {{ic|umount}} sub-mounts. If a file system still refuses to unmount, try {{ic|umount --force}}; as a last resort, use {{ic|umount --lazy}}, which merely detaches it. In either case, to be safe, {{ic|reboot}} as soon as possible if these remain unresolved, to avoid possible future conflicts.}}<br />
<br />
== Run graphical applications from chroot ==<br />
<br />
If you have an [[X server]] running on your system, you can start graphical applications from the chroot environment.<br />
<br />
To allow the chroot environment to connect to an X server, open a virtual terminal inside the X server (i.e. inside the desktop of the user that is currently logged in), then run the [[xhost]] command, which gives permission to anyone to connect to the user's X server (see also [[Xhost]]):<br />
<br />
$ xhost +local:<br />
<br />
Then, to direct the applications to the X server from chroot, set the DISPLAY environment variable inside the chroot to match the DISPLAY variable of the user that owns the X server. So for example, run:<br />
<br />
$ echo $DISPLAY<br />
<br />
as the user that owns the X server to see the value of DISPLAY. If the value is ":0" (for example), then in the chroot environment run:<br />
<br />
# export DISPLAY=:0<br />
<br />
== Without root privileges ==<br />
<br />
Chroot requires root privileges, which may not be desirable or possible for the user to obtain in certain situations. There are, however, various ways to simulate chroot-like behavior using alternative implementations.<br />
<br />
=== PRoot ===<br />
<br />
[[PRoot]] may be used to change the apparent root directory and use {{ic|mount --bind}} without root privileges. This is useful for confining applications to a single directory or running programs built for a different CPU architecture, but it has limitations due to the fact that all files are owned by the user on the host system. PRoot provides a {{ic|--root-id}} argument that can be used as a workaround for some of these limitations in a similar (albeit more limited) manner to ''fakeroot''.<br />
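<br />
A minimal sketch of such an invocation (the {{ic|~/my-root}} and {{ic|~/riscv-root}} paths are placeholders), using PRoot's documented {{ic|-r}}, {{ic|-b}} and {{ic|-q}} options:<br />
<br />
```shell
# Enter ~/my-root as the apparent root, without root privileges.
# -b bind-mounts host paths into the guest tree.
proot -r ~/my-root -b /dev -b /proc -b /sys /bin/bash

# For a foreign-architecture root filesystem, -q routes guest binaries
# through a QEMU user-mode emulator:
proot -q qemu-riscv64 -r ~/riscv-root /bin/bash
```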
<br />
=== Fakechroot ===<br />
<br />
{{Pkg|fakechroot}} is a library shim which intercepts the chroot call and fakes the results. It can be used in conjunction with {{Pkg|fakeroot}} to simulate a chroot as a regular user. <br />
<br />
# fakechroot fakeroot chroot ~/my-chroot bash<br />
<br />
== Troubleshooting ==<br />
<br />
=== arch-chroot: ''/location/of/new/root'' is not a mountpoint. This may have undesirable side effects. ===<br />
<br />
Upon executing {{ic|arch-chroot ''/location/of/new/root''}} a warning is issued:<br />
<br />
{{ic|<nowiki>==</nowiki>> WARNING: ''/location/of/new/root'' is not a mountpoint. This may have undesirable side effects.}}<br />
<br />
See [https://man.archlinux.org/man/arch-chroot.8 arch-chroot(8)] for an explanation and an example of using bind mounting to make the chroot directory a mountpoint.<br />
<br />
== See also ==<br />
<br />
* [https://help.ubuntu.com/community/BasicChroot Basic Chroot]</div>
<hr />
<div>{{Lowercase title}}<br />
[[Category:Package manager]]<br />
[[Category:Arch projects]]<br />
[[Category:Commands]]<br />
[[ar:Pacman]]<br />
[[cs:Pacman]]<br />
[[da:Pacman]]<br />
[[de:Pacman]]<br />
[[el:Pacman]]<br />
[[es:Pacman]]<br />
[[fa:Pacman]]<br />
[[fi:Pacman]]<br />
[[fr:Pacman]]<br />
[[id:Pacman]]<br />
[[it:Pacman]]<br />
[[ja:Pacman]]<br />
[[ko:Pacman]]<br />
[[nl:Pacman]]<br />
[[pl:Pacman]]<br />
[[pt:Pacman]]<br />
[[ru:Pacman]]<br />
[[sr:Pacman]]<br />
[[sv:Pacman]]<br />
[[zh-hans:Pacman]]<br />
[[zh-hant:Pacman]]<br />
{{Related articles start}}<br />
{{Related|Creating packages}}<br />
{{Related|Downgrading packages}}<br />
{{Related|pacman/Package signing}}<br />
{{Related|pacman/Pacnew and Pacsave}}<br />
{{Related|pacman/Restore local database}}<br />
{{Related|pacman/Rosetta}}<br />
{{Related|pacman/Tips and tricks}}<br />
{{Related|FAQ#Package management}}<br />
{{Related|System maintenance}}<br />
{{Related|Arch Build System}}<br />
{{Related|Official repositories}}<br />
{{Related|Arch User Repository}}<br />
{{Related articles end}}<br />
<br />
The [https://archlinux.org/pacman/ pacman] [[Wikipedia:Package manager|package manager]] is one of the major distinguishing features of Arch Linux. It combines a simple binary package format with an easy-to-use [[Arch Build System|build system]]. The goal of ''pacman'' is to make it possible to easily manage packages, whether they are from the [[official repositories]] or the user's own builds.<br />
<br />
''Pacman'' keeps the system up to date by synchronizing package lists with the master server. This server/client model also allows the user to download/install packages with a simple command, complete with all required dependencies.<br />
<br />
''Pacman'' is written in the [[C]] programming language and uses the {{man|1|bsdtar}} [[w:tar (computing)|tar]] format for packaging.<br />
<br />
{{Tip|1=The {{Pkg|pacman}} package contains tools such as [[makepkg]] and {{man|8|vercmp}}. Other useful tools such as [[#Pactree|pactree]] and [[checkupdates]] are found in {{Pkg|pacman-contrib}} ([https://git.archlinux.org/pacman.git/commit/?id=0c99eabd50752310f42ec808c8734a338122ec86 formerly] part of pacman). Run {{ic|pacman -Ql pacman pacman-contrib {{!}} grep -E 'bin/.+'}} to see the full list.}}<br />
<br />
== Usage ==<br />
<br />
What follows is just a small sample of the operations that ''pacman'' can perform. To read more examples, refer to {{man|8|pacman}}.<br />
<br />
{{Tip|For those who have used other Linux distributions before, there is a helpful [[Pacman Rosetta]] article.}}<br />
<br />
=== Installing packages ===<br />
<br />
A package is an archive containing:<br />
<br />
* all of the (compiled) files of an application<br />
* metadata about the application, such as application name, version, dependencies, ...<br />
* installation files and directives for pacman<br />
* (optionally) extra files to make your life easier, such as a start/stop script<br />
<br />
Arch's package manager pacman can install, update, and remove those packages. Using packages instead of compiling and installing programs yourself has various benefits:<br />
<br />
* easily updatable: pacman will update existing packages as soon as updates are available<br />
* dependency checks: pacman handles dependencies for you; you only need to specify the program, and pacman installs it together with every other program it needs<br />
* clean removal: pacman has a list of every file in a package; this way, no files are unintentionally left behind when you decide to remove a package.<br />
<br />
{{Note|<br />
* Packages often have [[PKGBUILD#optdepends|optional dependencies]], which are packages that provide additional functionality to the application but are not strictly required for running it. When installing a package, ''pacman'' will list a package's optional dependencies, but they will not be found in {{ic|pacman.log}}. Use the [[#Querying package databases]] command to view the optional dependencies of a package.<br />
* When installing a package which you require only as an (optional) dependency of some other package (i.e. not otherwise required by you explicitly), it is recommended to use the {{ic|--asdeps}} option. For details, see the [[#Installation reason]] section.}}<br />
<br />
{{Warning|1=When installing packages in Arch, avoid refreshing the package list without [[#Upgrading packages|upgrading the system]] (for example, when a [[#Packages cannot be retrieved on installation|package is no longer found]] in the official repositories). In practice, do '''not''' run {{ic|pacman -Sy ''package_name''}} instead of {{ic|pacman -Sy'''u''' ''package_name''}}, as this could lead to dependency issues. See [[System maintenance#Partial upgrades are unsupported]] and [https://bbs.archlinux.org/viewtopic.php?id=89328 BBS#89328].}}<br />
<br />
==== Installing specific packages ====<br />
<br />
To install a single package or list of packages, including dependencies, issue the following command:<br />
<br />
# pacman -S ''package_name1'' ''package_name2'' ...<br />
<br />
To install a list of packages with regex (see [https://bbs.archlinux.org/viewtopic.php?id=7179 this forum thread]):<br />
<br />
# pacman -S $(pacman -Ssq ''package_regex'')<br />
<br />
Sometimes there are multiple versions of a package in different repositories (e.g. ''extra'' and ''testing''). To install the version from the ''extra'' repository in this example, the repository needs to be defined in front of the package name:<br />
<br />
# pacman -S extra/''package_name''<br />
<br />
To install a number of packages sharing similar patterns in their names one can use curly brace expansion. For example:<br />
<br />
# pacman -S plasma-{desktop,mediacenter,nm}<br />
<br />
This can be expanded to however many levels needed:<br />
<br />
# pacman -S plasma-{workspace{,-wallpapers},pa}<br />
<br />
===== Virtual packages =====<br />
<br />
A virtual package is a special package which does not exist by itself, but is [[PKGBUILD#provides|provided]] by one or more other packages. Virtual packages allow other packages to not name a specific package as a dependency, in case there are several candidates. Virtual packages cannot be installed by their name; instead, they become installed on your system when you install a package ''providing'' the virtual package.<br />
<br />
==== Installing package groups ====<br />
<br />
Some packages belong to a [[Package group|group of packages]] that can all be installed simultaneously. For example, issuing the command:<br />
<br />
# pacman -S gnome<br />
<br />
will prompt you to select the packages from the {{Grp|gnome}} group that you wish to install.<br />
<br />
Sometimes a package group will contain a large number of packages, and there may be only a few that you do or do not want to install. Instead of having to enter all the numbers except the ones you do not want, it is sometimes more convenient to select or exclude packages or ranges of packages with the following syntax:<br />
<br />
Enter a selection (default=all): 1-10 15<br />
<br />
which will select packages 1 through 10 and 15 for installation, or:<br />
<br />
Enter a selection (default=all): ^5-8 ^2<br />
<br />
which will select all packages except 5 through 8 and 2 for installation.<br />
<br />
To see what packages belong to the gnome group, run:<br />
<br />
# pacman -Sg gnome<br />
<br />
Also visit https://archlinux.org/groups/ to see what package groups are available.<br />
<br />
{{Note|If a package in the list is already installed on the system, it will be reinstalled even if it is already up to date. This behavior can be overridden with the {{ic|--needed}} option.}}<br />
<br />
=== Removing packages ===<br />
<br />
To remove a single package, leaving all of its dependencies installed:<br />
<br />
# pacman -R ''package_name''<br />
<br />
To remove a package and its dependencies which are not required by any other installed package:<br />
<br />
# pacman -Rs ''package_name''<br />
<br />
The above may sometimes refuse to run when removing a group which contains otherwise needed packages. In this case try:<br />
<br />
# pacman -Rsu ''package_name''<br />
<br />
To remove a package, its dependencies and all the packages that depend on the target package:<br />
<br />
{{Warning|This operation is recursive, and must be used with care since it can remove many potentially needed packages.}}<br />
<br />
# pacman -Rsc ''package_name''<br />
<br />
To remove a package, which is required by another package, without removing the dependent package:<br />
<br />
{{Warning|The following operation can break a system and should be avoided. See [[System maintenance#Avoid certain pacman commands]].}}<br />
<br />
# pacman -Rdd ''package_name''<br />
<br />
''Pacman'' saves important configuration files when removing certain applications and names them with the extension: ''.pacsave''. To prevent the creation of these backup files use the {{ic|-n}} option:<br />
<br />
# pacman -Rn ''package_name''<br />
<br />
{{Note|''Pacman'' will not remove configurations that the application itself creates (for example "dotfiles" in the home folder).}}<br />
<br />
=== Upgrading packages ===<br />
<br />
{{Warning|<br />
*Users are expected to follow the guidance in the [[System maintenance#Upgrading the system]] section to upgrade their systems regularly and not blindly run the following command.<br />
*Arch only supports full system upgrades. See [[System maintenance#Partial upgrades are unsupported]] and [[#Installing packages]] for details.}}<br />
<br />
''Pacman'' can update all packages on the system with just one command. This could take quite a while depending on how up-to-date the system is. The following command synchronizes the repository databases ''and'' updates the system's packages, excluding "local" packages that are not in the configured repositories:<br />
<br />
# pacman -Syu<br />
<br />
=== Querying package databases ===<br />
<br />
''Pacman'' queries the local package database with the {{ic|-Q}} flag, the sync database with the {{ic|-S}} flag and the files database with the {{ic|-F}} flag. See {{ic|pacman -Q --help}}, {{ic|pacman -S --help}} and {{ic|pacman -F --help}} for the respective suboptions of each flag.<br />
<br />
''Pacman'' can search for packages in the database, searching both in packages' names and descriptions:<br />
<br />
$ pacman -Ss ''string1'' ''string2'' ...<br />
<br />
Sometimes the extended regular expressions (ERE) built into {{Ic|-s}} can cause a lot of unwanted results, so restrict the match to the package name only, not the description or any other field:<br />
<br />
$ pacman -Ss '^vim-'<br />
<br />
To search for already installed packages:<br />
<br />
$ pacman -Qs ''string1'' ''string2'' ...<br />
<br />
To search for package file names in remote packages:<br />
<br />
$ pacman -F ''string1'' ''string2'' ...<br />
<br />
To display extensive information about a given package:<br />
<br />
$ pacman -Si ''package_name''<br />
<br />
For locally installed packages:<br />
<br />
$ pacman -Qi ''package_name''<br />
<br />
Passing two {{ic|-i}} flags will also display the list of backup files and their modification states:<br />
<br />
$ pacman -Qii ''package_name''<br />
<br />
To retrieve a list of the files installed by a package:<br />
<br />
$ pacman -Ql ''package_name''<br />
<br />
To retrieve a list of the files installed by a remote package:<br />
<br />
$ pacman -Fl ''package_name''<br />
<br />
To verify the presence of the files installed by a package:<br />
<br />
$ pacman -Qk ''package_name''<br />
<br />
Passing the {{ic|k}} flag twice will perform a more thorough check.<br />
<br />
To query the database to know which package a file in the file system belongs to:<br />
<br />
$ pacman -Qo ''/path/to/file_name''<br />
<br />
To query the database to know which remote package a file belongs to:<br />
<br />
$ pacman -F ''/path/to/file_name''<br />
<br />
To list all packages no longer required as dependencies (orphans):<br />
<br />
$ pacman -Qdt<br />
<br />
{{Tip|Add the above command to a pacman post-transaction [[#Hooks|hook]] to be notified if a transaction orphaned a package. This can be useful for being notified when a package has been dropped from a repository, since any dropped package will also be orphaned on a local installation (unless it was explicitly installed). To avoid any "failed to execute command" errors when no orphans are found, use the following command for {{ic|Exec}} in your hook: {{ic|<nowiki>/usr/bin/bash -c "/usr/bin/pacman -Qtd || /usr/bin/echo '=> None found.'"</nowiki>}}}}<br />
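<br />
One way to wire this up (the file name {{ic|orphans.hook}} here is only an example; any {{ic|*.hook}} file under the hook directory works, see {{man|5|alpm-hooks}}) is a hook such as:<br />
<br />
```ini
; /etc/pacman.d/hooks/orphans.hook -- example name
; Report orphaned packages after every package transaction.
[Trigger]
Operation = Install
Operation = Upgrade
Operation = Remove
Type = Package
Target = *

[Action]
Description = Checking for orphaned packages...
When = PostTransaction
Exec = /usr/bin/bash -c "/usr/bin/pacman -Qtd || /usr/bin/echo '=> None found.'"
```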
<br />
To list all packages explicitly installed and not required as dependencies:<br />
<br />
$ pacman -Qet<br />
<br />
See [[Pacman/Tips and tricks]] for more examples.<br />
<br />
==== Pactree ====<br />
{{Note|{{man|8|pactree}} is not part of the {{Pkg|pacman}} package anymore. Instead it can be found in {{Pkg|pacman-contrib}}.}}<br />
<br />
To view the dependency tree of a package:<br />
<br />
$ pactree ''package_name''<br />
<br />
To view the tree of packages that depend on a package (the reverse dependency tree), pass the reverse flag {{ic|-r}} to ''pactree'', or use ''whoneeds'' from {{AUR|pkgtools}}.<br />
<br />
==== Database structure ====<br />
<br />
The ''pacman'' databases are normally located at {{ic|/var/lib/pacman/sync}}. For each repository specified in {{ic|/etc/pacman.conf}} there will be a corresponding database file located there. Database files are gzipped tar archives containing one directory for each package, for example for the {{Pkg|which}} package:<br />
<br />
{{hc|$ tree which-2.21-5|<br />
which-2.21-5<br />
{{!}}-- desc<br />
}}<br />
<br />
The {{ic|desc}} file contains meta data such as the package description, dependencies, file size and MD5 hash.<br />
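<br />
Since a sync database is just a gzipped tar archive, its layout can be illustrated with a throwaway mock; the package entry below is fabricated purely for demonstration:<br />
<br />
```shell
# Build a tiny mock sync database: a gzipped tar with one directory per
# package, each holding a "desc" metadata file with %FIELD% markers.
mkdir -p /tmp/mockdb/which-2.21-5
printf '%%NAME%%\nwhich\n\n%%VERSION%%\n2.21-5\n' > /tmp/mockdb/which-2.21-5/desc
tar -czf /tmp/mock.db -C /tmp/mockdb .
tar -tzf /tmp/mock.db   # entries include ./which-2.21-5/desc
```

Listing a real database under {{ic|/var/lib/pacman/sync}} with {{ic|tar -tf}} shows the same per-package structure.<br />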
<br />
=== Cleaning the package cache ===<br />
<br />
''Pacman'' stores its downloaded packages in {{ic|/var/cache/pacman/pkg/}} and does not remove the old or uninstalled versions automatically. This has some advantages:<br />
# It allows one to [[downgrade]] a package without the need to retrieve the previous version through other means, such as the [[Arch Linux Archive]].<br />
# A package that has been uninstalled can easily be reinstalled directly from the cache folder, not requiring a new download from the repository.<br />
<br />
However, it is necessary to clean up the cache deliberately and periodically to prevent the folder from growing indefinitely in size.<br />
<br />
The {{man|8|paccache}} script, provided within the {{Pkg|pacman-contrib}} package, deletes all cached versions of installed and uninstalled packages, except for the most recent 3, by default:<br />
<br />
# paccache -r<br />
<br />
[[Enable]] and [[start]] {{ic|paccache.timer}} to discard unused packages weekly.<br />
<br />
{{Tip|1=You can create a [[#Hooks|hook]] to run this automatically after every pacman transaction, see [https://bbs.archlinux.org/viewtopic.php?pid=1694743#p1694743 examples] and {{AUR|pacman-cleanup-hook}}.}}<br />
<br />
You can also define how many recent versions you want to keep. To retain only one past version use:<br />
<br />
# paccache -rk1<br />
<br />
Add the {{ic|-u}}/{{ic|--uninstalled}} switch to limit the action of ''paccache'' to uninstalled packages. For example to remove all cached versions of uninstalled packages, use the following:<br />
<br />
# paccache -ruk0<br />
<br />
Or you can combine both for installed and uninstalled packages. For example, to keep the latest two versions of installed packages but remove all cached versions of uninstalled packages, use the following:<br />
<br />
# paccache -rk2 -ruk0<br />
<br />
See {{ic|paccache -h}} for more options.<br />
<br />
''Pacman'' also has some built-in options to clean the cache and the leftover database files from repositories which are no longer listed in the configuration file {{ic|/etc/pacman.conf}}. However, ''pacman'' does not offer the possibility to keep a number of past versions, and is therefore more aggressive than ''paccache'''s default options.<br />
<br />
To remove all the cached packages that are not currently installed, and the unused sync database, execute:<br />
<br />
# pacman -Sc<br />
<br />
To remove all files from the cache, use the clean switch twice; this is the most aggressive approach and will leave nothing in the cache folder:<br />
<br />
# pacman -Scc<br />
<br />
{{Warning|One should avoid deleting from the cache all past versions of installed packages and all uninstalled packages unless one desperately needs to free some disk space. This will prevent downgrading or reinstalling packages without downloading them again.}}<br />
<br />
{{AUR|pkgcacheclean}} and {{AUR|pacleaner}} are two further alternatives to clean the cache.<br />
<br />
=== Additional commands ===<br />
<br />
Download a package without installing it:<br />
<br />
# pacman -Sw ''package_name''<br />
<br />
Install a 'local' package that is not from a remote repository (e.g. the package is from the [[AUR]]):<br />
<br />
# pacman -U ''/path/to/package/package_name-version.pkg.tar.zst''<br />
<br />
To keep a copy of the local package in ''pacman'''s cache, use:<br />
<br />
# pacman -U file:///''path/to/package/package_name-version.pkg.tar.zst''<br />
<br />
Install a 'remote' package (not from a repository stated in ''pacman'''s configuration files):<br />
<br />
# pacman -U ''<nowiki>http://www.example.com/repo/example.pkg.tar.zst</nowiki>''<br />
<br />
To inhibit the {{ic|-S}}, {{ic|-U}} and {{ic|-R}} actions, {{ic|-p}} can be used.<br />
<br />
''Pacman'' always lists packages to be installed or removed and asks for permission before it takes action.<br />
<br />
=== Installation reason ===<br />
<br />
The ''pacman'' database distinguishes the installed packages in two groups according to the reason why they were installed:<br />
<br />
* '''explicitly-installed''': the packages that were literally passed to a generic ''pacman'' {{ic|-S}} or {{ic|-U}} command;<br />
* '''dependencies''': the packages that, despite never (in general) having been passed to a ''pacman'' installation command, were implicitly installed because [[dependency|required]] by another package that was explicitly installed.<br />
<br />
When installing a package, it is possible to force its installation reason to ''dependency'' with:<br />
<br />
# pacman -S --asdeps ''package_name''<br />
<br />
{{Tip|Installing optional dependencies with {{ic|--asdeps}} means that when you [[Pacman/Tips_and_tricks#Removing_unused_packages_.28orphans.29|remove orphans]], ''pacman'' will also remove leftover optional dependencies.}}<br />
<br />
When '''re'''installing a package, though, the current installation reason is preserved by default.<br />
<br />
The list of explicitly-installed packages can be shown with {{ic|pacman -Qe}}, while the complementary list of dependencies can be shown with {{ic|pacman -Qd}}.<br />
<br />
To change the installation reason of an already installed package, execute:<br />
<br />
# pacman -D --asdeps ''package_name''<br />
<br />
Use {{ic|--asexplicit}} to do the opposite operation.<br />
<br />
{{Note|Using {{ic|--asdeps}} and {{ic|--asexplicit}} options when upgrading, such as with {{ic|pacman -Syu ''package_name'' --asdeps}}, is discouraged. This would change the installation reason of not only the package being installed, but also the packages being upgraded.}}<br />
<br />
=== Search for a package that contains a specific file ===<br />
<br />
Sync the files database:<br />
<br />
# pacman -Fy<br />
<br />
Search for a package containing a file, e.g.:<br />
<br />
{{hc|$ pacman -F pacman|<br />
core/pacman 5.2.1-1 (base base-devel) [installed]<br />
usr/bin/pacman<br />
usr/share/bash-completion/completions/pacman<br />
extra/xscreensaver 5.43-1<br />
usr/lib/xscreensaver/pacman<br />
}}<br />
<br />
{{Tip|You can set a cron job or a systemd timer to sync the files database regularly.}}<br />
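As a sketch, such a timer could be a hypothetical pair of unit files (the unit names and schedule here are assumptions, not shipped defaults):<br />

```ini
# /etc/systemd/system/pacman-files-sync.service (hypothetical name)
[Unit]
Description=Refresh the pacman files database

[Service]
Type=oneshot
ExecStart=/usr/bin/pacman -Fy

# /etc/systemd/system/pacman-files-sync.timer (hypothetical name)
[Unit]
Description=Refresh the pacman files database weekly

[Timer]
OnCalendar=weekly
Persistent=true

[Install]
WantedBy=timers.target
```

The timer would then be enabled with {{ic|systemctl enable --now pacman-files-sync.timer}}.<br />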
<br />
For advanced functionality install [[pkgfile]], which uses a separate database with all files and their associated packages.<br />
<br />
== Configuration ==<br />
<br />
''Pacman'''s settings are located in {{ic|/etc/pacman.conf}}: this is the place where the user configures the program to work in the desired manner. In-depth information about the configuration file can be found in {{man|5|pacman.conf}}.<br />
<br />
=== General options ===<br />
<br />
General options are in the {{ic|[options]}} section. Read {{man|5|pacman.conf}} or look in the default {{ic|pacman.conf}} for information on what can be done here.<br />
<br />
==== Comparing versions before updating ====<br />
<br />
To see old and new versions of available packages, uncomment the "VerbosePkgLists" line in {{ic|/etc/pacman.conf}}. The output of {{ic|pacman -Syu}} will be like this:<br />
<br />
Package (6) Old Version New Version Net Change Download Size<br />
<br />
extra/libmariadbclient 10.1.9-4 10.1.10-1 0.03 MiB 4.35 MiB<br />
extra/libpng 1.6.19-1 1.6.20-1 0.00 MiB 0.23 MiB<br />
extra/mariadb 10.1.9-4 10.1.10-1 0.26 MiB 13.80 MiB<br />
<br />
==== Skip package from being upgraded ====<br />
<br />
{{Warning|Be careful in skipping packages, since [[partial upgrades]] are unsupported.}}<br />
<br />
To have a specific package skipped when [[#Upgrading packages|upgrading]] the system, specify it as such:<br />
<br />
IgnorePkg=linux<br />
<br />
For multiple packages use a space-separated list, or use additional {{ic|IgnorePkg}} lines. Also, [[Wikipedia:glob (programming)|glob]] patterns can be used. If you want to skip packages just once, you can also use the {{ic|--ignore}} option on the command line, this time with a comma-separated list.<br />
<br />
It will still be possible to upgrade the ignored packages using {{ic|pacman -S}}: in this case ''pacman'' will remind you that the packages have been included in an {{ic|IgnorePkg}} statement.<br />
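As a sketch, a hypothetical {{ic|/etc/pacman.conf}} excerpt combining these forms (package names are illustrative):<br />

```ini
# Space-separated list; additional IgnorePkg lines accumulate
IgnorePkg = linux linux-headers
# Glob patterns are allowed
IgnorePkg = nvidia*
```

For a one-off skip, the equivalent command line would be {{ic|1=pacman -Syu --ignore=linux,linux-headers}}.<br />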
<br />
==== Skip package group from being upgraded ====<br />
<br />
{{Warning|Be careful in skipping package groups, since [[partial upgrades]] are unsupported.}}<br />
<br />
As with packages, skipping a whole package group is also possible:<br />
<br />
IgnoreGroup=gnome<br />
<br />
==== Skip file from being upgraded ====<br />
<br />
All files listed with a {{Ic|NoUpgrade}} directive will never be touched during a package install/upgrade, and the new files will be installed with a ''.pacnew'' extension.<br />
<br />
NoUpgrade=''path/to/file''<br />
<br />
{{Note|The path refers to files in the package archive. Therefore, do not include the leading slash.}}<br />
<br />
==== Skip files from being installed to system ====<br />
<br />
To always skip installation of specific directories list them under {{Ic|NoExtract}}. For example, to avoid installation of [[systemd]] units use this:<br />
<br />
NoExtract=usr/lib/systemd/system/*<br />
<br />
Later rules override previous ones, and you can negate a rule by prepending {{ic|!}}.<br />
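As an illustrative sketch (the unit name is hypothetical), a later entry can re-include a file excluded by an earlier pattern:<br />

```ini
NoExtract = usr/lib/systemd/system/*
NoExtract = !usr/lib/systemd/system/sshd.service
```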
<br />
{{Tip|''Pacman'' issues warning messages about missing locales when updating a package for which locales have been cleared by ''localepurge'' or ''bleachbit''. Commenting out the {{ic|CheckSpace}} option in {{ic|pacman.conf}} suppresses such warnings, but consider that the space-checking functionality will be disabled for all packages.}}<br />
<br />
==== Maintain several configuration files ====<br />
<br />
If you have several configuration files (e.g. main configuration and configuration with [[testing]] repository enabled) and want to share options between them, you may use the {{ic|Include}} option declared in the configuration files, e.g.:<br />
<br />
Include = ''/path/to/common/settings''<br />
<br />
where {{ic|''/path/to/common/settings''}} file contains the same options for both configurations.<br />
<br />
==== Hooks ====<br />
<br />
''Pacman'' can run pre- and post-transaction hooks from the {{ic|/usr/share/libalpm/hooks/}} directory; more directories can be specified with the {{ic|HookDir}} option in {{ic|pacman.conf}}, which defaults to {{ic|/etc/pacman.d/hooks}}. Hook file names must be suffixed with ''.hook''. Pacman hooks are not interactive.<br />
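As an illustrative sketch (the file name, target package and command are hypothetical), a minimal hook might look like this:<br />

```ini
# /etc/pacman.d/hooks/example.hook (hypothetical)
[Trigger]
Operation = Upgrade
Type = Package
Target = linux

[Action]
Description = Example hook run after a kernel upgrade
When = PostTransaction
Exec = /usr/bin/echo kernel was upgraded
```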
<br />
''Pacman'' hooks are used, for example, in combination with {{ic|systemd-sysusers}} and {{ic|systemd-tmpfiles}} to automatically create system users and files during the installation of packages. For example, {{Pkg|tomcat8}} specifies that it wants a system user called {{ic|tomcat8}} and certain directories owned by this user. The ''pacman'' hooks {{ic|systemd-sysusers.hook}} and {{ic|systemd-tmpfiles.hook}} invoke {{ic|systemd-sysusers}} and {{ic|systemd-tmpfiles}} when ''pacman'' determines that {{Pkg|tomcat8}} contains files specifying users and tmp files.<br />
<br />
For more information on alpm hooks, see {{man|5|alpm-hooks}}.<br />
<br />
=== Repositories and mirrors ===<br />
<br />
Besides the special [[#General options|[options]]] section, each other {{ic|[section]}} in {{ic|pacman.conf}} defines a package repository to be used. A ''repository'' is a ''logical'' collection of packages, which are ''physically'' stored on one or more servers: for this reason each server is called a ''mirror'' for the repository.<br />
<br />
Repositories are distinguished between [[Official repositories|official]] and [[Unofficial user repositories|unofficial]]. The order of repositories in the configuration file matters; repositories listed first will take precedence over those listed later in the file when packages in two repositories have identical names, regardless of version number. In order to use a repository after adding it, you will need to [[#Upgrading packages|upgrade]] the whole system first.<br />
<br />
Each repository section allows defining the list of its mirrors directly or in a dedicated external file through the {{ic|Include}} directive: for example, the mirrors for the official repositories are included from {{ic|/etc/pacman.d/mirrorlist}}. See the [[Mirrors]] article for mirror configuration.<br />
<br />
==== Package security ====<br />
<br />
''Pacman'' supports package signatures, which add an extra layer of security to the packages. The default configuration, {{ic|1=SigLevel = Required DatabaseOptional}}, enables signature verification for all the packages on a global level: this can be overridden by per-repository {{ic|SigLevel}} lines. For more details on package signing and signature verification, take a look at [[pacman-key]].<br />
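For example, a hypothetical unofficial repository section may override the global default with its own {{ic|SigLevel}} line (the repository name and path are illustrative):<br />

```ini
[custom]
SigLevel = Optional TrustAll
Server = file:///home/custompkgs
```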
<br />
== Troubleshooting ==<br />
<br />
=== "Failed to commit transaction (conflicting files)" error ===<br />
<br />
If you see the following error: [https://bbs.archlinux.org/viewtopic.php?id=56373]<br />
<br />
error: could not prepare transaction<br />
error: failed to commit transaction (conflicting files)<br />
''package'': ''/path/to/file'' exists in filesystem<br />
Errors occurred, no packages were upgraded.<br />
<br />
This happens because ''pacman'' has detected a file conflict and will not overwrite files for you; this is by design, not a flaw.<br />
<br />
The problem is usually trivial to solve. A safe way is to first check if another package owns the file ({{ic|pacman -Qo ''/path/to/file''}}). If the file is owned by another package, [[Reporting bug guidelines|file a bug report]]. If the file is not owned by another package, rename the file which 'exists in filesystem' and re-issue the update command. If all goes well, the file may then be removed.<br />
<br />
If you had installed a program manually without using ''pacman'', for example through {{ic|make install}}, you have to remove/uninstall this program with all of its files. See also [[Pacman tips#Identify files not owned by any package]].<br />
<br />
Every installed package provides a {{ic|/var/lib/pacman/local/''package-version''/files}} file that contains metadata about this package. If this file gets corrupted, is empty or goes missing, it results in {{ic|file exists in filesystem}} errors when trying to update the package. Such an error usually concerns only one package. Instead of manually renaming and later removing all the files that belong to the package in question, you may explicitly run {{ic|pacman -S --overwrite ''glob'' ''package''}} to force ''pacman'' to overwrite files that match {{ic|''glob''}}.<br />
<br />
{{Warning|Generally avoid using the {{ic|--overwrite}} switch. See [[System maintenance#Avoid certain pacman commands]].}}<br />
<br />
=== "Failed to commit transaction (invalid or corrupted package)" error ===<br />
<br />
Look for ''.part'' files (partially downloaded packages) in {{ic|/var/cache/pacman/pkg/}} and remove them (often caused by usage of a custom {{ic|XferCommand}} in {{ic|pacman.conf}}).<br />
<br />
# find /var/cache/pacman/pkg/ -iname "*.part" -delete<br />
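The following sketch previews the same {{ic|find}} invocation in a scratch directory (the file names are made up), listing matches with {{ic|-print}} before deleting them:<br />

```shell
# Demonstrate the pattern on dummy files in a scratch directory;
# in practice you would point find at /var/cache/pacman/pkg/ as above.
tmp=$(mktemp -d)
touch "$tmp/foo-1.0-1-x86_64.pkg.tar.zst.part" "$tmp/bar-2.0-1-x86_64.pkg.tar.zst"
find "$tmp" -iname "*.part" -print    # preview: lists only the partial download
find "$tmp" -iname "*.part" -delete
ls "$tmp"                             # the complete package file remains
rm -r "$tmp"
```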
<br />
=== "Failed to init transaction (unable to lock database)" error ===<br />
<br />
When ''pacman'' is about to alter the package database, for example installing a package, it creates a lock file at {{ic|/var/lib/pacman/db.lck}}. This prevents another instance of ''pacman'' from trying to alter the package database at the same time.<br />
<br />
If ''pacman'' is interrupted while changing the database, this stale lock file can remain. If you are certain that no instances of ''pacman'' are running then delete the lock file:<br />
<br />
# rm /var/lib/pacman/db.lck<br />
<br />
=== Packages cannot be retrieved on installation ===<br />
<br />
This error manifests as {{ic|Not found in sync db}}, {{ic|Target not found}} or {{ic|Failed retrieving file}}.<br />
<br />
First, ensure the package actually exists. If you are certain that it exists, your package list may be out of date. Try running {{ic|pacman -Syu}} to force a refresh of all package lists and upgrade. Also make sure the selected [[mirrors]] are up-to-date and [[#Repositories and mirrors|repositories]] are correctly configured.<br />
<br />
It could also be that the repository containing the package is not enabled on your system, e.g. the package could be in the [[multilib]] repository, but ''multilib'' is not enabled in your {{ic|pacman.conf}}.<br />
<br />
See also [[FAQ#Why is there only a single version of each shared library in the official repositories?]].<br />
<br />
=== Pacman crashes during an upgrade ===<br />
<br />
If ''pacman'' crashes with a "database write" error while removing packages, and reinstalling or upgrading packages fails thereafter, do the following:<br />
<br />
# Boot using the Arch installation media. Preferably use a recent media so that the ''pacman'' version matches/is newer than the system. <br />
# Mount the system's root filesystem, e.g., {{ic|mount /dev/sdaX /mnt}} as root, and check the mount has sufficient space with {{ic|df -h}}<br />
# Mount the proc, sys and dev filesystems as well: {{ic|mount -t proc proc /mnt/proc; mount --rbind /sys /mnt/sys; mount --rbind /dev /mnt/dev }} <br />
# If the system uses default database and directory locations, you can now update the system's ''pacman'' database and upgrade it via {{ic|1=pacman --sysroot /mnt -Syu}} as root.<br />
#* Alternatively, if you cannot update/upgrade, refer to [[Pacman/Tips and tricks#Reinstalling all packages]].<br />
# After the upgrade, one way to double-check for not upgraded but still broken packages: {{ic|find /mnt/usr/lib -size 0}} <br />
# Followed by a re-install of any still broken package via {{ic|1=pacman --sysroot /mnt -S ''package''}}.<br />
<br />
=== Manually reinstalling pacman ===<br />
<br />
{{Warning|It is extremely easy to break your system even worse using this approach. Use this only as a last resort if the method from [[#Pacman crashes during an upgrade]] is not an option.}}<br />
<br />
Even if ''pacman'' is terribly broken, you can fix it manually by downloading the latest packages and extracting them to the correct locations. The rough steps to perform are:<br />
<br />
# Determine the {{pkg|pacman}} dependencies to install<br />
# Download each package from a [[mirror]] of your choice<br />
# Extract each package to root<br />
# Reinstall these packages with {{ic|pacman -S --overwrite}} to update the package database accordingly<br />
# Do a full system upgrade<br />
<br />
If you have a healthy Arch system on hand, you can see the full list of dependencies with:<br />
<br />
$ pacman -Q $(pactree -u pacman)<br />
<br />
But you may only need to update a few of them depending on your issue. An example of extracting a package is<br />
<br />
# tar -xvpwf ''package.tar.zst'' -C / --exclude .PKGINFO --exclude .INSTALL --exclude .MTREE --exclude .BUILDINFO<br />
<br />
Note the use of the {{ic|w}} flag for interactive mode. Running non-interactively is very risky since you might end up overwriting an important file. Also take care to extract packages in the correct order (i.e. dependencies first). [https://bbs.archlinux.org/viewtopic.php?id=95007 This forum post] contains an example of this process where only a couple of ''pacman'' dependencies are broken.<br />
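The extraction flags can be sketched safely with a dummy archive in scratch directories (all paths are hypothetical; the interactive {{ic|w}} flag is dropped so the sketch runs unattended):<br />

```shell
# Build a fake package containing metadata files and one payload file,
# then extract it with the same --exclude flags as above.
src=$(mktemp -d); dest=$(mktemp -d); pkg=$(mktemp)
mkdir -p "$src/usr/bin"
echo demo > "$src/usr/bin/demo"
touch "$src/.PKGINFO" "$src/.MTREE"
tar -cf "$pkg" -C "$src" .PKGINFO .MTREE usr
tar -xvpf "$pkg" -C "$dest" --exclude .PKGINFO --exclude .MTREE
ls "$dest"    # only usr/ was extracted; the metadata files were skipped
rm -r "$src" "$dest" "$pkg"
```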
<br />
=== "Unable to find root device" error after rebooting ===<br />
<br />
Most likely the [[initramfs]] became corrupted during a [[kernel]] update (improper use of ''pacman'''s {{ic|--overwrite}} option can be a cause). There are two options; first, try the ''Fallback'' entry.<br />
<br />
{{Tip|In case you removed the ''Fallback'' entry, you can always press the {{ic|Tab}} key when the boot loader menu shows up (for Syslinux) or {{ic|e}} (for GRUB or systemd-boot), edit the entry to change the initramfs image name to {{ic|initramfs-linux-fallback.img}}, and press {{ic|Enter}} or {{ic|b}} (depending on your [[boot loader]]) to boot with the new parameters.}}<br />
<br />
Once the system starts, run this command (for the stock {{Pkg|linux}} kernel) either from the console or from a terminal to rebuild the initramfs image:<br />
<br />
# mkinitcpio -p linux<br />
<br />
If that does not work, from a current Arch release (CD/DVD or USB stick), [[mount]] your root and boot partitions. Then [[chroot]] using ''arch-chroot'':<br />
<br />
# arch-chroot /mnt<br />
# pacman -Syu mkinitcpio systemd linux<br />
<br />
{{Note|<br />
* If you do not have a current release or if you only have some other "live" Linux distribution lying around, you can [[chroot]] the old-fashioned way. Obviously, there will be more typing than simply running the {{ic|arch-chroot}} script.<br />
* If ''pacman'' fails with {{ic|Could not resolve host}}, please [[Network configuration#Check the connection|check your internet connection]].<br />
* If you cannot enter the arch-chroot or chroot environment but need to re-install packages you can use the command {{ic|pacman --sysroot /mnt -Syu foo bar}} to use ''pacman'' on your root partition.}}<br />
<br />
Reinstalling the kernel (the {{Pkg|linux}} package) will automatically re-generate the initramfs image with {{ic|mkinitcpio -p linux}}. There is no need to do this separately.<br />
<br />
Afterwards, it is recommended that you run {{ic|exit}}, {{ic|umount /mnt/{boot,} }} and {{ic|reboot}}.<br />
<br />
=== Signature from "User <email@example.org>" is unknown trust, installation failed ===<br />
<br />
Potential solutions:<br />
* Update the known keys, i.e. {{ic|pacman-key --refresh-keys}}<br />
* Manually upgrade {{Pkg|archlinux-keyring}} package first, i.e. {{ic|pacman -Sy archlinux-keyring && pacman -Su}}<br />
* Follow [[pacman-key#Resetting all the keys]]<br />
<br />
=== Request on importing PGP keys ===<br />
<br />
If installing Arch with an outdated ISO, you are likely prompted to import PGP keys. Agree to download the key to proceed. If you are unable to add the PGP key successfully, update the keyring or upgrade {{Pkg|archlinux-keyring}} (see [[#Signature from "User <email@example.org>" is unknown trust, installation failed|above]]).<br />
<br />
=== Error: key "0123456789ABCDEF" could not be looked up remotely ===<br />
<br />
If packages are signed with new keys that were only recently added to {{Pkg|archlinux-keyring}}, these keys are not locally available during the update (a chicken-and-egg problem): the installed {{Pkg|archlinux-keyring}} does not contain the key until it is itself updated. ''Pacman'' tries to work around this by looking the key up on a keyserver, which might not be possible, e.g. behind proxies or firewalls, and results in the stated error. Upgrade {{Pkg|archlinux-keyring}} first as shown [[#Signature from "User <email@example.org>" is unknown trust, installation failed|above]].<br />
<br />
=== Signature from "User <email@archlinux.org>" is invalid, installation failed ===<br />
<br />
When the system time is faulty, signing keys are considered expired (or invalid) and signature checks on packages will fail with the following error:<br />
<br />
error: ''package'': signature from "User <email@archlinux.org>" is invalid<br />
error: failed to commit transaction (invalid or corrupted package (PGP signature))<br />
Errors occurred, no packages were upgraded.<br />
<br />
Make sure to correct the [[system time]], for example with {{ic|ntpd -qg}} run as root, and run {{ic|hwclock -w}} as root before subsequent installations or upgrades.<br />
<br />
=== "Warning: current locale is invalid; using default "C" locale" error ===<br />
<br />
As the error message says, your locale is not correctly configured. See [[Locale]].<br />
<br />
=== Pacman does not honor proxy settings ===<br />
<br />
Make sure that the relevant environment variables ({{ic|$http_proxy}}, {{ic|$ftp_proxy}} etc.) are set up. If you use ''pacman'' with [[sudo]], you need to configure sudo to [[sudo#Environment variables|pass these environment variables to pacman]]. Also, ensure the configuration of [[GnuPG#Use_a_keyserver|dirmngr]] has {{ic|honor-http-proxy}} in {{ic|/etc/pacman.d/gnupg/dirmngr.conf}} to honor the proxy when refreshing the keys.<br />
<br />
=== How do I reinstall all packages, retaining information on whether something was explicitly installed or as a dependency? ===<br />
<br />
To reinstall all the native packages: {{ic|pacman -Qnq {{!}} pacman -S -}} or {{ic|pacman -S $(pacman -Qnq)}} (the {{ic|-S}} option preserves the installation reason by default).<br />
<br />
You will then need to reinstall all the foreign packages, which can be listed with {{ic|pacman -Qmq}}.<br />
<br />
=== "Cannot open shared object file" error ===<br />
<br />
It looks like a previous ''pacman'' transaction removed or corrupted shared libraries needed by ''pacman'' itself.<br />
<br />
To recover from this situation you need to unpack the required libraries to your filesystem manually. First find which package contains the missing library, then locate that package in the ''pacman'' cache ({{ic|/var/cache/pacman/pkg/}}) and unpack the shared library to the filesystem. This will allow you to run ''pacman'' again.<br />
<br />
Now you need to [[#Installing specific packages|reinstall]] the broken package. Note that you need to use the {{ic|--overwrite}} flag, because you just unpacked system files that ''pacman'' does not know about. ''Pacman'' will correctly replace the unpacked shared library file with the one from the package.<br />
<br />
Finally, update the rest of the system.<br />
<br />
=== Freeze of package downloads ===<br />
<br />
Some issues have been reported regarding network problems that prevent ''pacman'' from updating/synchronizing repositories. [https://bbs.archlinux.org/viewtopic.php?id&#61;68944] [https://bbs.archlinux.org/viewtopic.php?id&#61;65728] When installing Arch Linux natively, these issues have been resolved by replacing the default ''pacman'' file downloader with an alternative (see [[Improve pacman performance]] for more details). When installing Arch Linux as a guest OS in [[VirtualBox]], this issue has also been addressed by using ''Host interface'' instead of ''NAT'' in the machine properties.<br />
<br />
=== Failed retrieving file 'core.db' from mirror ===<br />
<br />
If you receive this error message with correct [[mirrors]], try setting a different [[Resolv.conf|name server]].<br />
<br />
=== error: 'local-package.pkg.tar': permission denied ===<br />
<br />
If you want to install a package on an sshfs mount using {{ic|pacman -U}} and receive this error, move the package to a local directory and try to install again.<br />
<br />
=== error: could not determine cachedir mount point /var/cache/pacman/pkg ===<br />
<br />
When executing e.g. {{ic|pacman -Syu}} inside a chroot environment, the following errors are encountered:<br />
<br />
{{ic|error: could not determine cachedir mount point /var/cache/pacman/pkg}}<br />
{{ic|error: failed to commit transaction (not enough free disk space)}}<br />
<br />
This is frequently caused by the chroot directory not being a mountpoint when the chroot is entered. See [[Install_Arch_Linux_from_existing_Linux#Downloading_basic_tools|this note]] for a solution, and [https://man.archlinux.org/man/arch-chroot.8 arch-chroot(8)] for an explanation and an example of using bind mounting to make the chroot directory a mountpoint.<br />
<br />
== Understanding ==<br />
<br />
=== What happens during package install/upgrade/removal ===<br />
<br />
{{Accuracy|1=From [https://bbs.archlinux.org/viewtopic.php?pid=1775592 the forum], may be incomplete/incorrect so far. Move above [[#Troubleshooting]] or even inside [[#Installing packages]]?}}<br />
<br />
When successfully completing a package transaction, ''pacman'' performs the following high-level steps:<br />
<br />
# ''pacman'' obtains the to-be installed package file for all packages queued in a transaction.<br />
# ''pacman'' performs various checks that the packages can likely be installed.<br />
# If pre-existing ''pacman'' {{ic|PreTransaction}} hooks apply, they are executed.<br />
# Each package is installed/upgraded/removed in turn.<br />
## If the package has an install script, its {{ic|pre_install}} function is executed (or {{ic|pre_upgrade}} or {{ic|pre_remove}} in the case of an upgraded or removed package).<br />
## ''pacman'' deletes all the files from a pre-existing version of the package (in the case of an upgraded or removed package). However, files that were marked as configuration files in the package are kept (see [[Pacman/Pacnew and Pacsave]]).<br />
## ''pacman'' untars the package and dumps its files into the file system (in the case of an installed or upgraded package). Files that would overwrite configuration files that were kept and manually modified (see the previous step) are stored with a new name (''.pacnew'').<br />
## If the package has an install script, its {{ic|post_install}} function is executed (or {{ic|post_upgrade}} or {{ic|post_remove}} in the case of an upgraded or removed package).<br />
# If ''pacman'' {{ic|PostTransaction}} hooks that exist at the end of the transaction apply, they are executed.<br />
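The optional install script mentioned in the steps above is a plain shell file. The following is a minimal hypothetical sketch: only the function names are the real hook points, and the bodies are illustrative.<br />

```shell
# Hypothetical example install script: pacman sources this file and calls
# the function matching the operation, passing the package version(s).
pre_install() {
    echo "about to install version $1"
}
post_install() {
    echo "installed version $1"
}
post_upgrade() {
    # $1 = new version, $2 = old version
    echo "upgraded from $2 to $1"
}

# Simulate what pacman would do after an upgrade:
post_upgrade 1.1-1 1.0-2    # prints: upgraded from 1.0-2 to 1.1-1
```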
<br />
== See also ==<br />
<br />
* [https://archlinux.org/pacman/ Pacman Home Page]<br />
* {{man|3|libalpm}}<br />
* {{man|8|pacman}}<br />
* {{man|5|pacman.conf}}<br />
* {{man|8|repo-add}}</div>Cmsiglerhttps://wiki.archlinux.org/index.php?title=Chroot&diff=651804Chroot2021-02-10T13:14:40Z<p>Cmsigler: Add troubleshooting, with fix for arch-chroot mountpoint warning</p>
<hr />
<div>{{Lowercase title}}<br />
[[Category:System recovery]]<br />
[[Category:Sandboxing]]<br />
[[Category:Commands]]<br />
[[de:Chroot]]<br />
[[es:Chroot]]<br />
[[fa:تغییر ریشه]]<br />
[[fr:Chroot]]<br />
[[ja:Chroot]]<br />
[[pt:Chroot]]<br />
[[ru:Chroot]]<br />
[[zh-hans:Chroot]]<br />
{{Related articles start}}<br />
{{Related|PRoot}}<br />
{{Related|Linux Containers}}<br />
{{Related|systemd-nspawn}}<br />
{{Related articles end}}<br />
A [[Wikipedia:Chroot|chroot]] is an operation that changes the apparent root directory for the current running process and its children. A program that is run in such a modified environment cannot access files and commands outside that environment's directory tree. This modified environment is called a ''chroot jail''.<br />
<br />
== Reasoning ==<br />
<br />
Changing root is commonly done for performing system maintenance on systems where booting and/or logging in is no longer possible. Common examples are:<br />
<br />
* Reinstalling the [[bootloader]].<br />
* Rebuilding the [[mkinitcpio|initramfs image]].<br />
* Upgrading or [[downgrading packages]].<br />
* Resetting a [[Password recovery|forgotten password]].<br />
* Building packages in a clean chroot, see [[DeveloperWiki:Building in a clean chroot]].<br />
<br />
See also [[Wikipedia:Chroot#Limitations]].<br />
<br />
== Requirements ==<br />
<br />
* Root privilege.<br />
* Another Linux environment, e.g. a LiveCD or USB flash media, or from another existing Linux distribution.<br />
* Matching architecture environments; i.e. the chroot from and chroot to. The architecture of the current environment can be discovered with: {{ic|uname -m}} (e.g. i686 or x86_64).<br />
* Kernel modules loaded that are needed in the chroot environment.<br />
* [[Swap]] enabled if needed: {{bc|# swapon /dev/sd''xY''}}<br />
* Internet connection established if needed.<br />
<br />
== Usage ==<br />
<br />
{{Note|<br />
* Some [[systemd]] tools such as ''hostnamectl'', ''localectl'' and ''timedatectl'' can not be used inside a chroot, as they require an active [[dbus]] connection. [https://github.com/systemd/systemd/issues/798#issuecomment-126568596]<br />
* The file system that will serve as the new root ({{ic|/}}) of your chroot must be accessible (i.e., decrypted, mounted).<br />
}}<br />
<br />
There are two main options for using chroot, described below.<br />
<br />
=== Using arch-chroot ===<br />
<br />
The bash script {{ic|arch-chroot}} is part of the {{Pkg|arch-install-scripts}} package. Before it runs {{ic|/usr/bin/chroot}}, the script mounts API filesystems like {{ic|/proc}} and makes {{ic|/etc/resolv.conf}} available from the chroot.<br />
<br />
==== Enter a chroot ====<br />
<br />
Run arch-chroot with the new root directory as first argument:<br />
<br />
# arch-chroot ''/location/of/new/root''<br />
<br />
For example, in the [[installation guide]] this directory would be {{ic|/mnt}}:<br />
<br />
# arch-chroot /mnt<br />
<br />
To exit the chroot simply use:<br />
<br />
# exit<br />
<br />
==== Run a single command and exit ====<br />
<br />
To run a command from the chroot and exit again, append the command to the end of the line:<br />
<br />
# arch-chroot ''/location/of/new/root'' ''mycommand''<br />
<br />
For example, to run {{ic|mkinitcpio -p linux}} for a chroot located at {{ic|/mnt/arch}} do:<br />
<br />
# arch-chroot /mnt/arch mkinitcpio -p linux<br />
<br />
=== Using chroot ===<br />
<br />
{{Warning|When using {{ic|--rbind}}, some subdirectories of {{ic|dev/}} and {{ic|sys/}} will not be unmountable. Attempting to unmount with {{ic|umount -l}} in this situation will break your session, requiring a reboot. If possible, use {{ic|-o bind}} instead.}}<br />
<br />
In the following example {{ic|''/location/of/new/root''}} is the directory where the new root resides.<br />
<br />
First, mount the temporary API filesystems:<br />
<br />
# cd ''/location/of/new/root''<br />
# mount -t proc /proc proc/<br />
# mount -t sysfs /sys sys/<br />
# mount --rbind /dev dev/<br />
<br />
And optionally:<br />
<br />
# mount --rbind /run run/<br />
<br />
If you are running a UEFI system you will also need access to EFI variables. Otherwise, when installing GRUB you will receive a message similar to: {{ic|UEFI variables not supported on this machine}}:<br />
<br />
# mount --rbind /sys/firmware/efi/efivars sys/firmware/efi/efivars/<br />
<br />
Next, in order to use an internet connection in the chroot environment copy over the DNS details:<br />
<br />
# cp /etc/resolv.conf etc/resolv.conf<br />
<br />
Finally, to change root into {{ic|''/location/of/new/root''}} using a bash shell:<br />
<br />
# chroot ''/location/of/new/root'' /bin/bash<br />
<br />
{{Note|If you see the error:<br />
* {{ic|chroot: cannot run command '/usr/bin/bash': Exec format error}}, it is likely that the architectures of the host environment and chroot environment do not match.<br />
* {{ic|chroot: '/usr/bin/bash': permission denied}}, remount with the execute permission: {{ic|mount -o remount,exec ''/location/of/new/root''}}.<br />
** if checking this did not help, then [https://www.tldp.org/LDP/LG/issue52/okopnik.html make sure] the base components of the new environment are intact (if it is an Arch root, try {{ic|1=paccheck --root=''/location/of/new/root'' --files --file-properties --md5sum glibc filesystem}}, from {{Pkg|pacutils}})<br />
}}<br />
<br />
After chrooting it may be necessary to load the local bash configuration:<br />
<br />
# source /etc/profile<br />
# source ~/.bashrc<br />
<br />
{{Tip|Optionally, create a unique prompt to be able to differentiate your chroot environment:<br />
{{bc|1=# export PS1="(chroot) $PS1"}}<br />
}}<br />
<br />
When finished with the chroot, you can exit it via:<br />
<br />
# exit<br />
<br />
Then unmount the temporary file systems:<br />
<br />
# cd /<br />
# umount --recursive ''/location/of/new/root''<br />
<br />
{{Note|If there is an error mentioning something like {{ic|umount: /path: device is busy}}, this usually means that either a program (even a shell) was left running in the chroot or that a sub-mount still exists. Quit the program and use {{ic|findmnt -R ''/location/of/new/root''}} to find and then {{ic|umount}} sub-mounts. Some of them may be tricky to {{ic|umount}}; try {{ic|umount --force}}, or as a last resort {{ic|umount --lazy}}, which simply releases them. In either case, to be safe, {{ic|reboot}} as soon as possible if these remain unresolved, to avoid possible future conflicts.}}<br />
<br />
== Run graphical applications from chroot ==<br />
<br />
If you have an [[X server]] running on your system, you can start graphical applications from the chroot environment.<br />
<br />
To allow the chroot environment to connect to an X server, open a virtual terminal inside the X server (i.e. inside the desktop of the user that is currently logged in), then run the [[xhost]] command, which grants any local user permission to connect to the user's X server (see also [[Xhost]]):<br />
<br />
$ xhost +local:<br />
<br />
Then, to direct applications from the chroot to the X server, set the DISPLAY environment variable inside the chroot to match the DISPLAY variable of the user that owns the X server. For example, run:<br />
<br />
$ echo $DISPLAY<br />
<br />
as the user that owns the X server to see the value of DISPLAY. If the value is ":0" (for example), then in the chroot environment run:<br />
<br />
# export DISPLAY=:0<br />
<br />
== Without root privileges ==<br />
<br />
Chroot requires root privileges, which may not be desirable or possible for the user to obtain in certain situations. There are, however, various ways to simulate chroot-like behavior using alternative implementations.<br />
<br />
=== PRoot ===<br />
<br />
[[PRoot]] may be used to change the apparent root directory and use {{ic|mount --bind}} without root privileges. This is useful for confining applications to a single directory or running programs built for a different CPU architecture, but it has limitations because all files are owned by the user on the host system. PRoot provides a {{ic|--root-id}} argument that works around some of these limitations in a similar (albeit more limited) manner to ''fakeroot''.<br />
<br />
=== Fakechroot ===<br />
<br />
{{Pkg|fakechroot}} is a library shim that intercepts the chroot call and fakes the result. It can be used in conjunction with {{Pkg|fakeroot}} to simulate a chroot as a regular user:<br />
<br />
$ fakechroot fakeroot chroot ~/my-chroot bash<br />
<br />
== Troubleshooting ==<br />
<br />
=== arch-chroot: ''/location/of/new/root'' is not a mountpoint. This may have undesirable side effects. ===<br />
<br />
Upon executing {{ic|arch-chroot ''/location/of/new/root''}} a warning is issued:<br />
<br />
{{ic|<nowiki>==</nowiki>> WARNING: ''/location/of/new/root'' is not a mountpoint. This may have undesirable side effects.}}<br />
<br />
See [https://man.archlinux.org/man/arch-chroot.8 arch-chroot(8)] for an explanation and an example of using bind mounting to make the chroot a mountpoint.<br />
<br />
== See also ==<br />
<br />
* [https://help.ubuntu.com/community/BasicChroot Basic Chroot]</div>Cmsiglerhttps://wiki.archlinux.org/index.php?title=User:Cmsigler/RISC-V&diff=650537User:Cmsigler/RISC-V2021-02-02T16:19:48Z<p>Cmsigler: Rearrange to move Experimental Development near top of page</p>
<hr />
<div>'''Working Document: Planning to Port Arch to RISC-V -- please edit :)'''<br />
<br />
Please edit this to your heart's content, without destroying prior work of value, or of historical or reference significance. I would like to construct a plan for porting Arch to the newly emerging RISC-V hardware.<br />
<br />
I am very much a novice to this. I've had success running ARM SBCs via [https://archlinuxarm.org/ ArchLinuxArm], but I've never ported anything :\<br />
<br />
== A simple RISC-V logo ==<br />
<br />
I quickly created a personal, non-professional, simple text logo for RISC-V. I don't have sysop upload privilege, so here's a link to view:<br />
<br />
https://drive.google.com/file/d/1WKZLl0G_BriPsmRQaMz6yGtIcjJ7pqmd/view?usp=sharing<br />
<br />
I license this logo under CC BY-SA 2.0. If it is of any interest, feel free to use it with attribution.<br />
<br />
== Target hardware ==<br />
<br />
# '''<u>α</u>''' -- [https://www.cnx-software.com/2020/11/09/xuantie-c906-based-allwinner-risc-v-processor-to-power-12-linux-sbcs/ Unnamed Allwinner single-core XuanTie C906 64-bit RISC-V (RV64GCV) processor] @ up to 1 GHz; 22nm manufacturing process. See also [http://linuxgizmos.com/risc-v-based-allwinner-chip-to-debut-on-13-linux-hacker-board/ this LinuxGizmos article].<br />
#* This first one is nice because it's cheap (US$12.50 IIUC), albeit underpowered compared to ARM SoC boards available at this time (November, 2020).<br />
# '''<u>β</u>''' -- RISC-V International Open Source (RIOS) Laboratory is collaborating with Imagination Technologies to bring [https://www.cnx-software.com/2020/09/04/picorio-linux-risc-v-sbc-is-an-open-source-alternative-to-raspberry-pi-board/ PicoRio RISC-V SBC] to market at a price point similar to Raspberry Pi. See also [https://riscv.org/blog/2020/11/picorio-the-raspberry-pi-like-small-board-computer-for-risc-v/ this blog article from RISC-V International].<br />
#* This second one is also projected to cost close to the RPi.<br />
# '''<u>γ</u>''' -- BeagleBoard.org and Seeed unveiled an open-spec, $119-and-up [https://beagleboard.org/beaglev “BeagleV” SBC] with a StarFive JH7100 SoC with dual SiFive U74 RISC-V cores, 1-TOPS NPU, DSP, and VPU. The SBC ditches the Cape expansion for a Pi-like 40-pin GPIO. See also [http://linuxgizmos.com/beaglev-sbc-runs-linux-on-ai-enabled-risc-v-soc/ this LinuxGizmos article], and [https://www.extremetech.com/computing/319187-new-beagle-board-offers-dual-core-risc-v-targets-ai-applications this ExtremeTech article].<br />
#* Initial prices are given as US$119 and US$149, which is much more affordable than the original system motherboard.<br />
<br />
(Any others? I don't plan on spending US$1,000 on a development system/motherboard.)<br />
<br />
== Experimental development: Procedures and steps used ==<br />
<br />
* Initial setup, testing for further work<br />
*# Git repository setup -- Push commits to gitlab repos, mirror gitlab to forked github repos<br />
*#* Clone RISC-V repositories on github to local repos<br />
*#* Fork RISC-V repositories on github<br />
*#* Create gitlab repositories<br />
*#* Rename github origins for cloned repos, then add gitlab repos as origins<br />
*#* Push local repos to gitlab<br />
*#* Configure gitlab repos to mirror to github forks using a personal access token generated on github<br />
*# Test running x86_64/amd64 installations via chroot (or systemd-nspawn)<br />
*#* Gentoo:<br />
*#** Create a subdirectory for the chroot installation; untar Gentoo amd64 stage3 tarball into subdirectory {{bc|$ cd ./subdir/ && sudo tar xpvf stage3-*.tar.xz --xattrs-include<nowiki>=</nowiki>'*.*' --numeric-owner && cd ..}}<br />
*#** Configure and emerge:<br />
*#**# Configure Gentoo compilation options in {{ic|./subdir/etc/portage/make.conf}}; copy DNS info into {{ic|./subdir/etc/resolv.conf}}<br />
*#**# Enter chroot environment {{bc|$ sudo arch-chroot ./subdir/}}<br />
*#**# {{bc|# source /etc/profile && source $HOME/.bashrc && export PS1<nowiki>=</nowiki>"(chroot) $<nowiki>{</nowiki>PS1<nowiki>}</nowiki>"}}<br />
*#**# {{bc|(chroot) # emerge --sync}}<br />
*#**# Double-check system profile (for amd64 systemd) is correct {{bc|(chroot) # eselect profile list}}<br />
*#**# {{bc|(chroot) # emerge @world}}<br />
*#**# Configure {{ic|/etc/timezone}}<br />
*#**# {{bc|(chroot) # emerge --config sys-libs/timezone-data}}<br />
*#**# Configure {{ic|/etc/locale.gen}}<br />
*#**# {{bc|(chroot) # locale-gen}}<br />
*#**# Configure {{ic|/etc/env.d/02locale}}<br />
*#**# {{bc|(chroot) # env-update && source /etc/profile && export PS1<nowiki>=</nowiki>"(chroot) $<nowiki>{</nowiki>PS1<nowiki>}</nowiki>"}}<br />
*#**# Configure {{ic|/etc/conf.d/hostname}}<br />
*#**# Exit chroot environment {{bc|(chroot) # exit}}<br />
*#** Chroot (or systemd-nspawn) into subdirectory and test normal administrative operations, e.g., {{bc|(chroot) # emerge --sync}} {{bc|(chroot) # emerge @world}}<br />
*#* Arch:<br />
*#** Install base, base-devel {{bc|$ sudo pacstrap -c ./subdir/ base base-devel}}<br />
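The Arch chroot test above can be sketched as a guarded script. {{ic|./subdir}} and the {{ic|RUN_PACSTRAP}} opt-in switch are illustrative placeholders; a real run needs root and the arch-install-scripts package:<br />
<br />
```shell
#!/bin/sh
# Sketch: install base and base-devel into a subdirectory with pacstrap,
# then enter it with arch-chroot. RUN_PACSTRAP is a hypothetical opt-in
# switch so the script stays a dry run unless explicitly enabled.
target=./subdir
if [ -n "$RUN_PACSTRAP" ] && [ "$(id -u)" -eq 0 ] && command -v pacstrap >/dev/null 2>&1; then
    mkdir -p "$target"
    pacstrap -c "$target" base base-devel
    arch-chroot "$target" /bin/bash
    result="entered chroot"
else
    result="dry run (set RUN_PACSTRAP and run as root with pacstrap installed)"
fi
echo "$result"
```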
<br />
* RISC-V binary tests and demos<br />
*# Test compiling, linking RISC-V (riscv64-lp64d) C programs<br />
*#* Install [https://www.archlinux.org/packages/community/x86_64/riscv64-linux-gnu-gcc/ riscv64-linux-gnu-gcc], [https://www.archlinux.org/packages/extra/x86_64/qemu-arch-extra/ qemu-arch-extra], [https://aur.archlinux.org/packages/qemu-user-static-bin/ qemu-user-static-bin], [https://aur.archlinux.org/packages/binfmt-qemu-static-all-arch/ binfmt-qemu-static-all-arch]<br />
*#* Write hello_world.c, other test programs<br />
*#* Compile: {{bc|$ riscv64-linux-gnu-gcc -c -Wall -o hello_world-riscv64.o hello_world.c}}<br />
*#* Link: {{bc|$ riscv64-linux-gnu-gcc -o hello_world-riscv64 hello_world-riscv64.o}}<br />
*#* Run: {{bc|$ qemu-riscv64 -L /usr/riscv64-linux-gnu/ ./hello_world-riscv64}}<br />
*#* Any additional tests, experiments<br />
*# [https://archriscv.felixc.at/repo/ Repository of pre-built RISC-V pkgs from FelixOnMars]<br />
*#* Configure binfmt-qemu-static to run RISC-V binaries as native programs<br />
*#* Following the Arch installation guide, use pacstrap to install base pkgs into risc-v subtree<br />
*#* Run arch-chroot to run subtree as a RISC-V container<br />
*#* Test running RISC-V binaries from coreutils<br />
*#* Test running other binaries, and installing other RISC-V pkgs from this repository<br />
*#* Test using systemd-nspawn to run subtree as a RISC-V container<br />
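The compile/link/run test above can be collected into one script. The C source and fallback message are illustrative; the cross toolchain and user-mode QEMU come from the packages listed above:<br />
<br />
```shell
#!/bin/sh
# Sketch: cross-compile a C hello world for riscv64 and run it under
# user-mode QEMU, falling back to a plain message when the toolchain
# is not installed.
cat > hello_world.c <<'EOF'
#include <stdio.h>

int main(void)
{
    puts("hello, riscv64");
    return 0;
}
EOF

if command -v riscv64-linux-gnu-gcc >/dev/null 2>&1 && command -v qemu-riscv64 >/dev/null 2>&1; then
    riscv64-linux-gnu-gcc -c -Wall -o hello_world-riscv64.o hello_world.c
    riscv64-linux-gnu-gcc -o hello_world-riscv64 hello_world-riscv64.o
    # -L points QEMU at the cross sysroot so the RISC-V dynamic linker is found
    out=$(qemu-riscv64 -L /usr/riscv64-linux-gnu/ ./hello_world-riscv64)
else
    out="toolchain not installed"
fi
echo "$out"
```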
<br />
* Cross-bootstrapping to build basic RISC-V system<br />
*# [https://github.com/felixonmars/archriscv-packages RISC-V PKGBUILD repository pkgs from FelixOnMars]<br />
*#* Experiment with cross-compiling pkgs from RISC-V PKGBUILDs<br />
*#* Test building cross-compiled pkgs from all PKGBUILDs<br />
*#* Fix PKGBUILDs for which building fails<br />
*#* Add and commit patches for RISC-V PKGBUILDs to local repo<br />
*#* Push to origin (personal gitlab repo)<br />
*#* Send merge request upstream<br />
*# [https://github.com/archlinux-riscv/archlinux-cross-bootstrap archlinux-cross-bootstrap]<br />
*#* Use a build tree separate from the git source tree (is this the default?)<br />
*#* Initially, test build of all 4 stages '''without''' creating build/.KEEP_GOING file to ignore build errors; read .MAKEPKGLOG in each build directory, as well as teeing output of overall build process into a log file<br />
*#* Identify pkgs with build errors; debug and patch<br />
*#* Individually test rebuilding pkgs which failed with errors; correct patches<br />
*#* Add and commit patches to local git repository<br />
*#* Push to origin (personal gitlab repo)<br />
*#* Send merge request upstream<br />
<br />
== Basic thoughts and ideas for porting ==<br />
<br />
'''<u>Notes 2021/01/31</u>:'''<br />
<br />
Finally making some more progress:<br />
<br />
* Tested creating a Gentoo amd64/systemd chroot by unpacking a stage3 tarball into a target subdir, then pre-configuring, chrooting (via arch-chroot) into the target, and updating and configuring as needed the chroot system. This seems to work fine. Also installed a minimal X windows program, qemacs, inside the chroot and ran it after [[Chroot#Run_graphical_applications_from_chroot|allowing X windows programs access to the parent X server]].<br />
* Installed Arch base and base-devel into a target subdir via {{ic|pacstrap -c}}, then chrooted (via arch-chroot) into the target and configured the installation for timezone, locale and hostname.<br />
* Wrote, tested and debugged a few simple C programs on x86_64 in order to test compilation for the RISC-V riscv64-lp64d target. Then, after installing [https://www.archlinux.org/packages/community/x86_64/riscv64-linux-gnu-gcc/ riscv64-linux-gnu-gcc] to cross-compile, compiled and linked these to produce riscv64-lp64d binaries. After installing [https://www.archlinux.org/packages/extra/x86_64/qemu-arch-extra/ qemu-arch-extra], tested running them with {{ic|qemu-riscv64 -L /usr/riscv64-linux-gnu/}}.<br />
<br />
----<br />
<br />
'''<u>Additional info 2020/11/14</u>:'''<br />
<br />
Thanks to [https://bbs.archlinux.org/profile.php?id=84187 eschwartz], here are links to resources produced by [https://bbs.archlinux.org/profile.php?id=47848 FelixOnMars], who has tackled some RISC-V porting in his spare time:<br />
<br />
* [https://github.com/felixonmars/archriscv-packages Arch RISC-V Packages]<br />
* [https://archriscv.felixc.at/repo/ Arch RISC-V Repo]<br />
<br />
There's a lot of finished work he's done -- good stuff :) and one shouldn't be spending most of one's efforts reinventing the wheel ;)<br />
<br />
----<br />
<br />
'''<u>Revised 2020/11/13</u>:'''<br />
<br />
In essence, what I'm looking to do is:<br />
<br />
# Build RISC-V pkgs for Arch.<br />
# Build bare-metal RISC-V Arch disk images that can potentially be booted on physical hardware. Of course, device tree/drivers will have to be individually customized for each mainboard/SBC. Getting a disk image that works in KVM/QEMU would scratch my itch of getting most of the work done. Someone with more bare-metal hardware/driver experience may have to put the finishing touches on it.<br />
<br />
Hoping my understanding and itemized steps are more-or-less correct.... Methods to make porting progress:<br />
<br />
# So, if we want to build Arch RISC-V pkgs, we can simply cross-compile them. When a pkg won't build, we patch it until it does (assuming we can eventually get it working).<br />
#* Note that Vadim Kaushan (Disasm on github) has patched upstream [https://github.com/oaken-source/parabola-cross-bootstrap parabola-cross-bootstrap] from Andreas Grapentin (oaken-source on github) to create [https://github.com/archlinux-riscv/archlinux-cross-bootstrap archlinux-cross-bootstrap]. The last commit was January, 2019. Some things may have b0rken but it's worth trying (and patching) this to cross-compile base-devel pkgs to RISC-V. Then we can perhaps extend its coverage to more repository pkgs.<br />
# We can unpack a RISC-V stage tarball (or image). Then we can chroot, or use something really fancy like systemd-nspawn, into that tree and use QEMU user-mode emulation to run RISC-V binaries.<br />
#* Question: Will this even work? Answer: Yes! Host needs extra, AUR pkgs [https://www.archlinux.org/packages/extra/x86_64/qemu-arch-extra/ qemu-arch-extra] and/or [https://aur.archlinux.org/packages/qemu-user-static/ qemu-user-static] (or [https://aur.archlinux.org/packages/qemu-user-static-bin/ qemu-user-static-bin]), and [https://aur.archlinux.org/packages/binfmt-qemu-static/ binfmt-qemu-static] (or [https://aur.archlinux.org/packages/binfmt-qemu-static-all-arch/ binfmt-qemu-static-all-arch]). Copy required binaries down into the chroot tree, and/or [[Binfmt_misc_for_Java#Registering_file_type_with_binfmt_misc|take care of binfmt_misc support]].<br />
#* The AUR pkg [https://aur.archlinux.org/packages/proot/ proot], [[PRoot|PRoot wiki page]], may be useful.<br />
#* Note that to run dynamically linked binaries the linker path must be passed; see [https://gist.github.com/Liryna/10710751 Running ARM Programs under Linux] (also listed below). Quote:<br />
#** If you want a dynamically-linked executable, you've to pass the linker path too:<br>arm-linux-gnueabihf-gcc -ohello hello.c<br>qemu-arm -L /usr/arm-linux-gnueabihf/ ./hello # or qemu-arm-static<br />
#* Some references:<br />
#** [[QEMU#Chrooting_into_arm/arm64_environment_from_x86_64|Chrooting into arm/arm64 environment from x86_64]]<br />
#** [https://ownyourbits.com/2018/06/13/transparently-running-binaries-from-any-architecture-in-linux-with-qemu-and-binfmt_misc/ Transparently running binaries from any architecture in Linux with QEMU and binfmt_misc]<br />
#** [https://unix.stackexchange.com/questions/41889/how-can-i-chroot-into-a-filesystem-with-a-different-architechture Stackexchange: Chroot into a filesystem with a different architecture]<br />
#** [https://dev.to/asacasa/how-to-set-up-binfmtmisc-for-qemu-the-hard-way-3bl4 How to set up binfmt_misc for qemu the hard way]<br />
#** [https://wiki.gentoo.org/wiki/Embedded_Handbook/General/Compiling_with_qemu_user_chroot Gentoo Embedded Handbook/Compiling with qemu user chroot]<br />
#** [https://wiki.debian.org/QemuUserEmulation Debian QEMU User Emulation]<br />
#** [https://gist.github.com/Liryna/10710751 Running ARM Programs under Linux]<br />
#** [[Creating_packages_for_other_distributions#Creating_Arch_packages_in_OBS_with_OSC|Creating Arch pkgs in openSUSE Open Build Service]]; hmmm, pkgs could be built automatically and remotely...<br />
#* I'm not an lxc/Docker/Kubernetes guy, but IIUC this would be roughly similar to running a RISC-V container (no?). This may come in handy to make sure binary packages work as expected, I suppose. But it doesn't seem like a better way to build RISC-V pkgs than cross-compiling, although we could build dev infrastructure then pkgs inside the container.<br />
# We can create a VM disk image (GPT, of course) and unpack inside it, e.g., [https://dev.gentoo.org/~dilfridge/stages/ a RISC-V Gentoo stage3] and complete installation and configuration. We also need to compile a RISC-V kernel and bbl (Berkeley Boot Loader from [https://github.com/riscv/riscv-pk the riscv-pk github project]) (some instructions [https://www.cnx-software.com/2018/03/16/how-to-run-linux-on-risc-v-with-qemu-emulator/ here] and [https://risc-v-getting-started-guide.readthedocs.io/en/latest/linux-qemu.html here]), or we may be able to download pre-built kernel/binaries/utilities. Then we can boot the image with KVM/QEMU.<br />
#* Also available are [https://wiki.debian.org/RISC-V Debian RISC-V], [https://fedoraproject.org/wiki/Architectures/RISC-V Fedora RISC-V] and [https://en.opensuse.org/openSUSE:RISC-V openSUSE RISC-V] images. In those wiki pages are some instructions for putting together bare-metal disk images.<br />
#* Also, there are ready-made QEMU images that one can download and run. See [https://wiki.debian.org/RISC-V#OS_.2F_filesystem_images Debian], [https://dl.fedoraproject.org/pub/alt/risc-v/repo/virt-builder-images/images/ Fedora], [http://fedora.riscv.rocks/koji/tasks?order=-completion_time&state=closed&view=flat&method=createAppliance additional Fedora], [https://en.altlinux.org/Regular/riscv64 AltLinux]. I guess this ready-made solution would be the ultimate in ease of use, but it doesn't solve the problem of building from scratch. But at least inside a running VM I could download and build dev infrastructure (pacman, et al) and then build pkgs as if I were running on bare metal. It might be only a little slower than running on a SBC?...<br />
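As a sketch of what the binfmt-qemu-static packages set up, the binfmt_misc rule below maps little-endian ELF64 binaries with e_machine 0xf3 (RISC-V) to {{ic|qemu-riscv64-static}}; the interpreter path is an assumption about where the static QEMU binary is installed:<br />
<br />
```shell
#!/bin/sh
# Sketch: compose the binfmt_misc rule for riscv64 ELF binaries.
# magic/mask match a little-endian ELF64 header with e_machine 0xf3
# (RISC-V); the interpreter path is an assumed install location.
magic='\x7fELF\x02\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\xf3\x00'
mask='\xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff'
rule=":riscv64:M::${magic}:${mask}:/usr/bin/qemu-riscv64-static:F"
printf '%s\n' "$rule"
# As root, registering it would be:
#   printf '%s' "$rule" > /proc/sys/fs/binfmt_misc/register
```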
<br />
So, in the end, the best solutions seem to be:<br />
* Cross-compiling using existing scripting/tooling to build Arch pkgs. [https://github.com/archlinux-riscv/archlinux-cross-bootstrap archlinux-cross-bootstrap] should build base-devel pkgs. Using its scripts with modified PKGBUILDs, it should be possible to build other repository pkgs.<br />
* To test cross-compiled pkgs, create a chroot to use via QEMU user-mode emulation (after setting up [https://aur.archlinux.org/packages/qemu-user-static/ qemu-user-static] and [[Binfmt_misc_for_Java#Registering_the_file_type_with_binfmt_misc|binfmt_misc]]; note [https://aur.archlinux.org/packages/proot/ proot]). Follow the usual installation instructions using pacstrap, etc., to install RISC-V pkgs inside the chroot, then complete installation and configuration.<br />
** Note that this creates a running RISC-V Arch system.... Tinker with it to your heart's content. Then you could make an archive (tar) snapshot of it and use this as the basis of a bare-metal VM image :)<br />
** However, since these cross-compiled pkgs will contain dynamically linked binaries, I'm not sure ATM how binfmt_misc will work to run them with the "-L" flag to reference the RISC-V linker path? Although, once inside the chroot will user-mode QEMU just use the .so libraries inside the chroot?....<br />
* For final image building, create a VM disk image, follow the usual installation instructions to install an Arch RISC-V system inside the image, configure it just like a bare-metal installation, and see if it works :)<br />
* When running the VM disk image like bare metal, bootloading to boot the kernel will need to work. Otherwise, the bootloader and kernel used to boot and run the VM will have to be exterior to the disk image. We want to create a turnkey disk image including bootloading....<br />
<br />
----<br />
<br />
'''<u>Revised 2020/11/11</u>:'''<br />
<br />
Thanks to [https://bbs.archlinux.org/profile.php?id=36741 Awebb] for helping to distill disorganized ideas.<br />
<br />
Perhaps the best way to make progress on porting to RISC-V (mainboard and embedded/SBC) is simply to build Arch packages. The bootstrapping and embedded intricacies can be dealt with later. Plus, building packages can be done bit by bit when time is available. So....<br />
<br />
There are two ways to build packages, I believe:<br />
<br />
# Cross-compile using available [[#ArchTools|Arch RISC-V tooling]] (see below)<br />
# Install an existing RISC-V system in a KVM/QEMU VM<br />
<br />
Frankly, the VM method seems the easiest (and laziest) way. (Update: Probably not as easy/lazy as I was thinking.) There are [https://dev.gentoo.org/~dilfridge/stages/ Gentoo stage3 tarballs] so, assuming RISC-V support in KVM/QEMU is generally bug-free, setting up a RISC-V Gentoo VM should be a mechanical process. After this, Arch tools would be compiled inside the VM, and then base, base-devel and then core pkgs would be built. (For related information, see [https://wiki.gentoo.org/wiki/Raspberry_Pi the Gentoo 32-bit RaspberryPi page] and [https://wiki.gentoo.org/wiki/Raspberry_Pi_3_64_bit_Install the Gentoo 64-bit RaspberryPi page].)<br />
<br />
[[#RISC-V-QEMU|See links below]] for documentation and instructions.<br />
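As a sketch of the VM method, a typical full-system invocation for the QEMU riscv64 "virt" machine is composed below (but not executed); {{ic|Image}} and {{ic|rootfs.img}} are placeholder file names for a kernel and root filesystem built as described in the linked guides:<br />
<br />
```shell
#!/bin/sh
# Sketch: a typical qemu-system-riscv64 command line for the "virt"
# machine, composed as a string for illustration. Image and rootfs.img
# are placeholders; memory/CPU sizes are arbitrary examples.
cmd="qemu-system-riscv64 -machine virt -nographic -m 2G -smp 4 \
-kernel Image -append 'root=/dev/vda ro console=ttyS0' \
-drive file=rootfs.img,format=raw,id=hd0 -device virtio-blk-device,drive=hd0"
printf '%s\n' "$cmd"
```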
<br />
----<br />
<br />
'''<u>Original text</u>:'''<br />
<br />
According to the initial press release on the above '''<u>α</u>''' hardware target/SoC board, this "Allwinner RISC-V processor will run the Debian Linux operating system (Tina OS)." Provided that "Tina OS" licensing is "clean," one should be able to boot the supplied Debian OS and use that to develop an Arch bootstrap, or perhaps an entire port?<br />
<br />
IIUC, that should include the bootstrapping and device tree stuff along with device drivers for on-board equipment. (Drivers for optional devices and for devices on available modules/daughter cards will need to be developed, too.)<br />
<br />
After that, base and base-devel packages need to be built. Then building packages can be done on bare metal. However, it would obviously be faster and easier to build them using the available RISC-V toolchain in Arch. Right? (I wonder how [https://archlinuxarm.org/ ArchLinuxArm] handles building packages for and maintaining their repositories, as well as their AUR? Note to self: Research ALARM infrastructure....)<br />
<br />
Topics to be researched and written up or references pointed to:<br />
<br />
* HOWTO bootstrap a new port from ground zero -- See references below<br />
* How the bootstrapping process is booted/initiated<br />
* How device trees work<br />
* How device drivers work in bootstrapping, e.g., via initrd image<br />
* Cross-building Arch packages for RISC-V<br />
* Infra for a new port's repositories (CB/CI tools, repository hosting, front end w/hosting, etc.)<br />
* Infra for a new port's AUR<br />
* Possibility of free/open (non-binary blob) bootstrapping image/BIOS (similar to [https://www.coreboot.org/ coreboot])<br />
<br />
== Necessary tools for porting ==<br />
<br />
=== <span id="RISC-V-QEMU"></span>KVM/QEMU RISC-V emulation documentation, information and instructions ===<br />
<br />
[https://www.archlinux.org/packages/extra/x86_64/qemu-arch-extra/ qemu-arch-extra pkg] required for RISC-V (and other architecture) full-system emulation<br />
<br />
[https://aur.archlinux.org/packages/qemu-user-static/ qemu-user-static] (or <br />
[https://aur.archlinux.org/packages/qemu-user-static-bin/ qemu-user-static-bin]) for running QEMU user-mode emulation inside a chroot; [https://aur.archlinux.org/packages/binfmt-qemu-static/ binfmt-qemu-static] (or [https://aur.archlinux.org/packages/binfmt-qemu-static-all-arch/ binfmt-qemu-static-all-arch]) provides the matching binfmt_misc registration<br />
<br />
[[QEMU#Installation|Arch QEMU pkgs and variants from repository and AUR]]<br />
<br />
[https://risc-v-getting-started-guide.readthedocs.io/en/latest/linux-qemu.html RISC-V Linux on QEMU Getting Started Guide]<br />
<br />
[https://wiki.qemu.org/Documentation/Platforms/RISCV QEMU RISC-V Documentation]<br />
<br />
(See also: [https://www.sifive.com/blog/risc-v-qemu-part-2-the-risc-v-qemu-port-is-upstream SiFive RISC-V QEMU Upstream Announcement])<br />
<br />
[https://www.cnx-software.com/2018/03/16/how-to-run-linux-on-risc-v-with-qemu-emulator/ How To Run Linux on RISC-V with QEMU Emulator]<br />
<br />
=== <span id="ArchTools"></span>Tools available under Arch ===<br />
<br />
==== Tools from repositories ====<br />
<br />
[https://www.archlinux.org/groups/x86_64/risc-v/ RISC-V group]<br />
<br />
[https://www.archlinux.org/packages/?sort=&q=riscv&maintainer=&flagged= Results of "riscv" keyword search]<br />
<br />
==== Tools from AUR ====<br />
<br />
[https://aur.archlinux.org/packages/?K=RISCV&SB=p Results of AUR "RISCV" keyword search]<br />
<br />
[https://aur.archlinux.org/packages/?O=0&SeB=nd&K=risc-v&outdated=&SB=n&SO=a&PP=50&do_Search=Go Results of AUR "risc-v" keyword search]<br />
<br />
=== Tools from other dists ===<br />
<br />
[https://dev.gentoo.org/~dilfridge/stages/ Experimental Gentoo stages]<br />
<br />
[https://wiki.debian.org/RISC-V#OS_.2F_filesystem_images Debian images]<br />
<br />
[https://dl.fedoraproject.org/pub/alt/risc-v/repo/virt-builder-images/images/ Fedora images]<br />
<br />
[http://fedora.riscv.rocks/koji/tasks?order=-completion_time&state=closed&view=flat&method=createAppliance Additional Fedora images]<br />
<br />
[https://en.altlinux.org/Regular/riscv64 AltLinux RISC-V64 port]<br />
<br />
=== Non-Arch tools ===<br />
<br />
[http://crosstool-ng.github.io/ Crosstool-ng]<br />
<br />
[https://buildroot.org/ Buildroot]<br />
<br />
[https://trac.clfs.org/ Cross-Linux From Scratch] (this project seems to be dormant...)<br />
<br />
== As-is experimental RISC-V pkgs from work done by FelixOnMars ==<br />
<br />
[https://github.com/felixonmars/archriscv-packages Arch RISC-V Packages]<br />
<br />
[https://archriscv.felixc.at/repo/ Arch RISC-V Repo]<br />
<br />
== References -- '''PLEASE''' add to and curate this list! ==<br />
<br />
[https://readthedocs.org/projects/risc-v-getting-started-guide/downloads/pdf/latest/ RISC-V Getting Started Guide]<br />
<br />
https://github.com/archlinux-riscv/archlinux-cross-bootstrap<br />
<br />
https://bbs.archlinux.org/viewtopic.php?id=237370<br />
<br />
https://bbs.archlinux.org/viewtopic.php?id=260639<br />
<br />
https://five-embeddev.com/toolchain/2019/06/26/gcc-targets/<br />
<br />
https://wiki.qemu.org/Documentation/Platforms/RISCV<br />
<br />
https://wiki.gentoo.org/wiki/Project:RISC-V<br />
<br />
https://wiki.gentoo.org/wiki/Raspberry_Pi (for embedded build and cross-compiling info)<br />
<br />
https://wiki.gentoo.org/wiki/Raspberry_Pi_3_64_bit_Install<br />
<br />
https://wiki.archlinux.org/index.php/Cross-compiling_tools_package_guidelines<br />
<br />
https://github.com/crosstool-ng/crosstool-ng<br />
<br />
https://git.busybox.net/buildroot<br />
<br />
https://github.com/cross-lfs<br />
<br />
== Step-by-step "HOWTO" for porting ==<br />
<br />
== Detailed instructions and explanations ==<br />
<br />
== Issues addressed, with work-arounds ==<br />
<br />
== Bug report list -- Please make notes during triage, and move to above "Issues addressed" list when "solved" ==</div>Cmsiglerhttps://wiki.archlinux.org/index.php?title=User:Cmsigler/RISC-V&diff=650535User:Cmsigler/RISC-V2021-02-02T16:16:19Z<p>Cmsigler: Add notes on recent work; add link</p>
<hr />
<div>'''Working Document: Planning to Port Arch to RISC-V -- please edit :)'''<br />
<br />
Please edit this to your heart's content, without destroying prior work of value, or of historical or reference significance. I would like to construct a plan for porting Arch to the newly emerging RISC-V hardware.<br />
<br />
I am very much a novice to this. I've had success running ARM SBCs via [https://archlinuxarm.org/ ArchLinuxArm], but I've never ported anything :\<br />
<br />
== A simple RISC-V logo ==<br />
<br />
I quickly created a personal, non-professional, simple text logo for RISC-V. I don't have sysop upload privilege, so here's a link to view:<br />
<br />
https://drive.google.com/file/d/1WKZLl0G_BriPsmRQaMz6yGtIcjJ7pqmd/view?usp=sharing<br />
<br />
I license this logo under CC BY-SA 2.0. If it is of any interest, feel free to use it with attribution.<br />
<br />
== Target hardware ==<br />
<br />
# '''<u>α</u>''' -- [https://www.cnx-software.com/2020/11/09/xuantie-c906-based-allwinner-risc-v-processor-to-power-12-linux-sbcs/ Unnamed Allwinner single-core XuanTie C906 64-bit RISC-V (RV64GCV) processor] @ up to 1 GHz; 22nm manufacturing process. See also [http://linuxgizmos.com/risc-v-based-allwinner-chip-to-debut-on-13-linux-hacker-board/ this LinuxGizmos article].<br />
#* This first one is nice because it's cheap (US$12.50 IIUC), albeit underpowered compared to ARM SoC boards available at this time (November, 2020).<br />
# '''<u>β</u>''' -- RISC-V International Open Source (RIOS) Laboratory is collaborating with Imagination Technologies to bring [https://www.cnx-software.com/2020/09/04/picorio-linux-risc-v-sbc-is-an-open-source-alternative-to-raspberry-pi-board/ PicoRio RISC-V SBC] to market at a price point similar to Raspberry Pi. See also [https://riscv.org/blog/2020/11/picorio-the-raspberry-pi-like-small-board-computer-for-risc-v/ this blog article from RISC-V International].<br />
#* This second one is also projected to cost close to the RPi.<br />
# '''<u>γ</u>''' -- BeagleBoard.org and Seeed unveiled an open-spec, $119-and-up [https://beagleboard.org/beaglev “BeagleV” SBC] with a StarFive JH7100 SoC with dual SiFive U74 RISC-V cores, 1-TOPS NPU, DSP, and VPU. The SBC ditches the Cape expansion for a Pi-like 40-pin GPIO. See also [http://linuxgizmos.com/beaglev-sbc-runs-linux-on-ai-enabled-risc-v-soc/ this LinuxGizmos article], and [https://www.extremetech.com/computing/319187-new-beagle-board-offers-dual-core-risc-v-targets-ai-applications this ExtremeTech article].<br />
#* Initial prices are given as US$119 and US$149, which is much more affordable than the original system motherboard.<br />
<br />
(Any others? I don't plan on spending US$1,000 on a development system/motherboard.)<br />
<br />
== Basic thoughts and ideas for porting ==<br />
<br />
'''<u>Notes 2021/01/31</u>:'''<br />
<br />
Finally making some more progress:<br />
<br />
* Tested creating a Gentoo amd64/systemd chroot by unpacking a stage3 tarball into a target subdir, then pre-configuring, chrooting (via arch-chroot) into the target, and updating and configuring as needed the chroot system. This seems to work fine. Also installed a minimal X windows program, qemacs, inside the chroot and ran it after [[Chroot#Run_graphical_applications_from_chroot|allowing X windows programs access to the parent X server]].<br />
* Installed Arch base and base-devel into a target subdir via {{ic|pacstrap -c}}, then chrooted (via arch-chroot) into the target and configured the installation for timezone, locale and hostname.<br />
* Wrote, tested and debugged a few simple C programs on x86_64 in order to test compilation for the RISC-V riscv64-lp64d target. Then, after installing [https://www.archlinux.org/packages/community/x86_64/riscv64-linux-gnu-gcc/ riscv64-linux-gnu-gcc] to cross-compile, compiled and linked these to produce riscv64-lp64d binaries. After installing [https://www.archlinux.org/packages/extra/x86_64/qemu-arch-extra/ qemu-arch-extra], tested running them with {{ic|qemu-riscv64 -L /usr/riscv64-linux-gnu/}}.<br />
<br />
----<br />
<br />
'''<u>Additional info 2020/11/14</u>:'''<br />
<br />
Thanks to [https://bbs.archlinux.org/profile.php?id=84187 eschwartz], here are links to resources produced by [https://bbs.archlinux.org/profile.php?id=47848 FelixOnMars], who has tackled some RISC-V porting in his spare time:<br />
<br />
* [https://github.com/felixonmars/archriscv-packages Arch RISC-V Packages]<br />
* [https://archriscv.felixc.at/repo/ Arch RISC-V Repo]<br />
<br />
There's a lot of finished work he's done -- good stuff :) and one shouldn't be spending most of one's efforts reinventing the wheel ;)<br />
<br />
----<br />
<br />
'''<u>Revised 2020/11/13</u>:'''<br />
<br />
In essence, what I'm looking to do is:<br />
<br />
# Build RISC-V pkgs for Arch.<br />
# Build bare-metal RISC-V Arch disk images that can potentially be booted on physical hardware. Of course, device tree/drivers will have to be individually customized for each mainboard/SBC. Getting a disk image that works in KVM/QEMU would scratch my itch of getting most of the work done. Someone with more bare-metal hardware/driver work may have to put the finishing touches on it.<br />
<br />
Hoping my understanding and itemized steps are more-or-less correct.... Methods to make porting progress:<br />
<br />
# So, if we want to build Arch RISC-V pkgs, we can simply cross-compile them. When a pkg won't build, we patch it until it does (assuming we can eventually get it working).<br />
#* Note that Vadim Kaushan (Disasm on github) has patched upstream [https://github.com/oaken-source/parabola-cross-bootstrap parabola-cross-bootstrap] from Andreas Grapentin (oaken-source on github) to create [https://github.com/archlinux-riscv/archlinux-cross-bootstrap archlinux-cross-bootstrap]. The last commit was in January 2019, so some things may have broken since then, but it's worth trying (and patching) this to cross-compile base-devel pkgs for RISC-V. Then we can perhaps extend its coverage to more repository pkgs.<br />
# We can unpack a RISC-V stage tarball (or image). Then we can chroot, or use something really fancy like systemd-nspawn, into that tree and use QEMU user-mode emulation to run RISC-V binaries.<br />
#* Question: Will this even work? Answer: Yes! The host needs the [https://www.archlinux.org/packages/extra/x86_64/qemu-arch-extra/ qemu-arch-extra] pkg from extra and/or the AUR pkgs [https://aur.archlinux.org/packages/qemu-user-static/ qemu-user-static] (or [https://aur.archlinux.org/packages/qemu-user-static-bin/ qemu-user-static-bin]), plus [https://aur.archlinux.org/packages/binfmt-qemu-static/ binfmt-qemu-static] (or [https://aur.archlinux.org/packages/binfmt-qemu-static-all-arch/ binfmt-qemu-static-all-arch]). Copy the required binaries into the chroot tree, and/or [[Binfmt_misc_for_Java#Registering_file_type_with_binfmt_misc|take care of binfmt_misc support]].<br />
#* The AUR pkg [https://aur.archlinux.org/packages/proot/ proot], [[PRoot|PRoot wiki page]], may be useful.<br />
#* Note that to run dynamically linked binaries the linker path must be passed; see [https://gist.github.com/Liryna/10710751 Running ARM Programs under Linux] (also listed below). Quote:<br />
#** If you want a dynamically-linked executable, you've to pass the linker path too:<br>arm-linux-gnueabihf-gcc -ohello hello.c<br>qemu-arm -L /usr/arm-linux-gnueabihf/ ./hello # or qemu-arm-static<br />
#* Some references:<br />
#** [[QEMU#Chrooting_into_arm/arm64_environment_from_x86_64|Chrooting into arm/arm64 environment from x86_64]]<br />
#** [https://ownyourbits.com/2018/06/13/transparently-running-binaries-from-any-architecture-in-linux-with-qemu-and-binfmt_misc/ Transparently running binaries from any architecture in Linux with QEMU and binfmt_misc]<br />
#** [https://unix.stackexchange.com/questions/41889/how-can-i-chroot-into-a-filesystem-with-a-different-architechture Stackexchange: Chroot into a filesystem with a different architecture]<br />
#** [https://dev.to/asacasa/how-to-set-up-binfmtmisc-for-qemu-the-hard-way-3bl4 How to set up binfmt_misc for qemu the hard way]<br />
#** [https://wiki.gentoo.org/wiki/Embedded_Handbook/General/Compiling_with_qemu_user_chroot Gentoo Embedded Handbook/Compiling with qemu user chroot]<br />
#** [https://wiki.debian.org/QemuUserEmulation Debian QEMU User Emulation]<br />
#** [https://gist.github.com/Liryna/10710751 Running ARM Programs under Linux]<br />
#** [[Creating_packages_for_other_distributions#Creating_Arch_packages_in_OBS_with_OSC|Creating Arch pkgs in openSUSE Open Build Service]]; hmmm, pkgs could be built automatically and remotely...<br />
#* I'm not an lxc/Docker/Kubernetes guy, but IIUC this would be roughly similar to running a RISC-V container (no?). This may come in handy to make sure binary packages work as expected, I suppose. But it doesn't seem like a better way to build RISC-V pkgs than cross-compiling, although we could build dev infrastructure then pkgs inside the container.<br />
# We can create a VM disk image (GPT, of course) and unpack inside it, e.g., [https://dev.gentoo.org/~dilfridge/stages/ a RISC-V Gentoo stage3] and complete installation and configuration. We also need to compile a RISC-V kernel and bbl (Berkeley Boot Loader from [https://github.com/riscv/riscv-pk the riscv-pk github project]) (some instructions [https://www.cnx-software.com/2018/03/16/how-to-run-linux-on-risc-v-with-qemu-emulator/ here] and [https://risc-v-getting-started-guide.readthedocs.io/en/latest/linux-qemu.html here]), or we may be able to download pre-built kernel/binaries/utilities. Then we can boot the image with KVM/QEMU.<br />
#* Also available are [https://wiki.debian.org/RISC-V Debian RISC-V], [https://fedoraproject.org/wiki/Architectures/RISC-V Fedora RISC-V] and [https://en.opensuse.org/openSUSE:RISC-V openSUSE RISC-V] images. In those wiki pages are some instructions for putting together bare-metal disk images.<br />
#* Also, there are ready-made QEMU images that one can download and run. See [https://wiki.debian.org/RISC-V#OS_.2F_filesystem_images Debian], [https://dl.fedoraproject.org/pub/alt/risc-v/repo/virt-builder-images/images/ Fedora], [http://fedora.riscv.rocks/koji/tasks?order=-completion_time&state=closed&view=flat&method=createAppliance additional Fedora], [https://en.altlinux.org/Regular/riscv64 AltLinux]. I guess this ready-made solution would be the ultimate in ease of use, but it doesn't solve the problem of building from scratch. But at least inside a running VM I could download and build dev infrastructure (pacman, et al) and then build pkgs as if I were running on bare metal. It might be only a little slower than running on a SBC?...<br />
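The binfmt_misc registration mentioned in method 2 can be sketched by hand; this is exactly what the binfmt-qemu-static packages automate. The magic/mask byte strings below are the riscv64 little-endian ELF values used by QEMU's own qemu-binfmt-conf.sh, and should be verified against your installed QEMU version.

```shell
# Manual binfmt_misc registration sketch (normally done for you by the
# binfmt-qemu-static package). Needs root and a mounted binfmt_misc fs.
# magic/mask: riscv64 little-endian ELF values from qemu-binfmt-conf.sh.
magic='\x7fELF\x02\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\xf3\x00'
mask='\xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff'
if [ -w /proc/sys/fs/binfmt_misc/register ]; then
    # the F flag makes the kernel hold the interpreter open, so it keeps
    # working inside chroots that do not contain qemu-riscv64-static
    printf '%s' ":riscv64:M::${magic}:${mask}:/usr/bin/qemu-riscv64-static:F" \
        > /proc/sys/fs/binfmt_misc/register
else
    echo "binfmt_misc not writable here (need root); registration skipped"
fi
```

Once the entry exists (check /proc/sys/fs/binfmt_misc/riscv64), RISC-V ELF binaries run transparently, which is what makes arch-chroot into a RISC-V tree work.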
<br />
So, in the end, the best solutions seem to be:<br />
* Cross-compiling using existing scripting/tooling to build Arch pkgs. [https://github.com/archlinux-riscv/archlinux-cross-bootstrap archlinux-cross-bootstrap] should build base-devel pkgs. Using its scripts with modified PKGBUILDs, it should be possible to build other repository pkgs.<br />
* To test cross-compiled pkgs, create a chroot to use via QEMU user-mode emulation (after setting up [https://aur.archlinux.org/packages/qemu-user-static/ qemu-user-static] and [[Binfmt_misc_for_Java#Registering_the_file_type_with_binfmt_misc|binfmt_misc]]; note [https://aur.archlinux.org/packages/proot/ proot]). Follow the usual installation instructions using pacstrap, etc., to install RISC-V pkgs inside the chroot, then complete installation and configuration.<br />
** Note that this creates a running RISC-V Arch system.... Tinker with it to your heart's content. Then you could make an archive (tar) snapshot of it and use this as the basis of a bare-metal VM image :)<br />
** However, since these cross-compiled pkgs will contain dynamically linked binaries, I'm not sure ATM how binfmt_misc will handle the "-L" flag that references the RISC-V linker path. Then again, once inside the chroot, user-mode QEMU should resolve the dynamic linker and .so libraries from the chroot's own tree, making "-L" unnecessary there?....<br />
* For final image building, create a VM disk image, follow the usual installation instructions to install an Arch RISC-V system inside the image, configure it just like a bare-metal installation, and see if it works :)<br />
* When running the VM disk image like bare metal, the bootloader inside the image will need to work to boot the kernel. Otherwise, the bootloader and kernel used to boot and run the VM will have to live outside the disk image. We want a turnkey disk image with bootloading included....<br />
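A minimal sketch of entering such a chroot with a statically linked user-mode QEMU follows. The directory name ./riscv-root is hypothetical; since qemu-riscv64-static is statically linked it needs no host libraries once copied in, and inside the chroot the guest's own dynamic linker and .so libraries are used, so no "-L" flag is needed.

```shell
# Sketch: enter a RISC-V chroot via binfmt_misc + static user-mode QEMU.
# ./riscv-root is a hypothetical tree previously populated (e.g. by
# pacstrap or a stage tarball); guarded so it is a no-op otherwise.
root=./riscv-root
if [ -d "$root" ] && command -v arch-chroot >/dev/null 2>&1; then
    # copying the static emulator inside is only needed when binfmt_misc
    # was registered without the F (fix-binary) flag
    sudo cp /usr/bin/qemu-riscv64-static "$root/usr/bin/"
    sudo arch-chroot "$root" /usr/bin/uname -m   # smoke test: should report riscv64
else
    echo "no chroot tree at $root; nothing to do"
fi
```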
<br />
----<br />
<br />
'''<u>Revised 2020/11/11</u>:'''<br />
<br />
Thanks to [https://bbs.archlinux.org/profile.php?id=36741 Awebb] for helping to distill disorganized ideas.<br />
<br />
Perhaps the best way to make progress on porting to RISC-V (mainboard and embedded/SBC) is simply to build Arch packages. The bootstrapping and embedded intricacies can be dealt with later. Plus, building packages can be done bit by bit when time is available. So....<br />
<br />
There are two ways to build packages, I believe:<br />
<br />
# Cross-compile using available [[#ArchTools|Arch RISC-V tooling]] (see below)<br />
# Install an existing RISC-V system in a KVM/QEMU VM<br />
<br />
Frankly, the VM method seems the easiest (and laziest) way. (Update: Probably not as easy/lazy as I was thinking.) There are [https://dev.gentoo.org/~dilfridge/stages/ Gentoo stage3 tarballs] so, assuming RISC-V support in KVM/QEMU is generally bug-free, setting up a RISC-V Gentoo VM should be a mechanical process. After this, Arch tools would be compiled inside the VM, and then base, base-devel and then core pkgs would be built. (For related information, see [https://wiki.gentoo.org/wiki/Raspberry_Pi the Gentoo 32-bit RaspberryPi page] and [https://wiki.gentoo.org/wiki/Raspberry_Pi_3_64_bit_Install the Gentoo 64-bit RaspberryPi page].)<br />
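For the full-system route, the QEMU invocation might look like the following sketch, using the generic 'virt' machine with virtio disk and network. The vmlinux and riscv-disk.img file names are placeholders, and recent QEMU supplies its own OpenSBI firmware when booting the 'virt' machine with -kernel; check the QEMU RISC-V platform docs linked below before relying on it.

```shell
# Hedged sketch of booting a RISC-V disk image under full-system
# emulation. qemu-system-riscv64 comes from the qemu-arch-extra pkg;
# kernel and image file names are placeholders.
qemu_cmd="qemu-system-riscv64 \
  -machine virt -smp 4 -m 2G \
  -kernel vmlinux \
  -append 'root=/dev/vda1 rw console=ttyS0' \
  -drive file=riscv-disk.img,format=raw,id=hd0 \
  -device virtio-blk-device,drive=hd0 \
  -netdev user,id=net0 -device virtio-net-device,netdev=net0 \
  -nographic"
if command -v qemu-system-riscv64 >/dev/null 2>&1 && [ -f riscv-disk.img ] && [ -f vmlinux ]; then
    eval "$qemu_cmd"
else
    echo "QEMU or image files not present; command shown for reference only"
fi
```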
<br />
[[#RISC-V-QEMU|See links below]] for documentation and instructions.<br />
<br />
----<br />
<br />
'''<u>Original text</u>:'''<br />
<br />
According to the initial press release on the above '''<u>α</u>''' hardware target/SoC board, this "Allwinner RISC-V processor will run the Debian Linux operating system (Tina OS)." Provided that "Tina OS" licensing is "clean," one should be able to boot the supplied Debian OS and use that to develop an Arch bootstrap, or perhaps an entire port?<br />
<br />
IIUC, that should include the bootstrapping and device tree stuff along with device drivers for on-board equipment. (Drivers for optional devices and for devices on available modules/daughter cards will need to be developed, too.)<br />
<br />
After that, base and base-devel packages need to be built. Then building packages can be done on bare metal. However, it would obviously be faster and easier to build them using the available RISC-V toolchain in Arch. Right? (I wonder how [https://archlinuxarm.org/ ArchLinuxArm] handles building packages for and maintaining their repositories, as well as their AUR? Note to self: Research ALARM infrastructure....)<br />
<br />
Topics to be researched and written up or references pointed to:<br />
<br />
* HOWTO bootstrap a new port from ground zero -- See references below<br />
* How the bootstrapping process is booted/initiated<br />
* How device trees work<br />
* How device drivers work in bootstrapping, e.g., via initrd image<br />
* Cross-building Arch packages for RISC-V<br />
* Infra for a new port's repositories (CB/CI tools, repository hosting, front end w/hosting, etc.)<br />
* Infra for a new port's AUR<br />
* Possibility of free/open (non-binary blob) bootstrapping image/BIOS (similar to [https://www.coreboot.org/ coreboot])<br />
<br />
== Necessary tools for porting ==<br />
<br />
=== <span id="RISC-V-QEMU"></span>KVM/QEMU RISC-V emulation documentation, information and instructions ===<br />
<br />
[https://www.archlinux.org/packages/extra/x86_64/qemu-arch-extra/ qemu-arch-extra pkg] required for RISC-V (and other architecture) full-system emulation<br />
<br />
[https://aur.archlinux.org/packages/qemu-user-static/ qemu-user-static] (or [https://aur.archlinux.org/packages/qemu-user-static-bin/ qemu-user-static-bin]) for running QEMU user-mode emulation inside a chroot; [https://aur.archlinux.org/packages/binfmt-qemu-static/ binfmt-qemu-static] (or [https://aur.archlinux.org/packages/binfmt-qemu-static-all-arch/ binfmt-qemu-static-all-arch]) for registering those emulators with binfmt_misc<br />
<br />
[[QEMU#Installation|Arch QEMU pkgs and variants from repository and AUR]]<br />
<br />
[https://risc-v-getting-started-guide.readthedocs.io/en/latest/linux-qemu.html RISC-V Linux on QEMU Getting Started Guide]<br />
<br />
[https://wiki.qemu.org/Documentation/Platforms/RISCV QEMU RISC-V Documentation]<br />
<br />
(See also: [https://www.sifive.com/blog/risc-v-qemu-part-2-the-risc-v-qemu-port-is-upstream SiFive RISC-V QEMU Upstream Announcement])<br />
<br />
[https://www.cnx-software.com/2018/03/16/how-to-run-linux-on-risc-v-with-qemu-emulator/ How To Run Linux on RISC-V with QEMU Emulator]<br />
<br />
=== <span id="ArchTools"></span>Tools available under Arch ===<br />
<br />
==== Tools from repositories ====<br />
<br />
[https://www.archlinux.org/groups/x86_64/risc-v/ RISC-V group]<br />
<br />
[https://www.archlinux.org/packages/?sort=&q=riscv&maintainer=&flagged= Results of "riscv" keyword search]<br />
<br />
==== Tools from AUR ====<br />
<br />
[https://aur.archlinux.org/packages/?K=RISCV&SB=p Results of AUR "RISCV" keyword search]<br />
<br />
[https://aur.archlinux.org/packages/?O=0&SeB=nd&K=risc-v&outdated=&SB=n&SO=a&PP=50&do_Search=Go Results of AUR "risc-v" keyword search]<br />
<br />
=== Tools from other dists ===<br />
<br />
[https://dev.gentoo.org/~dilfridge/stages/ Experimental Gentoo stages]<br />
<br />
[https://wiki.debian.org/RISC-V#OS_.2F_filesystem_images Debian images]<br />
<br />
[https://dl.fedoraproject.org/pub/alt/risc-v/repo/virt-builder-images/images/ Fedora images]<br />
<br />
[http://fedora.riscv.rocks/koji/tasks?order=-completion_time&state=closed&view=flat&method=createAppliance Additional Fedora images]<br />
<br />
[https://en.altlinux.org/Regular/riscv64 AltLinux RISC-V64 port]<br />
<br />
=== Non-Arch tools ===<br />
<br />
[http://crosstool-ng.github.io/ Crosstool-ng]<br />
<br />
[https://buildroot.org/ Buildroot]<br />
<br />
[https://trac.clfs.org/ Cross-Linux From Scratch] (this project seems to be dormant...)<br />
<br />
== As-is experimental RISC-V pkgs from work done by FelixOnMars ==<br />
<br />
[https://github.com/felixonmars/archriscv-packages Arch RISC-V Packages]<br />
<br />
[https://archriscv.felixc.at/repo/ Arch RISC-V Repo]<br />
<br />
== References -- '''PLEASE''' add to and curate this list! ==<br />
<br />
[https://readthedocs.org/projects/risc-v-getting-started-guide/downloads/pdf/latest/ RISC-V Getting Started Guide]<br />
<br />
https://github.com/archlinux-riscv/archlinux-cross-bootstrap<br />
<br />
https://bbs.archlinux.org/viewtopic.php?id=237370<br />
<br />
https://bbs.archlinux.org/viewtopic.php?id=260639<br />
<br />
https://five-embeddev.com/toolchain/2019/06/26/gcc-targets/<br />
<br />
https://wiki.qemu.org/Documentation/Platforms/RISCV<br />
<br />
https://wiki.gentoo.org/wiki/Project:RISC-V<br />
<br />
https://wiki.gentoo.org/wiki/Raspberry_Pi (for embedded build and cross-compiling info)<br />
<br />
https://wiki.gentoo.org/wiki/Raspberry_Pi_3_64_bit_Install<br />
<br />
https://wiki.archlinux.org/index.php/Cross-compiling_tools_package_guidelines<br />
<br />
https://github.com/crosstool-ng/crosstool-ng<br />
<br />
https://git.busybox.net/buildroot<br />
<br />
https://github.com/cross-lfs<br />
<br />
== Experimental development: Procedures and steps used ==<br />
<br />
* Initial setup, testing for further work<br />
*# Git repository setup -- Push commits to gitlab repos, mirror gitlab to forked github repos<br />
*#* Clone RISC-V repositories on github to local repos<br />
*#* Fork RISC-V repositories on github<br />
*#* Create gitlab repositories<br />
*#* Rename github origins for cloned repos, then add gitlab repos as origins<br />
*#* Push local repos to gitlab<br />
*#* Configure gitlab repos to mirror to github forks using a personal access token generated on github<br />
*# Test running x86_64/amd64 installations via chroot (or systemd-nspawn)<br />
*#* Gentoo:<br />
*#** Create a subdirectory for the chroot installation; untar the Gentoo amd64 stage3 tarball into it (note {{ic|../}} -- after the {{ic|cd}}, the tarball sits in the parent directory) {{bc|$ cd ./subdir/ && sudo tar xpvf ../stage3-*.tar.xz --xattrs-include<nowiki>=</nowiki>'*.*' --numeric-owner && cd ..}}<br />
*#** Configure and emerge:<br />
*#**# Configure Gentoo compilation options in {{ic|./subdir/etc/portage/make.conf}}; copy DNS info into {{ic|./subdir/etc/resolv.conf}}<br />
*#**# Enter chroot environment {{bc|$ sudo arch-chroot ./subdir/}}<br />
*#**# {{bc|# source /etc/profile && source $HOME/.bashrc && export PS1<nowiki>=</nowiki>"(chroot) $<nowiki>{</nowiki>PS1<nowiki>}</nowiki>"}}<br />
*#**# {{bc|(chroot) # emerge --sync}}<br />
*#**# Double-check system profile (for amd64 systemd) is correct {{bc|(chroot) # eselect profile list}}<br />
*#**# {{bc|(chroot) # emerge @world}}<br />
*#**# Configure {{ic|/etc/timezone}}<br />
*#**# {{bc|(chroot) # emerge --config sys-libs/timezone-data}}<br />
*#**# Configure {{ic|/etc/locale.gen}}<br />
*#**# {{bc|(chroot) # locale-gen}}<br />
*#**# Configure {{ic|/etc/env.d/02locale}}<br />
*#**# {{bc|(chroot) # env-update && source /etc/profile && export PS1<nowiki>=</nowiki>"(chroot) $<nowiki>{</nowiki>PS1<nowiki>}</nowiki>"}}<br />
*#**# Configure {{ic|/etc/conf.d/hostname}}<br />
*#**# Exit chroot environment {{bc|(chroot) # exit}}<br />
*#** Chroot (or systemd-nspawn) into subdirectory and test normal administrative operations, e.g., {{bc|(chroot) # emerge --sync}} {{bc|(chroot) # emerge @world}}<br />
*#* Arch:<br />
*#** Install base, base-devel {{bc|$ sudo pacstrap -c ./subdir/ base base-devel}}<br />
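The follow-up configuration for the pacstrap'd Arch tree, mirroring the Gentoo timezone/locale/hostname steps above, can be sketched as a small script run inside the chroot. The timezone, locale, and hostname values are examples only; run this only after pacstrap has populated ./subdir/.

```shell
# Sketch of in-chroot configuration for the pacstrap'd tree.
# Values (UTC, en_US.UTF-8, archriscv) are illustrative examples.
cat > configure-chroot.sh <<'EOF'
#!/bin/sh -e
ln -sf /usr/share/zoneinfo/UTC /etc/localtime      # pick your timezone
sed -i 's/^#en_US.UTF-8/en_US.UTF-8/' /etc/locale.gen
locale-gen
echo 'LANG=en_US.UTF-8' > /etc/locale.conf
echo archriscv > /etc/hostname
EOF
chmod +x configure-chroot.sh
if [ -d ./subdir/etc ] && command -v arch-chroot >/dev/null 2>&1; then
    sudo cp configure-chroot.sh ./subdir/root/
    sudo arch-chroot ./subdir/ /root/configure-chroot.sh
fi
```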
<br />
* RISC-V binary tests and demos<br />
*# Test compiling, linking RISC-V (riscv64-lp64d) C programs<br />
*#* Install [https://www.archlinux.org/packages/community/x86_64/riscv64-linux-gnu-gcc/ riscv64-linux-gnu-gcc], [https://www.archlinux.org/packages/extra/x86_64/qemu-arch-extra/ qemu-arch-extra], [https://aur.archlinux.org/packages/qemu-user-static-bin/ qemu-user-static-bin], [https://aur.archlinux.org/packages/binfmt-qemu-static-all-arch/ binfmt-qemu-static-all-arch]<br />
*#* Write hello_world.c, other test programs<br />
*#* Compile: {{bc|$ riscv64-linux-gnu-gcc -c -Wall -o hello_world-riscv64.o hello_world.c}}<br />
*#* Link: {{bc|$ riscv64-linux-gnu-gcc -o hello_world-riscv64 hello_world-riscv64.o}}<br />
*#* Run: {{bc|$ qemu-riscv64 -L /usr/riscv64-linux-gnu/ ./hello_world-riscv64}}<br />
*#* Any additional tests, experiments<br />
*# [https://archriscv.felixc.at/repo/ Repository of pre-built RISC-V pkgs from FelixOnMars]<br />
*#* Configure binfmt-qemu-static to run RISC-V binaries as native programs<br />
*#* Following the Arch installation guide, use pacstrap to install base pkgs into risc-v subtree<br />
*#* Run arch-chroot to run subtree as a RISC-V container<br />
*#* Test running RISC-V binaries from coreutils<br />
*#* Test running other binaries, and installing other RISC-V pkgs from this repository<br />
*#* Test using systemd-nspawn to run subtree as a RISC-V container<br />
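One way to wire pacstrap to that repository is a dedicated pacman.conf. This is a sketch: the Server line follows the repo link above, but the exact directory layout and section names are assumptions to confirm against the repo index, and SigLevel is relaxed only because these packages are unofficial and unsigned.

```shell
# Hypothetical pacman.conf for installing FelixOnMars' riscv64 packages
# into a target tree. Repo layout and section names are assumptions.
cat > riscv-pacman.conf <<'EOF'
[options]
Architecture = riscv64

[core]
SigLevel = Never
Server = https://archriscv.felixc.at/repo/$repo
EOF
if command -v pacstrap >/dev/null 2>&1; then
    sudo mkdir -p ./riscv-root
    sudo pacstrap -C riscv-pacman.conf -c ./riscv-root base
else
    echo "pacstrap not available; config written for reference"
fi
```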
<br />
* Cross-bootstrapping to build basic RISC-V system<br />
*# [https://github.com/felixonmars/archriscv-packages RISC-V PKGBUILD repository pkgs from FelixOnMars]<br />
*#* Experiment with cross-compiling pkgs from RISC-V PKGBUILDs<br />
*#* Test building cross-compiled pkgs from all PKGBUILDs<br />
*#* Fix PKGBUILDs for which building fails<br />
*#* Add and commit patches for RISC-V PKGBUILDs to local repo<br />
*#* Push to origin (personal gitlab repo)<br />
*#* Send merge request upstream<br />
*# [https://github.com/archlinux-riscv/archlinux-cross-bootstrap archlinux-cross-bootstrap]<br />
*#* Use a build tree separate from the git source tree (is this the default?)<br />
*#* Initially, test building all 4 stages '''without''' creating the build/.KEEP_GOING file (which would ignore build errors); read .MAKEPKGLOG in each build directory, and tee the output of the overall build process into a log file<br />
*#* Identify pkgs with build errors; debug and patch<br />
*#* Individually test rebuilding pkgs which failed with errors; correct patches<br />
*#* Add and commit patches to local git repository<br />
*#* Push to origin (personal gitlab repo)<br />
*#* Send merge request upstream<br />
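The tee-and-triage pass over a stage build can be sketched as below. The entry point name (./bootstrap.sh) is hypothetical -- substitute whatever archlinux-cross-bootstrap actually provides -- but the tee/grep pattern for capturing the overall log and listing failed per-package logs is generic.

```shell
# Illustrative log-capture pass over a stage build. ./bootstrap.sh is a
# placeholder for the repository's real entry point; the block is a
# no-op sketch when it is absent.
logfile="bootstrap-$(date +%Y%m%d-%H%M).log"
if [ -x ./bootstrap.sh ]; then
    ./bootstrap.sh 2>&1 | tee "$logfile"
else
    echo "no build entry point here; skipping" | tee "$logfile"
fi
# list per-package logs that recorded an error
grep -l -i 'error' build/*/.MAKEPKGLOG 2>/dev/null || echo "no failing package logs found"
```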
<br />
<br />
== Step-by-step "HOWTO" for porting ==<br />
<br />
== Detailed instructions and explanations ==<br />
<br />
== Issues addressed, with work-arounds ==<br />
<br />
== Bug report list -- Please make notes during triage, and move to above "Issues addressed" list when "solved" ==</div>
<hr />
<div>'''Working Document: Planning to Port Arch to RISC-V -- please edit :)'''<br />
<br />
Please edit this to your heart's content, without destroying prior work of value, or of historical or reference significance. I would like to construct a plan for porting Arch to the newly emerging RISC-V hardware.<br />
<br />
I am very much a novice to this. I've had success running ARM SBCs via [https://archlinuxarm.org/ ArchLinuxArm], but I've never ported anything :\<br />
<br />
== A simple RISC-V logo ==<br />
<br />
I quickly created a personal, non-professional, simple text logo for RISC-V. I don't have sysop upload privilege, so here's a link to view:<br />
<br />
https://drive.google.com/file/d/1WKZLl0G_BriPsmRQaMz6yGtIcjJ7pqmd/view?usp=sharing<br />
<br />
I license this logo under CC BY-SA 2.0. If it is of any interest, feel free to use it with attribution.<br />
<br />
== Target hardware ==<br />
<br />
# '''<u>α</u>''' -- [https://www.cnx-software.com/2020/11/09/xuantie-c906-based-allwinner-risc-v-processor-to-power-12-linux-sbcs/ Unnamed Allwinner single-core XuanTie C906 64-bit RISC-V (RV64GCV) processor] @ up to 1 GHz; 22nm manufacturing process. See also [http://linuxgizmos.com/risc-v-based-allwinner-chip-to-debut-on-13-linux-hacker-board/ this LinuxGizmos article].<br />
#* This first one is nice because it's cheap (US$12.50 IIUC), albeit underpowered compared to ARM SoC boards available at this time (November, 2020).<br />
# '''<u>β</u>''' -- RISC-V International Open Source (RIOS) Laboratory is collaborating with Imagination technologies to bring [https://www.cnx-software.com/2020/09/04/picorio-linux-risc-v-sbc-is-an-open-source-alternative-to-raspberry-pi-board/ PicoRio RISC-V SBC] to market at a price point similar to Raspberry Pi. See also [https://riscv.org/blog/2020/11/picorio-the-raspberry-pi-like-small-board-computer-for-risc-v/ this blog article from RISC-V International].<br />
#* This second one is also projected to cost close to the RPi.<br />
# '''<u>γ</u>''' -- BeagleBoard.org and Seeed unveiled an open-spec, $119-and-up [https://beagleboard.org/beaglev “BeagleV” SBC] with a StarFive JH7100 SoC with dual SiFive U74 RISC-V cores, 1-TOPS NPU, DSP, and VPU. The SBC ditches the Cape expansion for a Pi-like 40-pin GPIO. See also [http://linuxgizmos.com/beaglev-sbc-runs-linux-on-ai-enabled-risc-v-soc/ this LinuxGizmos article], and [https://www.extremetech.com/computing/319187-new-beagle-board-offers-dual-core-risc-v-targets-ai-applications this ExtremeTech article].<br />
#* Initial prices are given as US$119 and US$149, which is much more affordable than the original system mobo.<br />
<br />
(Any others? I don't plan on spending US$1,000 on a development system/motherboard.)<br />
<br />
== Basic thoughts and ideas for porting ==<br />
<br />
'''<u>Additional info 2020/11/14</u>:'''<br />
<br />
Thanks to [https://bbs.archlinux.org/profile.php?id=84187 eschwartz], here are links to resources produced by [https://bbs.archlinux.org/profile.php?id=47848 FelixOnMars], who has tackled some RISC-V porting in his spare time:<br />
<br />
* [https://github.com/felixonmars/archriscv-packages Arch RISC-V Packages]<br />
* [https://archriscv.felixc.at/repo/ Arch RISC-V Repo]<br />
<br />
There's a lot of finished work he's done -- good stuff :) and one shouldn't be spending most of one's efforts reinventing the wheel ;)<br />
<br />
----<br />
<br />
'''<u>Revised 2020/11/13</u>:'''<br />
<br />
In essence, what I'm looking to do is:<br />
<br />
# Build RISC-V pkgs for Arch.<br />
# Build bare-metal RISC-V Arch disk images that can potentially be booted on physical hardware. Of course, device tree/drivers will have to be individually customized for each mainboard/SBC. Getting a disk image that works in KVM/QEMU would scratch my itch of getting most of the work done. Someone with more bare-metal hardware/driver work may have to put the finishing touches on it.<br />
<br />
Hoping my understanding and itemized steps are more-or-less correct.... Methods to make porting progress:<br />
<br />
# So, if we want to build Arch RISC-V pkgs, we can simply cross-compile them. When a pkg won't build, we patch it until it does (assuming we can eventually get it working).<br />
#* Note that Vadim Kaushan (Disasm on github) has patched upstream [https://github.com/oaken-source/parabola-cross-bootstrap parabola-cross-bootstrap] from Andreas Grapentin (oaken-source on github) to create [https://github.com/archlinux-riscv/archlinux-cross-bootstrap archlinux-cross-bootstrap]. The last commit was January, 2019. Some things may have b0rken but it's worth trying (and patching) this to cross compile base-devel pkgs to RISC-V. Then we can perhaps extend its coverage to more repository pkgs.<br />
# We can unpack a RISC-V stage tarball (or image). Then we can chroot, or use something really fancy like systemd-nspawn, into that tree and use QEMU user-mode emulation to run RISC-V binaries.<br />
#* Question: Will this even work? Answer: Yes! Host needs extra, AUR pkgs [https://www.archlinux.org/packages/extra/x86_64/qemu-arch-extra/ qemu-arch-extra] and/or [https://aur.archlinux.org/packages/qemu-user-static/ qemu-user-static] (or [https://aur.archlinux.org/packages/qemu-user-static-bin/ qemu-user-static-bin]), and [https://aur.archlinux.org/packages/binfmt-qemu-static/ binfmt-qemu-static] (or [https://aur.archlinux.org/packages/binfmt-qemu-static-all-arch/ binfmt-qemu-static-all-arch]). Copy required binaries down into the chroot tree, and/or [[Binfmt_misc_for_Java#Registering_file_type_with_binfmt_misc|take care of binfmt_misc support]].<br />
#* The AUR pkg [https://aur.archlinux.org/packages/proot/ proot], [[PRoot|PRoot wiki page]], may be useful.<br />
#* Note that to run dynamically linked binaries the linker path must be passed; see [https://gist.github.com/Liryna/10710751 Running ARM Programs under Linux] (also listed below). Quote:<br />
#** If you want a dynamically-linked executable, you've to pass the linker path too:<br>arm-linux-gnueabihf-gcc -ohello hello.c<br>qemu-arm -L /usr/arm-linux-gnueabihf/ ./hello # or qemu-arm-static<br />
#* Some references:<br />
#** [[QEMU#Chrooting_into_arm/arm64_environment_from_x86_64|Chrooting into arm/arm64 environment from x86_64]]<br />
#** [https://ownyourbits.com/2018/06/13/transparently-running-binaries-from-any-architecture-in-linux-with-qemu-and-binfmt_misc/ Transparently running binaries from any architecture in Linux with QEMU and binfmt_misc]<br />
#** [https://unix.stackexchange.com/questions/41889/how-can-i-chroot-into-a-filesystem-with-a-different-architechture Stackexchange: Chroot into a filesystem with a different architecture]<br />
#** [https://dev.to/asacasa/how-to-set-up-binfmtmisc-for-qemu-the-hard-way-3bl4 How to set up binfmt_misc for qemu the hard way]<br />
#** [https://wiki.gentoo.org/wiki/Embedded_Handbook/General/Compiling_with_qemu_user_chroot Gentoo Embedded Handbook/Compiling with qemu user chroot]<br />
#** [https://wiki.debian.org/QemuUserEmulation Debian QEMU User Emulation]<br />
#** [https://gist.github.com/Liryna/10710751 Running ARM Programs under Linux]<br />
#** [[Creating_packages_for_other_distributions#Creating_Arch_packages_in_OBS_with_OSC|Creating Arch pkgs in openSUSE Open Build Service]]; hmmm, pkgs could be built automatically and remotely...<br />
#* I'm not an lxc/Docker/Kubernetes guy, but IIUC this would be roughly similar to running a RISC-V container (no?). This may come in handy to make sure binary packages work as expected, I suppose. But it doesn't seem like a better way to build RISC-V pkgs than cross-compiling, although we could build dev infrastructure then pkgs inside the container.<br />
# We can create a VM disk image (GPT, of course) and unpack inside it, e.g., [https://dev.gentoo.org/~dilfridge/stages/ a RISC-V Gentoo stage3] and complete installation and configuration. We also need to compile a RISC-V kernel and bbl (Berkeley Boot Loader from [https://github.com/riscv/riscv-pk the riscv-pk github project]) (some instructions [https://www.cnx-software.com/2018/03/16/how-to-run-linux-on-risc-v-with-qemu-emulator/ here] and [https://risc-v-getting-started-guide.readthedocs.io/en/latest/linux-qemu.html here]), or we may be able to download pre-built kernel/binaries/utilities. Then we can boot the image with KVM/QEMU.<br />
#* Also available are [https://wiki.debian.org/RISC-V Debian RISC-V], [https://fedoraproject.org/wiki/Architectures/RISC-V Fedora RISC-V] and [https://en.opensuse.org/openSUSE:RISC-V openSUSE RISC-V] images. In those wiki pages are some instructions for putting together bare-metal disk images.<br />
#* Also, there are ready-made QEMU images that one can download and run. See [https://wiki.debian.org/RISC-V#OS_.2F_filesystem_images Debian], [https://dl.fedoraproject.org/pub/alt/risc-v/repo/virt-builder-images/images/ Fedora], [http://fedora.riscv.rocks/koji/tasks?order=-completion_time&state=closed&view=flat&method=createAppliance additional Fedora], [https://en.altlinux.org/Regular/riscv64 AltLinux]. I guess this ready-made solution would be the ultimate in ease of use, but it doesn't solve the problem of building from scratch. But at least inside a running VM I could download and build dev infrastructure (pacman, et al) and then build pkgs as if I were running on bare metal. It might be only a little slower than running on a SBC?...<br />
<br />
So, in the end, the best solutions seem to be:<br />
* Cross-compiling using existing scripting/tooling to build Arch pkgs. [https://github.com/archlinux-riscv/archlinux-cross-bootstrap archlinux-cross-bootstrap] should build base-devel pkgs. Using its scripts with modified PKGBUILDs, it should be possible to build other repository pkgs.<br />
* To test cross-compiled pkgs, create a chroot to use via QEMU user-mode emulation (after setting up [https://aur.archlinux.org/packages/qemu-user-static/ qemu-user-static] and [[Binfmt_misc_for_Java#Registering_the_file_type_with_binfmt_misc|binfmt_misc]]; note [https://aur.archlinux.org/packages/proot/ proot]). Follow the usual installation instructions using pacstrap, etc., to install RISC-V pkgs inside the chroot, then complete installation and configuration.<br />
** Note that this creates a running RISC-V Arch system.... Tinker with it to your heart's content. Then you could make an archive (tar) snapshot of it and use this as the basis of a bare-metal VM image :)<br />
** However, since these cross-compiled pkgs will contain dynamically linked binaries, I'm not sure ATM how binfmt_misc will work to run them with the "-L" flag to reference the RISC-V linker path? Although, once inside the chroot will user-mode QEMU just use the .so libraries inside the chroot?....<br />
* For final image building, create a VM disk image, follow the usual installation instructions to install an Arch RISC-V system inside the image, configure it just like a bare-metal installation, and see if it works :)<br />
* When running the VM disk image like bare metal, bootloading to boot the kernel will need to work. Otherwise, the bootloader and kernel used to boot and run the VM will have to be exterior to the disk image. We want to create a turnkey disk image including bootloading....<br />
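Regarding the "-L" question above: when qemu-riscv64-static is registered through binfmt_misc with the "F" (fix-binary) flag, the emulator is preloaded at registration time and runs with its default path prefix, so a dynamically linked binary started inside the chroot is resolved against the chroot's own /lib/ld-linux-riscv64-lp64d.so.1 and .so libraries; "-L" should only be needed when invoking qemu-riscv64 by hand outside such a chroot. For reference, a registration string of the kind the binfmt-qemu-static pkgs install (a binfmt.d-style conf fragment; the handler name is conventional):<br />

```
# binfmt_misc registration for riscv64 ELF binaries (the same format is
# accepted by /proc/sys/fs/binfmt_misc/register and by conf files under
# /usr/lib/binfmt.d/). The magic/mask match the ELF header: 64-bit,
# little-endian, e_machine 0xf3 (RISC-V); the trailing F flag preloads
# the static interpreter so it keeps working inside chroots.
:qemu-riscv64:M::\x7fELF\x02\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\xf3\x00:\xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff:/usr/bin/qemu-riscv64-static:F
```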
<br />
----<br />
<br />
'''<u>Revised 2020/11/11</u>:'''<br />
<br />
Thanks to [https://bbs.archlinux.org/profile.php?id=36741 Awebb] for helping to distill disorganized ideas.<br />
<br />
Perhaps the best way to make progress on porting to RISC-V (mainboard and embedded/SBC) is simply to build Arch packages. The bootstrapping and embedded intricacies can be dealt with later. Plus, building packages can be done bit by bit when time is available. So....<br />
<br />
There are two ways to build packages, I believe:<br />
<br />
# Cross-compile using available [[#ArchTools|Arch RISC-V tooling]] (see below)<br />
# Install an existing RISC-V system in a KVM/QEMU VM<br />
<br />
Frankly, the VM method seems the easiest (and laziest) way. (Update: Probably not as easy/lazy as I was thinking.) There are [https://dev.gentoo.org/~dilfridge/stages/ Gentoo stage3 tarballs] so, assuming RISC-V support in KVM/QEMU is generally bug-free, setting up a RISC-V Gentoo VM should be a mechanical process. After this, Arch tools would be compiled inside the VM, and then base, base-devel and then core pkgs would be built. (For related information, see [https://wiki.gentoo.org/wiki/Raspberry_Pi the Gentoo 32-bit RaspberryPi page] and [https://wiki.gentoo.org/wiki/Raspberry_Pi_3_64_bit_Install the Gentoo 64-bit RaspberryPi page].)<br />
<br />
[[#RISC-V-QEMU|See links below]] for documentation and instructions.<br />
<br />
----<br />
<br />
'''<u>Original text</u>:'''<br />
<br />
According to the initial press release on the above '''<u>α</u>''' hardware target/SoC board, this "Allwinner RISC-V processor will run the Debian Linux operating system (Tina OS)." Provided that "Tina OS" licensing is "clean," one should be able to boot the supplied Debian OS and use that to develop an Arch bootstrap, or perhaps an entire port?<br />
<br />
IIUC, that should include the bootstrapping and device tree stuff along with device drivers for on-board equipment. (Drivers for optional devices and for devices on available modules/daughter cards will need to be developed, too.)<br />
<br />
After that, base and base-devel packages need to be built. Then building packages can be done on bare metal. However, it would obviously be faster and easier to build them using the available RISC-V toolchain in Arch. Right? (I wonder how [https://archlinuxarm.org/ ArchLinuxArm] handles building packages for and maintaining their repositories, as well as their AUR? Note to self: Research ALARM infrastructure....)<br />
<br />
Topics to be researched and written up or references pointed to:<br />
<br />
* HOWTO bootstrap a new port from ground zero -- See references below<br />
* How the bootstrapping process is booted/initiated<br />
* How device trees work<br />
* How device drivers work in bootstrapping, e.g., via initrd image<br />
* Cross-building Arch packages for RISC-V<br />
* Infra for a new port's repositories (CB/CI tools, repository hosting, front end w/hosting, etc.)<br />
* Infra for a new port's AUR<br />
* Possibility of free/open (non-binary blob) bootstrapping image/BIOS (similar to [https://www.coreboot.org/ coreboot])<br />
<br />
== Necessary tools for porting ==<br />
<br />
=== <span id="RISC-V-QEMU"></span>KVM/QEMU RISC-V emulation documentation, information and instructions ===<br />
<br />
[https://www.archlinux.org/packages/extra/x86_64/qemu-arch-extra/ qemu-arch-extra pkg] required for RISC-V (and other architecture) full-system emulation<br />
<br />
[https://aur.archlinux.org/packages/qemu-user-static/ qemu-user-static] (or <br />
[https://aur.archlinux.org/packages/qemu-user-static-bin/ qemu-user-static-bin]) for running QEMU user-mode emulation inside a chroot; [https://aur.archlinux.org/packages/binfmt-qemu-static/ binfmt-qemu-static] (or [https://aur.archlinux.org/packages/binfmt-qemu-static-all-arch/ binfmt-qemu-static-all-arch]) to register the binfmt_misc handlers for them<br />
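Whether a handler from these pkgs is actually active can be checked through the binfmt_misc filesystem; a small sketch ("qemu-riscv64" is the handler name conventionally registered for this architecture, so adjust if yours differs):<br />

```shell
# Report the state of the riscv64 user-mode handler. The first line of
# a registered handler's /proc entry reads "enabled" or "disabled".
binfmt=/proc/sys/fs/binfmt_misc/qemu-riscv64
if [ -r "$binfmt" ]; then
    status=$(head -n1 "$binfmt")
else
    status="not registered"
fi
echo "qemu-riscv64: $status"
```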
<br />
[[QEMU#Installation|Arch QEMU pkgs and variants from repository and AUR]]<br />
<br />
[https://risc-v-getting-started-guide.readthedocs.io/en/latest/linux-qemu.html RISC-V Linux on QEMU Getting Started Guide]<br />
<br />
[https://wiki.qemu.org/Documentation/Platforms/RISCV QEMU RISC-V Documentation]<br />
<br />
(See also: [https://www.sifive.com/blog/risc-v-qemu-part-2-the-risc-v-qemu-port-is-upstream SiFive RISC-V QEMU Upstream Announcement])<br />
<br />
[https://www.cnx-software.com/2018/03/16/how-to-run-linux-on-risc-v-with-qemu-emulator/ How To Run Linux on RISC-V with QEMU Emulator]<br />
<br />
=== <span id="ArchTools"></span>Tools available under Arch ===<br />
<br />
==== Tools from repositories ====<br />
<br />
[https://www.archlinux.org/groups/x86_64/risc-v/ RISC-V group]<br />
<br />
[https://www.archlinux.org/packages/?sort=&q=riscv&maintainer=&flagged= Results of "riscv" keyword search]<br />
<br />
==== Tools from AUR ====<br />
<br />
[https://aur.archlinux.org/packages/?K=RISCV&SB=p Results of AUR "RISCV" keyword search]<br />
<br />
[https://aur.archlinux.org/packages/?O=0&SeB=nd&K=risc-v&outdated=&SB=n&SO=a&PP=50&do_Search=Go Results of AUR "risc-v" keyword search]<br />
<br />
=== Tools from other dists ===<br />
<br />
[https://dev.gentoo.org/~dilfridge/stages/ Experimental Gentoo stages]<br />
<br />
[https://wiki.debian.org/RISC-V#OS_.2F_filesystem_images Debian images]<br />
<br />
[https://dl.fedoraproject.org/pub/alt/risc-v/repo/virt-builder-images/images/ Fedora images]<br />
<br />
[http://fedora.riscv.rocks/koji/tasks?order=-completion_time&state=closed&view=flat&method=createAppliance Additional Fedora images]<br />
<br />
[https://en.altlinux.org/Regular/riscv64 AltLinux RISC-V64 port]<br />
<br />
=== Non-Arch tools ===<br />
<br />
[http://crosstool-ng.github.io/ Crosstool-ng]<br />
<br />
[https://buildroot.org/ Buildroot]<br />
<br />
[https://trac.clfs.org/ Cross-Linux From Scratch] (this project seems to be dormant...)<br />
<br />
== As-is experimental RISC-V pkgs from work done by FelixOnMars ==<br />
<br />
[https://github.com/felixonmars/archriscv-packages Arch RISC-V Packages]<br />
<br />
[https://archriscv.felixc.at/repo/ Arch RISC-V Repo]<br />
<br />
== References -- '''PLEASE''' add to and curate this list! ==<br />
<br />
[https://readthedocs.org/projects/risc-v-getting-started-guide/downloads/pdf/latest/ RISC-V Getting Started Guide]<br />
<br />
https://github.com/archlinux-riscv/archlinux-cross-bootstrap<br />
<br />
https://bbs.archlinux.org/viewtopic.php?id=237370<br />
<br />
https://bbs.archlinux.org/viewtopic.php?id=260639<br />
<br />
https://five-embeddev.com/toolchain/2019/06/26/gcc-targets/<br />
<br />
https://wiki.qemu.org/Documentation/Platforms/RISCV<br />
<br />
https://wiki.gentoo.org/wiki/Project:RISC-V<br />
<br />
https://wiki.gentoo.org/wiki/Raspberry_Pi (for embedded build and cross-compiling info)<br />
<br />
https://wiki.gentoo.org/wiki/Raspberry_Pi_3_64_bit_Install<br />
<br />
https://wiki.archlinux.org/index.php/Cross-compiling_tools_package_guidelines<br />
<br />
https://github.com/crosstool-ng/crosstool-ng<br />
<br />
https://git.busybox.net/buildroot<br />
<br />
https://github.com/cross-lfs<br />
<br />
== Experimental development: Procedures and steps used ==<br />
<br />
* Initial setup, testing for further work<br />
*# Git repository setup -- Push commits to gitlab repos, mirror gitlab to forked github repos<br />
*#* Clone RISC-V repositories on github to local repos<br />
*#* Fork RISC-V repositories on github<br />
*#* Create gitlab repositories<br />
*#* Rename github origins for cloned repos, then add gitlab repos as origins<br />
*#* Push local repos to gitlab<br />
*#* Configure gitlab repos to mirror to github forks using a personal access token generated on github<br />
*# Test running x86_64/amd64 installations via chroot (or systemd-nspawn)<br />
*#* Gentoo:<br />
*#** Create a subdirectory for the chroot installation; untar Gentoo amd64 stage3 tarball into subdirectory {{bc|$ cd ./subdir/ && sudo tar xpvf stage3-*.tar.xz --xattrs-include<nowiki>=</nowiki>'*.*' --numeric-owner && cd ..}}<br />
*#** Configure and emerge:<br />
*#**# Configure Gentoo compilation options in {{ic|./subdir/etc/portage/make.conf}}; copy DNS info into {{ic|./subdir/etc/resolv.conf}}<br />
*#**# Enter chroot environment {{bc|$ sudo arch-chroot ./subdir/}}<br />
*#**# {{bc|# source /etc/profile && source $HOME/.bashrc && export PS1<nowiki>=</nowiki>"(chroot) $<nowiki>{</nowiki>PS1<nowiki>}</nowiki>"}}<br />
*#**# {{bc|(chroot) # emerge --sync}}<br />
*#**# Double-check system profile (for amd64 systemd) is correct {{bc|(chroot) # eselect profile list}}<br />
*#**# {{bc|(chroot) # emerge @world}}<br />
*#**# Configure {{ic|/etc/timezone}}<br />
*#**# {{bc|(chroot) # emerge --config sys-libs/timezone-data}}<br />
*#**# Configure {{ic|/etc/locale.gen}}<br />
*#**# {{bc|(chroot) # locale-gen}}<br />
*#**# Configure {{ic|/etc/env.d/02locale}}<br />
*#**# {{bc|(chroot) # env-update && source /etc/profile && export PS1<nowiki>=</nowiki>"(chroot) $<nowiki>{</nowiki>PS1<nowiki>}</nowiki>"}}<br />
*#**# Configure {{ic|/etc/conf.d/hostname}}<br />
*#**# Exit chroot environment {{bc|(chroot) # exit}}<br />
*#** Chroot (or systemd-nspawn) into subdirectory and test normal administrative operations, e.g., {{bc|(chroot) # emerge --sync}} {{bc|(chroot) # emerge @world}}<br />
*#* Arch:<br />
*#** Install base, base-devel {{bc|$ sudo pacstrap -c ./subdir/ base base-devel}}<br />
<br />
* RISC-V binary tests and demos<br />
*# Test compiling, linking RISC-V (riscv64-lp64d) C programs<br />
*#* Install [https://www.archlinux.org/packages/extra/x86_64/qemu-arch-extra/ qemu-arch-extra], [https://aur.archlinux.org/packages/qemu-user-static-bin/ qemu-user-static-bin], [https://aur.archlinux.org/packages/binfmt-qemu-static-all-arch/ binfmt-qemu-static-all-arch]<br />
*#* Write hello_world.c, other test programs<br />
*#* Compile: {{bc|$ riscv64-linux-gnu-gcc -c -Wall -o hello_world-riscv64.o hello_world.c}}<br />
*#* Link: {{bc|$ riscv64-linux-gnu-gcc -o hello_world-riscv64 hello_world-riscv64.o}}<br />
*#* Run: {{bc|$ qemu-riscv64 -L /usr/riscv64-linux-gnu/ ./hello_world-riscv64}}<br />
*#* Any additional tests, experiments<br />
*# [https://archriscv.felixc.at/repo/ Repository of pre-built RISC-V pkgs from FelixOnMars]<br />
*#* Configure binfmt-qemu-static to run RISC-V binaries as native programs<br />
*#* Following the Arch installation guide, use pacstrap to install base pkgs into risc-v subtree<br />
*#* Run arch-chroot to run subtree as a RISC-V container<br />
*#* Test running RISC-V binaries from coreutils<br />
*#* Test running other binaries, and installing other RISC-V pkgs from this repository<br />
*#* Test using systemd-nspawn to run subtree as a RISC-V container<br />
<br />
* Cross-bootstrapping to build basic RISC-V system<br />
*# [https://github.com/felixonmars/archriscv-packages RISC-V PKGBUILD repository pkgs from FelixOnMars]<br />
*#* Experiment with cross-compiling pkgs from RISC-V PKGBUILDs<br />
*#* Test building cross-compiled pkgs from all PKGBUILDs<br />
*#* Fix PKGBUILDs for which building fails<br />
*#* Add and commit patches for RISC-V PKGBUILDs to local repo<br />
*#* Push to origin (personal gitlab repo)<br />
*#* Send merge request upstream<br />
*# [https://github.com/archlinux-riscv/archlinux-cross-bootstrap archlinux-cross-bootstrap]<br />
*#* Use a build tree separate from the git source tree (is this the default?)<br />
*#* Initially, test build of all 4 stages '''without''' creating the build/.KEEP_GOING file that ignores build errors; read .MAKEPKGLOG in each build directory, and tee the output of the overall build process into a log file<br />
*#* Identify pkgs with build errors; debug and patch<br />
*#* Individually test rebuilding pkgs which failed with errors; correct patches<br />
*#* Add and commit patches to local git repository<br />
*#* Push to origin (personal gitlab repo)<br />
*#* Send merge request upstream<br />
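The log-reading step above can be scripted: makepkg prefixes build failures with "==> ERROR", so a small helper (the function name and the one-build-directory-per-pkg layout are assumptions) lists which directories need patching after a full run:<br />

```shell
# List build directories whose .MAKEPKGLOG records a makepkg failure;
# "==> ERROR" is the prefix makepkg uses for fatal errors. Pass the
# directory that holds the per-pkg build directories.
scan_build_logs() {
    for log in "$1"/*/.MAKEPKGLOG; do
        [ -f "$log" ] || continue
        if grep -q '==> ERROR' "$log"; then
            dirname "$log"
        fi
    done
}
```

For example, `scan_build_logs build/stage1` after a stage-1 run (the stage path is hypothetical).<br />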
<br />
<br />
== Step-by-step "HOWTO" for porting ==<br />
<br />
== Detailed instructions and explanations ==<br />
<br />
== Issues addressed, with work-arounds ==<br />
<br />
== Bug report list -- Please make notes during triage, and move to above "Issues addressed" list when "solved" ==</div>Cmsiglerhttps://wiki.archlinux.org/index.php?title=User:Cmsigler/RISC-V&diff=650458User:Cmsigler/RISC-V2021-02-02T00:04:08Z<p>Cmsigler: Add new, recently announced target hardware; add example command to run RISC-V binary</p>
<hr />
<div>'''Working Document: Planning to Port Arch to RISC-V -- please edit :)'''<br />
<br />
Please edit this to your heart's content, without destroying prior work of value, or of historical or reference significance. I would like to construct a plan for porting Arch to the newly emerging RISC-V hardware.<br />
<br />
I am very much a novice to this. I've had success running ARM SBCs via [https://archlinuxarm.org/ ArchLinuxArm], but I've never ported anything :\<br />
<br />
== A simple RISC-V logo ==<br />
<br />
I quickly created a personal, non-professional, simple text logo for RISC-V. I don't have sysop upload privilege, so here's a link to view:<br />
<br />
https://drive.google.com/file/d/1WKZLl0G_BriPsmRQaMz6yGtIcjJ7pqmd/view?usp=sharing<br />
<br />
I license this logo under CC BY-SA 2.0. If it is of any interest, feel free to use it with attribution.<br />
<br />
== Target hardware ==<br />
<br />
# '''<u>α</u>''' -- [https://www.cnx-software.com/2020/11/09/xuantie-c906-based-allwinner-risc-v-processor-to-power-12-linux-sbcs/ Unnamed Allwinner single-core XuanTie C906 64-bit RISC-V (RV64GCV) processor] @ up to 1 GHz; 22nm manufacturing process. See also [http://linuxgizmos.com/risc-v-based-allwinner-chip-to-debut-on-13-linux-hacker-board/ this LinuxGizmos article].<br />
#* This first one is nice because it's cheap (US$12.50 IIUC), albeit underpowered compared to ARM SoC boards available at this time (November, 2020).<br />
# '''<u>β</u>''' -- RISC-V International Open Source (RIOS) Laboratory is collaborating with Imagination Technologies to bring [https://www.cnx-software.com/2020/09/04/picorio-linux-risc-v-sbc-is-an-open-source-alternative-to-raspberry-pi-board/ PicoRio RISC-V SBC] to market at a price point similar to Raspberry Pi. See also [https://riscv.org/blog/2020/11/picorio-the-raspberry-pi-like-small-board-computer-for-risc-v/ this blog article from RISC-V International].<br />
#* This second one is also projected to cost close to the RPi.<br />
# '''<u>γ</u>''' -- BeagleBoard.org and Seeed unveiled an open-spec, $119-and-up [https://beagleboard.org/beaglev “BeagleV” SBC] built around a StarFive JH7100 SoC with dual SiFive U74 RISC-V cores, a 1-TOPS NPU, a DSP, and a VPU. The SBC ditches the Cape expansion for a Pi-like 40-pin GPIO. See also [http://linuxgizmos.com/beaglev-sbc-runs-linux-on-ai-enabled-risc-v-soc/ this LinuxGizmos article], and [https://www.extremetech.com/computing/319187-new-beagle-board-offers-dual-core-risc-v-targets-ai-applications this ExtremeTech article].<br />
#* Initial prices are given as US$119 and US$149, which is much more affordable than the original system mobo.<br />
<br />
(Any others? I don't plan on spending US$1,000 on a development system/motherboard.)<br />
<br />
== Basic thoughts and ideas for porting ==<br />
<br />
'''<u>Additional info 2020/11/14</u>:'''<br />
<br />
Thanks to [https://bbs.archlinux.org/profile.php?id=84187 eschwartz], here are links to resources produced by [https://bbs.archlinux.org/profile.php?id=47848 FelixOnMars], who has tackled some RISC-V porting in his spare time:<br />
<br />
* [https://github.com/felixonmars/archriscv-packages Arch RISC-V Packages]<br />
* [https://archriscv.felixc.at/repo/ Arch RISC-V Repo]<br />
<br />
There's a lot of finished work he's done -- good stuff :) and one shouldn't be spending most of one's efforts reinventing the wheel ;)<br />
<br />
----<br />
<br />
'''<u>Revised 2020/11/13</u>:'''<br />
<br />
In essence, what I'm looking to do is:<br />
<br />
# Build RISC-V pkgs for Arch.<br />
# Build bare-metal RISC-V Arch disk images that can potentially be booted on physical hardware. Of course, device tree/drivers will have to be individually customized for each mainboard/SBC. Getting a disk image that works in KVM/QEMU would scratch my itch of getting most of the work done. Someone with more bare-metal hardware/driver work may have to put the finishing touches on it.<br />
<br />
Hoping my understanding and itemized steps are more-or-less correct.... Methods to make porting progress:<br />
<br />
# So, if we want to build Arch RISC-V pkgs, we can simply cross-compile them. When a pkg won't build, we patch it until it does (assuming we can eventually get it working).<br />
#* Note that Vadim Kaushan (Disasm on github) has patched upstream [https://github.com/oaken-source/parabola-cross-bootstrap parabola-cross-bootstrap] from Andreas Grapentin (oaken-source on github) to create [https://github.com/archlinux-riscv/archlinux-cross-bootstrap archlinux-cross-bootstrap]. The last commit was January, 2019. Some things may have broken since then, but it's worth trying (and patching) this to cross-compile base-devel pkgs to RISC-V. Then we can perhaps extend its coverage to more repository pkgs.<br />
# We can unpack a RISC-V stage tarball (or image). Then we can chroot into that tree (or use something really fancy like systemd-nspawn) and use QEMU user-mode emulation to run RISC-V binaries.<br />
#* Question: Will this even work? Answer: Yes! The host needs, from extra and the AUR, the pkgs [https://www.archlinux.org/packages/extra/x86_64/qemu-arch-extra/ qemu-arch-extra] and/or [https://aur.archlinux.org/packages/qemu-user-static/ qemu-user-static] (or [https://aur.archlinux.org/packages/qemu-user-static-bin/ qemu-user-static-bin]), and [https://aur.archlinux.org/packages/binfmt-qemu-static/ binfmt-qemu-static] (or [https://aur.archlinux.org/packages/binfmt-qemu-static-all-arch/ binfmt-qemu-static-all-arch]). Copy required binaries down into the chroot tree, and/or [[Binfmt_misc_for_Java#Registering_file_type_with_binfmt_misc|take care of binfmt_misc support]].<br />
#* The AUR pkg [https://aur.archlinux.org/packages/proot/ proot], [[PRoot|PRoot wiki page]], may be useful.<br />
#* Note that to run dynamically linked binaries the linker path must be passed; see [https://gist.github.com/Liryna/10710751 Running ARM Programs under Linux] (also listed below). Quote:<br />
#** If you want a dynamically-linked executable, you've to pass the linker path too:<br>arm-linux-gnueabihf-gcc -ohello hello.c<br>qemu-arm -L /usr/arm-linux-gnueabihf/ ./hello # or qemu-arm-static<br />
#* Some references:<br />
#** [[QEMU#Chrooting_into_arm/arm64_environment_from_x86_64|Chrooting into arm/arm64 environment from x86_64]]<br />
#** [https://ownyourbits.com/2018/06/13/transparently-running-binaries-from-any-architecture-in-linux-with-qemu-and-binfmt_misc/ Transparently running binaries from any architecture in Linux with QEMU and binfmt_misc]<br />
#** [https://unix.stackexchange.com/questions/41889/how-can-i-chroot-into-a-filesystem-with-a-different-architechture Stackexchange: Chroot into a filesystem with a different architecture]<br />
#** [https://dev.to/asacasa/how-to-set-up-binfmtmisc-for-qemu-the-hard-way-3bl4 How to set up binfmt_misc for qemu the hard way]<br />
#** [https://wiki.gentoo.org/wiki/Embedded_Handbook/General/Compiling_with_qemu_user_chroot Gentoo Embedded Handbook/Compiling with qemu user chroot]<br />
#** [https://wiki.debian.org/QemuUserEmulation Debian QEMU User Emulation]<br />
#** [https://gist.github.com/Liryna/10710751 Running ARM Programs under Linux]<br />
#** [[Creating_packages_for_other_distributions#Creating_Arch_packages_in_OBS_with_OSC|Creating Arch pkgs in openSUSE Open Build Service]]; hmmm, pkgs could be built automatically and remotely...<br />
#* I'm not an lxc/Docker/Kubernetes guy, but IIUC this would be roughly similar to running a RISC-V container (no?). This may come in handy to make sure binary packages work as expected, I suppose. But it doesn't seem like a better way to build RISC-V pkgs than cross-compiling, although we could build dev infrastructure then pkgs inside the container.<br />
# We can create a VM disk image (GPT, of course) and unpack inside it, e.g., [https://dev.gentoo.org/~dilfridge/stages/ a RISC-V Gentoo stage3] and complete installation and configuration. We also need to compile a RISC-V kernel and bbl (Berkeley Boot Loader from [https://github.com/riscv/riscv-pk the riscv-pk github project]) (some instructions [https://www.cnx-software.com/2018/03/16/how-to-run-linux-on-risc-v-with-qemu-emulator/ here] and [https://risc-v-getting-started-guide.readthedocs.io/en/latest/linux-qemu.html here]), or we may be able to download pre-built kernel/binaries/utilities. Then we can boot the image with KVM/QEMU.<br />
#* Also available are [https://wiki.debian.org/RISC-V Debian RISC-V], [https://fedoraproject.org/wiki/Architectures/RISC-V Fedora RISC-V] and [https://en.opensuse.org/openSUSE:RISC-V openSUSE RISC-V] images. In those wiki pages are some instructions for putting together bare-metal disk images.<br />
#* Also, there are ready-made QEMU images that one can download and run. See [https://wiki.debian.org/RISC-V#OS_.2F_filesystem_images Debian], [https://dl.fedoraproject.org/pub/alt/risc-v/repo/virt-builder-images/images/ Fedora], [http://fedora.riscv.rocks/koji/tasks?order=-completion_time&state=closed&view=flat&method=createAppliance additional Fedora], [https://en.altlinux.org/Regular/riscv64 AltLinux]. I guess this ready-made solution would be the ultimate in ease of use, but it doesn't solve the problem of building from scratch. But at least inside a running VM I could download and build dev infrastructure (pacman, et al) and then build pkgs as if I were running on bare metal. It might be only a little slower than running on a SBC?...<br />
<br />
So, in the end, the best solutions seem to be:<br />
* Cross-compiling using existing scripting/tooling to build Arch pkgs. [https://github.com/archlinux-riscv/archlinux-cross-bootstrap archlinux-cross-bootstrap] should build base-devel pkgs. Using its scripts with modified PKGBUILDs, it should be possible to build other repository pkgs.<br />
* To test cross-compiled pkgs, create a chroot to use via QEMU user-mode emulation (after setting up [https://aur.archlinux.org/packages/qemu-user-static/ qemu-user-static] and [[Binfmt_misc_for_Java#Registering_the_file_type_with_binfmt_misc|binfmt_misc]]; note [https://aur.archlinux.org/packages/proot/ proot]). Follow the usual installation instructions using pacstrap, etc., to install RISC-V pkgs inside the chroot, then complete installation and configuration.<br />
** Note that this creates a running RISC-V Arch system.... Tinker with it to your heart's content. Then you could make an archive (tar) snapshot of it and use this as the basis of a bare-metal VM image :)<br />
** However, since these cross-compiled pkgs will contain dynamically linked binaries, I'm not sure ATM how binfmt_misc will work to run them with the "-L" flag to reference the RISC-V linker path? Although, once inside the chroot will user-mode QEMU just use the .so libraries inside the chroot?....<br />
* For final image building, create a VM disk image, follow the usual installation instructions to install an Arch RISC-V system inside the image, configure it just like a bare-metal installation, and see if it works :)<br />
* When running the VM disk image like bare metal, bootloading to boot the kernel will need to work. Otherwise, the bootloader and kernel used to boot and run the VM will have to be exterior to the disk image. We want to create a turnkey disk image including bootloading....<br />
<br />
----<br />
<br />
'''<u>Revised 2020/11/11</u>:'''<br />
<br />
Thanks to [https://bbs.archlinux.org/profile.php?id=36741 Awebb] for helping to distill disorganized ideas.<br />
<br />
Perhaps the best way to make progress on porting to RISC-V (mainboard and embedded/SBC) is simply to build Arch packages. The bootstrapping and embedded intricacies can be dealt with later. Plus, building packages can be done bit by by when time is available. So....<br />
<br />
There are two ways to build packages, I believe:<br />
<br />
# Cross-compile using available [[#ArchTools|Arch RISC-V tooling]] (see below)<br />
# Install an existing RISC-V system in a KVM/QEMU VM<br />
<br />
Frankly, the VM method seems the easiest (and laziest) way. (Update: Probably not as easy/lazy as I was thinking.) There are [https://dev.gentoo.org/~dilfridge/stages/ Gentoo stage3 tarballs] so, assuming RISC-V support in KVM/QEMU is generally bug-free, setting up a RISC-V Gentoo VM should be a mechanical process. After this, Arch tools would be compiled inside the VM, and then base, base-devel and then core pkgs would be built. (For related information, see [https://wiki.gentoo.org/wiki/Raspberry_Pi the Gentoo 32-bit RaspberryPi page] and [https://wiki.gentoo.org/wiki/Raspberry_Pi_3_64_bit_Install the Gentoo 64-bit RaspberryPi page].)<br />
<br />
[[#RISC-V-QEMU|See links below]] for documentation and instructions.<br />
<br />
----<br />
<br />
'''<u>Original text</u>:'''<br />
<br />
According to the initial press release on the above '''<u>α</u>''' hardware target/SoC board, this "Allwinner RISC-V processor will run the Debian Linux operating system (Tina OS)." Provided that "Tina OS" licensing is "clean," one should be able to boot the supplied Debian OS and use that to develop an Arch bootstrap, or perhaps an entire port?<br />
<br />
IIUC, that should include the bootstrapping and device tree stuff along with device drivers for on-board equipment. (Drivers for optional devices and for devices on available modules/daughter cards will need to be developed, too.)<br />
<br />
After that, base and base-devel packages need to be built. Then building packages can be done on bare metal. However, it would obviously be faster and easier to build them using the available RISC-V toolchain in Arch. Right? (I wonder how [https://archlinuxarm.org/ ArchLinuxArm] handles building packages for and maintaining their repositories, as well as their AUR? Note to self: Research ALARM infrastructure....)<br />
<br />
Topics to be researched and written up or references pointed to:<br />
<br />
* HOWTO bootstrap a new port from ground zero -- See references below<br />
* How the bootstrapping process is booted/initiated<br />
* How device trees work<br />
* How device drivers work in bootstrapping, e.g., via initrd image<br />
* Cross-building Arch packages for RISC-V<br />
* Infra for a new port's repositories (CB/CI tools, repository hosting, front end w/hosting, etc.)<br />
* Infra for a new port's AUR<br />
* Possibility of free/open (non-binary blob) bootstrapping image/BIOS (similar to [https://www.coreboot.org/ coreboot])<br />
<br />
== Necessary tools for porting ==<br />
<br />
=== <span id="RISC-V-QEMU"></span>KVM/QEMU RISC-V emulation documentation, information and instructions ===<br />
<br />
[https://www.archlinux.org/packages/extra/x86_64/qemu-arch-extra/ qemu-arch-extra pkg] required for RISC-V (and other architecture) full-system emulation<br />
<br />
[https://aur.archlinux.org/packages/qemu-user-static/ qemu-user-static] (or <br />
[https://aur.archlinux.org/packages/qemu-user-static-bin/ qemu-user-static-bin]) for running QEMU user-mode emulation inside a chroot; [https://aur.archlinux.org/packages/binfmt-qemu-static/ binfmt-qemu-static] (or [https://aur.archlinux.org/packages/binfmt-qemu-static-all-arch/ binfmt-qemu-static-all-arch]) for these<br />
<br />
[[QEMU#Installation|Arch QEMU pkgs and variants from repository and AUR]]<br />
<br />
[https://risc-v-getting-started-guide.readthedocs.io/en/latest/linux-qemu.html RISC-V Linux on QEMU Getting Started Guide]<br />
<br />
[https://wiki.qemu.org/Documentation/Platforms/RISCV QEMU RISC-V Documentation]<br />
<br />
(See also: [https://www.sifive.com/blog/risc-v-qemu-part-2-the-risc-v-qemu-port-is-upstream SiFive RISC-V QEMU Upstream Announcement])<br />
<br />
[https://www.cnx-software.com/2018/03/16/how-to-run-linux-on-risc-v-with-qemu-emulator/ How To Run Linux on RISC-V with QEMU Emulator]<br />
<br />
=== <span id="ArchTools"></span>Tools available under Arch ===<br />
<br />
==== Tools from repositories ====<br />
<br />
[https://www.archlinux.org/groups/x86_64/risc-v/ RISC-V group]<br />
<br />
[https://www.archlinux.org/packages/?sort=&q=riscv&maintainer=&flagged= Results of "riscv" keyword search]<br />
<br />
==== Tools from AUR ====<br />
<br />
[https://aur.archlinux.org/packages/?K=RISCV&SB=p Results of AUR "RISCV" keyword search]<br />
<br />
[https://aur.archlinux.org/packages/?O=0&SeB=nd&K=risc-v&outdated=&SB=n&SO=a&PP=50&do_Search=Go Results of AUR "risc-v" keyword search]<br />
<br />
=== Tools from other dists ===<br />
<br />
[https://dev.gentoo.org/~dilfridge/stages/ Experimental Gentoo stages]<br />
<br />
[https://wiki.debian.org/RISC-V#OS_.2F_filesystem_images Debian images]<br />
<br />
[https://dl.fedoraproject.org/pub/alt/risc-v/repo/virt-builder-images/images/ Fedora images]<br />
<br />
[http://fedora.riscv.rocks/koji/tasks?order=-completion_time&state=closed&view=flat&method=createAppliance Additional Fedora images]<br />
<br />
[https://en.altlinux.org/Regular/riscv64 AltLinux RISC-V64 port]<br />
<br />
=== Non-Arch tools ===<br />
<br />
[http://crosstool-ng.github.io/ Crosstool-ng]<br />
<br />
[https://buildroot.org/ Buildroot]<br />
<br />
[https://trac.clfs.org/ Cross-Linux From Scratch] (this project seems to be dormant...)<br />
<br />
== As-is experimental RISC-V pkgs from work done by FelixOnMars ==<br />
<br />
[https://github.com/felixonmars/archriscv-packages Arch RISC-V Packages]<br />
<br />
[https://archriscv.felixc.at/repo/ Arch RISC-V Repo]<br />
<br />
== References -- '''PLEASE''' add to and curate this list! ==<br />
<br />
[https://readthedocs.org/projects/risc-v-getting-started-guide/downloads/pdf/latest/ RISC-V Getting Started Guide]<br />
<br />
https://github.com/archlinux-riscv/archlinux-cross-bootstrap<br />
<br />
https://bbs.archlinux.org/viewtopic.php?id=237370<br />
<br />
https://bbs.archlinux.org/viewtopic.php?id=260639<br />
<br />
https://five-embeddev.com/toolchain/2019/06/26/gcc-targets/<br />
<br />
https://wiki.qemu.org/Documentation/Platforms/RISCV<br />
<br />
https://wiki.gentoo.org/wiki/Project:RISC-V<br />
<br />
https://wiki.gentoo.org/wiki/Raspberry_Pi (for embedded build and cross-compiling info)<br />
<br />
https://wiki.gentoo.org/wiki/Raspberry_Pi_3_64_bit_Install<br />
<br />
https://wiki.archlinux.org/index.php/Cross-compiling_tools_package_guidelines<br />
<br />
https://github.com/crosstool-ng/crosstool-ng<br />
<br />
https://git.busybox.net/buildroot<br />
<br />
https://github.com/cross-lfs<br />
<br />
== Experimental development: Procedures and steps used ==<br />
<br />
* Initial setup for further work<br />
*# Git repository setup -- Push commits to gitlab repos, mirror gitlab to forked github repos<br />
*#* Clone RISC-V repositories on github to local repos<br />
*#* Fork RISC-V repositories on github<br />
*#* Create gitlab repositories<br />
*#* Rename github origins for cloned repos, then add gitlab repos as origins<br />
*#* Push local repos to gitlab<br />
*#* Configure gitlab repos to mirror to github forks using a personal access token generated on github<br />
<br />
* RISC-V binary tests and demos<br />
*# Test compiling, linking RISC-V (riscv64-lp64d) C programs<br />
*#* Install [https://www.archlinux.org/packages/extra/x86_64/qemu-arch-extra/ qemu-arch-extra], [https://aur.archlinux.org/packages/qemu-user-static-bin/ qemu-user-static-bin], [https://aur.archlinux.org/packages/binfmt-qemu-static-all-arch/ binfmt-qemu-static-all-arch]<br />
*#* Write hello_world.c, other test programs<br />
*#* Compile: {{bc|$ riscv64-linux-gnu-gcc -c -Wall -o hello_world-riscv64.o hello_world.c}}<br />
*#* Link: {{bc|$ riscv64-linux-gnu-gcc -o hello_world-riscv64 hello_world-riscv64.o}}<br />
*#* Run: {{bc|$ qemu-riscv64 -L /usr/riscv64-linux-gnu/ ./hello_world-riscv64}}<br />
*#* Any additional tests, experiments<br />
*# [https://archriscv.felixc.at/repo/ Repository of pre-built RISC-V pkgs from FelixOnMars]<br />
*#* Configure binfmt-qemu-static to run RISC-V binaries as native programs<br />
*#* Following the Arch installation guide, use pacstrap to install base pkgs into the RISC-V subtree<br />
*#* Use arch-chroot to enter the subtree as a RISC-V chroot<br />
*#* Test running RISC-V binaries from coreutils<br />
*#* Test running other binaries, and installing other RISC-V pkgs from this repository<br />
*#* Test using systemd-nspawn to run subtree as a RISC-V container<br />
<br />
* Cross-bootstrapping to build basic RISC-V system<br />
*# [https://github.com/felixonmars/archriscv-packages RISC-V PKGBUILD repository pkgs from FelixOnMars]<br />
*#* Experiment with cross-compiling pkgs from RISC-V PKGBUILDs<br />
*#* Test building cross-compiled pkgs from all PKGBUILDs<br />
*#* Fix PKGBUILDs for which building fails<br />
*#* Add and commit patches for RISC-V PKGBUILDs to local repo<br />
*#* Push to origin (personal gitlab repo)<br />
*#* Send merge request upstream<br />
*# [https://github.com/archlinux-riscv/archlinux-cross-bootstrap archlinux-cross-bootstrap]<br />
*#* Use a build tree separate from the git source tree (is this the default?)<br />
*#* Initially, test building all 4 stages '''without''' creating the build/.KEEP_GOING file (which would ignore build errors); read .MAKEPKGLOG in each build directory, and tee the output of the overall build process into a log file<br />
*#* Identify pkgs with build errors; debug and patch<br />
*#* Individually test rebuilding pkgs which failed with errors; correct patches<br />
*#* Add and commit patches to local git repository<br />
*#* Push to origin (personal gitlab repo)<br />
*#* Send merge request upstream<br />
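The patch-commit-push loop above, sketched with a throwaway local repository. The package name, PKGBUILD content, and commit messages here are illustrative only; in real use the final steps are a push to the gitlab remote and a merge request upstream:

```shell
set -e
cd "$(mktemp -d)"
git init -q archriscv-packages && cd archriscv-packages
git config user.email you@example.org && git config user.name "You"
# Stand-in for an imported PKGBUILD that only declares x86_64.
mkdir foo
printf "arch=('x86_64')\n" > foo/PKGBUILD
git add . && git commit -qm "import foo"
# Fix the PKGBUILD for riscv64, then commit the change locally.
sed -i "s/'x86_64'/'x86_64' 'riscv64'/" foo/PKGBUILD
git add foo/PKGBUILD
git commit -qm "foo: add riscv64 to arch"
# Next (not run here): git push origin, then open a merge request upstream.
git log --oneline
```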
<br />
== Step-by-step "HOWTO" for porting ==<br />
<br />
== Detailed instructions and explanations ==<br />
<br />
== Issues addressed, with work-arounds ==<br />
<br />
== Bug report list -- Please make notes during triage, and move to the "Issues addressed" list above when "solved" ==</div>