https://wiki.archlinux.org/api.php?action=feedcontributions&user=Pklaus&feedformat=atomArchWiki - User contributions [en]2024-03-29T01:48:16ZUser contributionsMediaWiki 1.41.0https://wiki.archlinux.org/index.php?title=K8s&diff=625701K8s2020-07-17T13:41:39Z<p>Pklaus: kubernetes-bin uses the deprecated kubernetes/contrib, cannot be recommended in its current state.</p>
<hr />
<div>[[Category:Virtualization]]<br />
[[ja:Kubernetes]]<br />
[https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/ Kubernetes] is an open-source system for automating deployment, scaling, and management of containerized applications. Kubernetes is also referred to as k8s.<br />
<br />
== Kubernetes for Arch Linux ==<br />
<br />
There are AUR packages for Kubernetes on Arch Linux:<br />
<br />
* {{AUR|kubectl-bin}}, {{AUR|kubelet-bin}}, {{AUR|kubeadm-bin}}, {{AUR|cni-plugins-bin}}: They install pre-built binaries and some CNI network plugins, so nothing has to be built from source.<br />
<br />
Additionally, a standalone {{Pkg|kubectl}} package is available in the official repositories.<br />
<br />
== Kubectl plugins for Arch Linux ==<br />
<br />
[https://kubernetes.io/docs/reference/kubectl/overview/ Kubectl] plugins are independent binaries that extend<br />
kubectl's functionality by providing additional subcommands.<br />
<br />
There are AUR packages for Kubectl plugins on Arch Linux:<br />
<br />
* {{AUR|kubectl-trace-git}}: Schedule bpftrace programs on your kubernetes cluster using kubectl.<br />
* {{AUR|kubelogin}}: Kubectl plugin for Kubernetes OpenID Connect authentication (oidc-login).<br />
<br />
== Basic configuration ==<br />
<br />
You may either use the {{ic|kubeadm}} helper or configure a kubernetes cluster manually.<br />
<br />
=== Using kubeadm ===<br />
<br />
The following guide is for a one-master-one-node setup, where both nodes are in the {{ic|192.168.122.0/24}} network and the master hosts the kubernetes cluster at {{ic|192.168.122.1}}. Note that pods have their own CIDR, assumed to be {{ic|192.168.123.0/24}} here.<br />
<br />
==== Master ====<br />
<br />
First, set up the configuration file for the kubelet service,<br />
{{hc|/etc/kubernetes/kubelet|<br />
KUBELET_ARGS&#61;"--bootstrap-kubeconfig&#61;/etc/kubernetes/bootstrap-kubelet.conf \<br />
--kubeconfig&#61;/etc/kubernetes/kubelet.conf \<br />
--config&#61;/var/lib/kubelet/config.yaml \<br />
--network-plugin&#61;cni \<br />
--pod-infra-container-image&#61;k8s.gcr.io/pause:3.1"<br />
}}<br />
Do not worry about the files in the arguments that do not yet exist; they will be created during the {{ic|kubeadm}} initialization process. Note that if you are in a proxy environment or have special DNS settings, you should specify the {{ic|resolv.conf}} to be used in containers by adding one more argument:<br />
{{bc|1=--resolv-conf=/the/path/to/the/resolv.conf}}<br />
<br />
Then, run<br />
{{bc|# kubeadm init --apiserver-advertise-address&#61;192.168.122.1 --pod-network-cidr&#61;192.168.123.0/24}}<br />
It will show the progress of the initialization and then get stuck, complaining about something like<br />
[kubelet-check] It seems like the kubelet isn't running or healthy.<br />
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.<br />
<br />
At this moment, [[start]] {{ic|kubelet.service}}. kubelet will then launch the required kubernetes components, which {{ic|kubeadm}} will detect and confirm. If done successfully, there should be a message like:<br />
<br />
Your Kubernetes master has initialized successfully!<br />
<br />
Then you can configure your account as the administrator of this newly-created kubernetes cluster,<br />
<br />
$ mkdir -p $HOME/.kube<br />
# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config<br />
# chown $(id -u):$(id -g) $HOME/.kube/config<br />
<br />
Then you can deploy a pod network. Many choices can be found [https://kubernetes.io/docs/concepts/cluster-administration/addons/ here]. Note that each option has its own default pod network CIDR, so you should adjust those settings to match the value given to {{ic|--pod-network-cidr}}.<br />
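As a rough sketch of the CIDR adjustment described above (the manifest snippet below is a hypothetical stand-in for whatever your chosen add-on actually ships):<br />

```shell
# Adjust a CNI add-on manifest's default pod CIDR to the value that was
# passed to kubeadm via --pod-network-cidr. The sample file below is a
# made-up stand-in for a real add-on manifest.
cat > /tmp/cni-sample.yaml <<'EOF'
data:
  net-conf.json: |
    {
      "Network": "10.244.0.0/16"
    }
EOF

# Replace the add-on's default CIDR with the cluster's pod network CIDR.
sed -i 's|10\.244\.0\.0/16|192.168.123.0/24|' /tmp/cni-sample.yaml
grep '"Network"' /tmp/cni-sample.yaml
```

After editing the real manifest, it would be applied with {{ic|kubectl apply -f}}.<br />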
<br />
Finally, check the health of this master,<br />
$ kubectl get componentstatus<br />
<br />
==== Node ====<br />
<br />
Join the cluster by running the final line of the master's success message,<br />
kubeadm join --token <token> 192.168.122.1:6443 --discovery-token-ca-cert-hash sha256:<hash><br />
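If the {{ic|<hash>}} from that message is lost, it can be recomputed from the cluster CA certificate. The following is a self-contained sketch that generates a throwaway CA; on a real master you would point the second pipeline at {{ic|/etc/kubernetes/pki/ca.crt}} instead:<br />

```shell
# Generate a throwaway CA certificate for demonstration purposes only.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-ca.key \
    -out /tmp/demo-ca.crt -days 1 -subj "/CN=demo-kubernetes-ca" 2>/dev/null

# --discovery-token-ca-cert-hash is the SHA-256 digest of the CA's
# DER-encoded public key, prefixed with "sha256:" on the command line.
openssl x509 -pubkey -in /tmp/demo-ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //'
```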
<br />
== Troubleshooting ==<br />
<br />
=== Settings behind a proxy ===<br />
<br />
{{ic|kubeadm}} reads the {{ic|https_proxy}}, {{ic|http_proxy}}, and {{ic|no_proxy}} environment variables. Kubernetes internal networking should be included in the last one, for example<br />
export no_proxy="192.168.122.0/24,10.96.0.0/12,192.168.123.0/24"<br />
where {{ic|10.96.0.0/12}} is the default service network CIDR.<br />
<br />
You may also need extra CNI plugins<br />
$ go get -d github.com/containernetworking/plugins<br />
$ cd ~/go/src/github.com/containernetworking/plugins<br />
$ bash ./build_linux.sh <br />
# cp bin/* /opt/cni/bin/<br />
<br />
=== fatal error: runtime: out of memory ===<br />
This might happen when building kubernetes from source. A known trick is to set up a {{ic|zram}} swap region:<br />
# modprobe zram<br />
# echo lz4 > /sys/block/zram0/comp_algorithm<br />
# echo 16G > /sys/block/zram0/disksize<br />
# mkswap --label zram0 /dev/zram0<br />
# swapon --priority 100 /dev/zram0<br />
<br />
=== error when creating "xxx.yaml": No API token found for service account "default" ===<br />
Please check the details on [https://stackoverflow.com/questions/31891734/not-able-to-create-pod-in-kubernetes stackoverflow].<br />
<br />
=== Error: unable to load server certificate ===<br />
This might happen when starting a service. Check whether any of the {{ic|*.key}} files have inappropriate permissions.</div>Pklaushttps://wiki.archlinux.org/index.php?title=Baloo&diff=580668Baloo2019-08-21T11:09:02Z<p>Pklaus: /* Disabling the indexer */ There is now 'stop' command (anymore?)</p>
<hr />
<div>[[Category:Search]]<br />
[[Category:KDE]]<br />
[https://community.kde.org/Baloo Baloo] is a file indexing and searching framework for [[KDE]] Plasma. <br />
<br />
== Installation ==<br />
<br />
[[Install]] the {{Pkg|baloo}} package.<br />
<br />
== Usage and configuration ==<br />
<br />
In order to search using Baloo on the Plasma desktop, start [[KRunner]] (default keyboard shortcut {{ic|ALT+F2}}) and type in your query. Within Dolphin press {{ic|CTRL+F}}. Alternatively, for command-line usage, there is {{ic|baloosearch [OPTIONS] query}}, which supports<br />
complex queries such as {{ic|1=baloosearch //?query=tag:coolpicture AND width:100}},<br />
and {{ic|balooshow [OPTIONS] filename}} to show the data baloo has stored for the file.<br />
<br />
By default the Desktop Search KCM exposes only two options: a panel to blacklist folders and a switch to disable indexing with one click. Alternatively, you can edit your {{ic|~/.config/baloofilerc}} file ([https://community.kde.org/Baloo/Configuration info]).<br />
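For reference, a minimal {{ic|~/.config/baloofilerc}} could look like the following; the exact key names are assumptions based on the linked configuration page, so compare against the file the KCM generates:<br />

```ini
[Basic Settings]
Indexing-Enabled=true

[General]
# Folders excluded from indexing; [$e] enables environment variable expansion.
exclude folders[$e]=$HOME/tmp/
# Index only file names, not file contents.
only basic indexing=true
```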
<br />
Additionally, {{ic|balooctl}} can be used to control Baloo, e.g. {{ic|balooctl stop}} to stop it and {{ic|balooctl start}} to resume.<br />
<br />
Once you have added additional folders to the blacklist or disabled Baloo entirely, a process named {{ic|baloo_file_cleaner}} automatically removes all unneeded index files. These are stored under {{ic|~/.local/share/baloo/}}.<br />
<br />
== Indexing a removable or remote device ==<br />
<br />
By default every removable and remote device is blacklisted. It is possible to remove devices from the blacklist in the KCM panel.<br />
<br />
== Disabling the indexer ==<br />
<br />
To disable the Baloo file indexer:<br />
$ balooctl suspend<br />
$ balooctl disable<br />
<br />
The indexer will be disabled on next login.<br />
<br />
Alternatively, disable ''Enable File Search'' in ''System settings'' under ''Search > File search''.<br />
<br />
== Troubleshooting ==<br />
<br />
=== Inotify folder watch limit error ===<br />
<br />
If you get the following error:<br />
<br />
KDE Baloo Filewatch service reached the inotify folder watch limit. File changes may be ignored.<br />
<br />
Then you will need to increase the inotify folder watch limit:<br />
<br />
# echo 524288 > /proc/sys/fs/inotify/max_user_watches<br />
<br />
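The current limit, and a rough count of watches already in use, can be inspected first:<br />

```shell
# Current per-user limit of inotify watches.
cat /proc/sys/fs/inotify/max_user_watches

# Rough sketch: count inotify instances held open across all processes
# (assumes a Linux /proc filesystem; without sufficient privileges,
# other users' processes are silently skipped).
find /proc/[0-9]*/fd -lname 'anon_inode:inotify' 2>/dev/null | wc -l
```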
To make changes permanent, create a {{ic|40-max-user-watches.conf}} file:<br />
<br />
{{hc|/etc/sysctl.d/40-max-user-watches.conf|2=<br />
fs.inotify.max_user_watches=524288<br />
}}</div>Pklaushttps://wiki.archlinux.org/index.php?title=Certbot&diff=558528Certbot2018-12-07T09:01:08Z<p>Pklaus: /* systemd */ add notes about random delay for running certbot in non-interactive way</p>
<hr />
<div>[[Category:Networking]]<br />
[[Category:Encryption]]<br />
[[Category:Commands]]<br />
[[ja:Let’s Encrypt]]<br />
[[ru:Certbot]]<br />
[https://github.com/certbot/certbot Certbot] is [https://www.eff.org/ Electronic Frontier Foundation]'s [[ACME]] client, which is written in Python and provides conveniences like automatic web server configuration and a built-in webserver for the HTTP challenge. Certbot is recommended by [https://letsencrypt.org/ Let's Encrypt].<br />
<br />
== Installation ==<br />
<br />
[[Install]] the {{Pkg|certbot}} package.<br />
<br />
Plugins are available for automated configuration and installation of the issued certificates in web servers:<br />
* The [[Nginx]] plugin can be installed with the {{Pkg|certbot-nginx}} package.<br />
* The [[Apache HTTP Server]] plugin can be installed with the {{Pkg|certbot-apache}} package.<br />
<br />
== Configuration ==<br />
<br />
Consult the [https://certbot.eff.org/docs/ Certbot documentation] for more information about creation and usage of certificates.<br />
<br />
{{Expansion|Explain what the Nginx ({{ic|# certbot --nginx}}) and Apache plugins actually do and how they modify the webserver configuration. So far this section targets only the [[#Webroot]] and [[#Manual]] ways.}}<br />
<br />
=== Plugins ===<br />
{{Warning|Configuration files may be rewritten when using a plugin. Creating a '''backup''' first is recommended.}}<br />
<br />
==== Nginx ====<br />
<br />
The plugin {{pkg|certbot-nginx}} provides an automatic configuration for [[nginx]] [[nginx#Server_blocks|server-blocks]]:<br />
<br />
# certbot --nginx<br />
<br />
To renew certificates:<br />
<br />
# certbot renew<br />
<br />
To change certificates without modifying nginx config files:<br />
<br />
# certbot --nginx certonly<br />
<br />
See [https://certbot.eff.org/#arch-nginx Nginx on Arch Linux] for more information and [[#Automatic renewal]] to keep installed certificates valid.<br />
<br />
===== Managing server blocks =====<br />
The following example may be used in each [[nginx#Server_blocks|server-blocks]] when managing these files manually:<br />
{{hc|/etc/nginx/sites-available/example|2=<br />
server {<br />
listen 443 ssl http2;<br />
listen [::]:443 ssl http2; # Listen on IPv6<br />
ssl_certificate /etc/letsencrypt/live/''domain''/fullchain.pem; # managed by Certbot<br />
ssl_certificate_key /etc/letsencrypt/live/''domain''/privkey.pem; # managed by Certbot<br />
include /etc/letsencrypt/options-ssl-nginx.conf;<br />
..<br />
} }}<br />
<br />
See [[nginx#TLS]] for more information.<br />
<br />
It is also possible to create a separate config file and include it in each server block: <br />
<br />
{{hc|/etc/nginx/conf/001-certbot.conf|2=<br />
ssl_certificate /etc/letsencrypt/live/''domain''/fullchain.pem; # managed by Certbot<br />
ssl_certificate_key /etc/letsencrypt/live/''domain''/privkey.pem; # managed by Certbot<br />
include /etc/letsencrypt/options-ssl-nginx.conf;<br />
}}<br />
<br />
{{hc|/etc/nginx/sites-available/example|<nowiki><br />
server {<br />
listen 443 ssl http2;<br />
listen [::]:443 ssl http2; # Listen on IPv6<br />
include conf/001-certbot.conf;<br />
..<br />
}<br />
</nowiki>}}<br />
<br />
=== Webroot ===<br />
{{Note|<br />
* The Webroot method requires '''HTTP on port 80''' for Certbot to validate.<br />
* The Server Name must match that of its corresponding DNS.<br />
* Permissions may need to be altered on the host to allow read-access to {{ic|http://domain.tld/.well-known}}.<br />
}}<br />
<br />
When using the webroot method the Certbot client places a challenge response inside {{ic|/path/to/domain.tld/html/.well-known/acme-challenge/}} which is used for validation.<br />
<br />
The use of this method is recommended over a manual install; it offers automatic renewal and easier certificate management. However, the usage of [[#Plugins]] may be preferred, since it allows automatic configuration and installation.<br />
<br />
==== Mapping ACME-challenge requests ====<br />
<br />
{{Accuracy|In the ''webroot'' way, the {{ic|/var/lib/letsencrypt}} path is dictated by ''certbot''. Manual creation is not necessary, that applies to [[#Manual]].}}<br />
<br />
Management of the challenge responses can be made easier by mapping all HTTP requests for {{ic|.well-known/acme-challenge}} to a single folder, e.g. {{ic|/var/lib/letsencrypt}}.<br />
<br />
The path then has to be writable by Certbot and the web server (e.g. [[nginx]] or [[Apache]] running as user ''http''):<br />
# mkdir -p /var/lib/letsencrypt/.well-known<br />
# chgrp http /var/lib/letsencrypt<br />
# chmod g+s /var/lib/letsencrypt<br />
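The effect of the setgid bit can be sketched on a scratch directory (using a path under {{ic|/tmp}} here instead of the real {{ic|/var/lib/letsencrypt}}):<br />

```shell
# Mode 2755 = rwxr-sr-x: the setgid bit ("s") makes files created inside
# inherit the directory's group, which is the point of "chmod g+s" above.
mkdir -p /tmp/letsencrypt-demo/.well-known
chmod 2755 /tmp/letsencrypt-demo
stat -c '%A' /tmp/letsencrypt-demo   # drwxr-sr-x
```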
<br />
===== nginx =====<br />
<br />
Create a file containing the location block and include this inside a server block:<br />
{{hc|/etc/nginx/conf.d/letsencrypt.conf|<nowiki><br />
location ^~ /.well-known/acme-challenge/ {<br />
allow all;<br />
root /var/lib/letsencrypt/;<br />
default_type "text/plain";<br />
try_files $uri =404;<br />
}<br />
</nowiki>}}<br />
<br />
Example of a server configuration:<br />
{{hc|/etc/nginx/servers-available/domain.conf|<nowiki><br />
server {<br />
server_name domain.tld;<br />
..<br />
include conf.d/letsencrypt.conf;<br />
}<br />
</nowiki>}}<br />
<br />
===== Apache =====<br />
Create the file {{ic|/etc/httpd/conf/extra/httpd-acme.conf}}:<br />
{{hc|/etc/httpd/conf/extra/httpd-acme.conf|<nowiki><br />
Alias /.well-known/acme-challenge/ "/var/lib/letsencrypt/.well-known/acme-challenge/"<br />
<Directory "/var/lib/letsencrypt/"><br />
AllowOverride None<br />
Options MultiViews Indexes SymLinksIfOwnerMatch IncludesNoExec<br />
Require method GET POST OPTIONS<br />
</Directory><br />
</nowiki>}}<br />
<br />
Including this in {{ic|/etc/httpd/conf/httpd.conf}}:<br />
{{hc|/etc/httpd/conf/httpd.conf|<nowiki><br />
Include conf/extra/httpd-acme.conf<br />
</nowiki>}}<br />
<br />
==== Obtain certificate(s) ====<br />
{{Expansion|detail lacking to successfully accomplish task being taught|section=accuracy_flag}}<br />
Request a certificate for {{ic|domain.tld}} using {{ic|/var/lib/letsencrypt/}} as the publicly accessible path:<br />
# certbot certonly --email '''email@example.com''' --webroot -w '''/var/lib/letsencrypt/''' -d '''domain.tld'''<br />
<br />
To add a (sub)domain, include all registered domains used on the current setup:<br />
# certbot certonly --email '''email@example.com''' --webroot -w '''/var/lib/letsencrypt/''' -d '''domain.tld,sub.domain.tld'''<br />
<br />
To renew (all) the current certificate(s):<br />
# certbot renew<br />
<br />
See [[#Automatic renewal]] as alternative approach.<br />
<br />
=== Manual ===<br />
<br />
If there is no plugin for your web server, use the following command:<br />
# certbot certonly --manual<br />
<br />
When preferring to use DNS challenge (TXT record) use:<br />
# certbot certonly --manual --preferred-challenges dns<br />
<br />
This will automatically verify your domain and create a private key and certificate pair. These are placed in {{ic|/etc/letsencrypt/archive/''your.domain''/}} and symlinked from {{ic|/etc/letsencrypt/live/''your.domain''/}}.<br />
<br />
You can then manually configure your web server to reference the private key, certificate and full certificate chain in the symlinked directory.<br />
<br />
{{Note|Running this command multiple times, or renewing certificates will create multiple sets of files with a trailing number in {{ic|/etc/letsencrypt/archive/''your.domain''/}}. Certbot automatically updates the symlinks in {{ic|/etc/letsencrypt/live/''your.domain''/}} to point to the latest instances of files so there is no need to update your webserver to point to the new key material.}}<br />
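When wiring the web server manually, it can be worth verifying that certificate and private key actually belong together. The following is a self-contained sketch using a throwaway RSA pair; substitute the real {{ic|cert.pem}} and {{ic|privkey.pem}} paths, and for an EC key use {{ic|openssl pkey}} instead of {{ic|openssl rsa}}:<br />

```shell
# Throwaway key/certificate pair for demonstration.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/privkey.pem \
    -out /tmp/cert.pem -days 1 -subj "/CN=example.test" 2>/dev/null

# The RSA modulus of the certificate and of the key must be identical.
cert_mod=$(openssl x509 -noout -modulus -in /tmp/cert.pem)
key_mod=$(openssl rsa -noout -modulus -in /tmp/privkey.pem)
[ "$cert_mod" = "$key_mod" ] && echo MATCH || echo MISMATCH
```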
<br />
== Advanced Configuration ==<br />
<br />
=== Automatic renewal ===<br />
<br />
==== systemd ====<br />
Create a [[systemd]] {{ic|certbot.service}}:<br />
{{hc|1=/etc/systemd/system/certbot.service|<br />
2=[Unit]<br />
Description=Let's Encrypt renewal<br />
<br />
[Service]<br />
Type=oneshot<br />
ExecStart=/usr/bin/certbot renew --quiet --agree-tos<br />
TimeoutStartSec="10min"<br />
# ^ As a random delay of up to 8 minutes is preceding<br />
# any action since release v0.29.0 of certbot:<br />
# https://github.com/certbot/certbot/blob/master/CHANGELOG.md#0290---2018-12-05<br />
}}<br />
<br />
If you do not use a plugin to manage the web server configuration automatically, the web server has to be reloaded manually to reload the certificates each time they are renewed. This can be done by adding {{ic|--deploy-hook "systemctl reload nginx.service"}} to the {{ic|ExecStart}} command [https://certbot.eff.org/docs/using.html#renewing-certificates]. Of course use {{ic|httpd.service}} instead of {{ic|nginx.service}} if appropriate.<br />
<br />
{{Note|Before adding a [[systemd/Timers|timer]], check that the service is working correctly and is not trying to prompt for anything. Note that the service may take over 480 seconds to complete, since certbot adds a random delay of up to 8 minutes when called non-interactively since v0.29.0.}}<br />
<br />
Add a timer to check for certificate renewal twice a day and include a randomized delay so that everyone's requests for renewal will be spread over the day to lighten the Let's Encrypt server load [https://certbot.eff.org/#arch-nginx]:<br />
<br />
{{hc|1=/etc/systemd/system/certbot.timer|<br />
2=[Unit]<br />
Description=Twice daily renewal of Let's Encrypt's certificates<br />
<br />
[Timer]<br />
OnCalendar=0/12:00:00<br />
RandomizedDelaySec=1h<br />
Persistent=true<br />
<br />
[Install]<br />
WantedBy=timers.target}}<br />
<br />
[[Enable]] and [[start]] {{ic|certbot.timer}}.<br />
<br />
=== Automatic renewal for wildcard certificates ===<br />
<br />
The process is fairly simple. To issue a wildcard certificate, you have to do it via a DNS challenge request, [https://community.letsencrypt.org/t/acme-v2-and-wildcard-certificate-support-is-live/55579 using the ACMEv2 protocol].<br />
<br />
While issuing a certificate manually is easy, it is not straightforward to automate. The DNS challenge is a TXT record, given by certbot, which has to be set manually in the domain zone file.<br />
<br />
You will need to update the zone file upon every renewal. To avoid doing that manually, you may use [https://tools.ietf.org/html/rfc2136 RFC 2136], for which certbot has a plugin packaged as {{Pkg|certbot-dns-rfc2136}}. You will also need to configure your DNS server to allow dynamic updates for TXT records.<br />
<br />
==== Configure BIND for rfc2136 ====<br />
Generate a TSIG secret key:<br />
<br />
$ tsig-keygen -a HMAC-SHA512 '''example-key'''<br />
<br />
and add it in the configuration file:<br />
<br />
{{hc|1=/etc/named.conf|<br />
2=...<br />
zone "'''domain.tld'''" IN {<br />
...<br />
// this is for certbot<br />
update-policy {<br />
grant '''example-key''' name _acme-challenge.'''domain.tld'''. txt;<br />
};<br />
...<br />
};<br />
<br />
key "'''example-key'''" {<br />
algorithm hmac-sha512;<br />
secret "'''a_secret_key'''";<br />
};<br />
...}}<br />
<br />
[[Restart]] {{ic|named.service}}.<br />
<br />
==== Configure certbot for rfc2136 ====<br />
Create a configuration file for the rfc2136 plugin.<br />
<br />
{{hc|1=/etc/letsencrypt/rfc2136.ini|<br />
2=dns_rfc2136_server = '''IP.ADD.RE.SS'''<br />
dns_rfc2136_name = '''example-key'''<br />
dns_rfc2136_secret = '''INSERT_KEY_WITHOUT_QUOTES'''<br />
dns_rfc2136_algorithm = HMAC-SHA512}}<br />
<br />
Since the file contains a copy of the secret key, secure it with [[chmod]] by removing the group and others permissions.<br />
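A sketch of that permission lock-down on a scratch copy (the real file being {{ic|/etc/letsencrypt/rfc2136.ini}}):<br />

```shell
# Create an empty stand-in file, then strip group/other permissions so only
# the owner (root, for the real file) can read the TSIG secret.
install -m 644 /dev/null /tmp/rfc2136-demo.ini
chmod go-rwx /tmp/rfc2136-demo.ini
stat -c '%a' /tmp/rfc2136-demo.ini   # 600
```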
<br />
Test the setup:<br />
# certbot certonly --dns-rfc2136 --force-renewal --dns-rfc2136-credentials /etc/letsencrypt/rfc2136.ini --server https://acme-v02.api.letsencrypt.org/directory --email '''example@domain.tld''' --agree-tos --no-eff-email -d '''domain.tld''' -d '''*.domain.tld'''<br />
<br />
If you pass the validation successfully and receive certificates, then you are good to go with automating certbot. Otherwise, something went wrong and you need to debug your setup. It basically boils down to running {{ic|certbot renew}} from now on, see [[#Automatic renewal]].<br />
<br />
== See also ==<br />
<br />
* [[Transport Layer Security#ACME]]<br />
* [[Wikipedia:Let's Encrypt|Wikipedia article]]<br />
* [https://certbot.eff.org/ EFF's Certbot documentation]<br />
* [https://letsencrypt.org/docs/client-options/ List of ACME clients]</div>Pklaushttps://wiki.archlinux.org/index.php?title=ZFS&diff=250967ZFS2013-03-16T20:26:53Z<p>Pklaus: /* Create a storage pool */ missing -f in the command template</p>
<hr />
<div>[[Category:File systems]]<br />
{{Article summary start}}<br />
{{Article summary text|This page provides basic guidelines for installing the native ZFS Linux kernel module.}}<br />
{{Article summary heading|Related}}<br />
{{Article summary wiki|Installing Arch Linux on ZFS}}<br />
{{Article summary wiki|ZFS on FUSE}}<br />
{{Article summary end}}<br />
<br />
[[Wikipedia:ZFS|ZFS]] is an advanced filesystem created by [[Wikipedia:Sun Microsystems|Sun Microsystems]] (now owned by Oracle) and released for OpenSolaris in November 2005. Features of ZFS include: pooled storage (integrated volume management -- zpool), [[Wikipedia:Copy-on-write|Copy-on-write]], [[Wikipedia:Snapshot (computer storage)|snapshots]], data integrity verification and automatic repair (scrubbing), [[Wikipedia:RAID-Z|RAID-Z]], and a maximum [[Wikipedia:Exabyte|16 Exabyte]] volume size. ZFS is licensed under the [[Wikipedia:CDDL|Common Development and Distribution License]] (CDDL).<br />
<br />
Described as [http://web.archive.org/web/20060428092023/http://www.sun.com/2004-0914/feature/ "The last word in filesystems"], ZFS is stable, fast, secure, and future-proof. Being licensed under the GPL-incompatible CDDL, it is not possible for ZFS to be distributed along with the Linux kernel. This requirement, however, does not prevent a native Linux kernel module from being developed and distributed by a third party, as is the case with [http://zfsonlinux.org/ zfsonlinux.org] (ZOL).<br />
<br />
==Installation==<br />
<br />
The ZFS kernel module is available in the [[AUR]] via {{aur|zfs}}.<br />
<br />
{{note|The ZFS and SPL kernel modules are tied to a specific kernel version. It would not be possible to apply any kernel updates until updated packages are uploaded to AUR or the archzfs repository.}}<br />
<br />
===Unofficial repository===<br />
<br />
For fast and effortless installation and updates, the [http://demizerone.com/archzfs "archzfs"] signed repository is available to add to your {{ic|pacman.conf}}:<br />
<br />
{{hc|/etc/pacman.conf|<nowiki><br />
[archzfs]<br />
Server = http://demizerone.com/$repo/core/$arch</nowiki><br />
}}<br />
<br />
The repository and packages are signed with the maintainer's PGP key which is verifiable here: http://demizerone.com. This key is not trusted by any of the Arch Linux master keys, so it will need to be locally signed before use. See [[pacman-key]].<br />
<br />
Add the maintainer's key,<br />
<br />
# pacman-key -r 0EE7A126<br />
<br />
and locally sign to add it to the system's trust database,<br />
<br />
# pacman-key --lsign-key 0EE7A126<br />
<br />
Once the key has been signed, it is now possible to update the package database,<br />
<br />
# pacman -Syy<br />
<br />
and install ZFS packages:<br />
<br />
# pacman -S archzfs<br />
<br />
===Archzfs testing repository===<br />
<br />
If you have the testing repository active in {{ic|pacman.conf}} then it is possible to use the archzfs repository that tracks the testing kernel.<br />
<br />
{{hc|# /etc/pacman.conf|<nowiki><br />
[archzfs]<br />
Server = http://demizerone.com/$repo/testing/$arch</nowiki><br />
}}<br />
<br />
===Archiso tracking repository===<br />
<br />
ZFS can easily be used from within the archiso live environment by using the special archiso tracking repository for ZFS. This repository makes it easy to install Arch Linux on a root ZFS filesystem, or to mount ZFS pools from within an archiso live environment using an up-to-date live medium. To use this repository from the live environment, add the following server line to pacman.conf:<br />
<br />
{{hc|/etc/pacman.conf|<nowiki><br />
[archzfs]<br />
Server = http://demizerone.com/$repo/archiso/$arch</nowiki><br />
}}<br />
<br />
This repository and packages are also signed, so the key must be locally signed following the steps listed in the previous section before use. For a guide on how to install Arch Linux on to a root ZFS filesystem, see [[Installing Arch Linux on ZFS]].<br />
<br />
==Configuration==<br />
<br />
ZFS is considered a "zero administration" filesystem by its creators; therefore, configuring it is very straightforward. Configuration is done primarily with two commands, {{ic|zfs}} and {{ic|zpool}}.<br />
<br />
===mkinitcpio hook===<br />
<br />
If you are using ZFS on your root filesystem, then you will need to add the ZFS hook to [[Mkinitcpio|mkinitcpio.conf]]. If you are not using ZFS for your root filesystem, then you do not need to add the ZFS hook.<br />
<br />
You will need to change your [[kernel parameters]] to include the dataset you want to boot. You can use <code>zfs=bootfs</code> to use the ZFS bootfs (set via <code>zpool set bootfs=rpool/ROOT/arch rpool</code>) or you can set the [[kernel parameters]] to <code>zfs=<pool>/<dataset></code> to boot directly from a ZFS dataset.<br />
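As a sketch, assuming the [[GRUB]] bootloader (other boot loaders have their own configuration files), the parameter could be set in {{ic|/etc/default/grub}}:<br />

```shell
# /etc/default/grub -- sourced as shell by grub-mkconfig.
# "zfs=bootfs" boots the dataset named by the pool's bootfs property;
# use "zfs=<pool>/<dataset>" to boot a specific dataset instead.
GRUB_CMDLINE_LINUX="zfs=bootfs"
```

Afterwards, regenerate the configuration, e.g. with {{ic|grub-mkconfig -o /boot/grub/grub.cfg}}.<br />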
<br />
To see all available options for the ZFS hook:<br />
<br />
$ mkinitcpio -H zfs<br />
<br />
To use the mkinitcpio hook, you will need to add <code>zfs</code> to your <code>HOOKS</code> in <code>/etc/mkinitcpio.conf</code>:<br />
<br />
{{hc|/etc/mkinitcpio.conf|<br />
...<br />
HOOKS<nowiki>="base udev autodetect modconf encrypt zfs filesystems usbinput"</nowiki><br />
...<br />
}}<br />
<br />
{{note|It is not necessary to use the "fsck" hook with ZFS. ZFS automatically fixes any errors that occur within the filesystem. However, if the hook is required for another filesystem used on the system, such as ext4, the current ZFS packaging implementation does not yet properly handle fsck requests from mkinitcpio and an error is produced when generating a new ramdisk.}}<br />
<br />
It is important to place this after any hooks which are needed to prepare the drive before it is mounted. For example, if your ZFS volume is encrypted, then you will need to place encrypt before the zfs hook to unlock it first.<br />
<br />
Recreate the ramdisk<br />
<br />
# mkinitcpio -p linux<br />
<br />
===Automatic Start===<br />
<br />
For ZFS to live up to its "zero administration" namesake, the zfs daemon must be loaded at startup. A benefit to this is that it is not necessary to mount your zpool in {{ic|/etc/fstab}}; the zfs daemon imports and mounts zfs pools automatically by reading the file {{ic|/etc/zfs/zpool.cache}}, so every pool that you want automatically mounted must be recorded in that file.<br />
<br />
Set a pool to be automatically mounted by the zfs daemon:<br />
# zpool set cachefile=/etc/zfs/zpool.cache <pool><br />
<br />
==Systemd==<br />
<br />
Enable the service so it is automatically started at boot time<br />
<br />
# systemctl enable zfs.service<br />
<br />
To manually start the daemon<br />
<br />
# systemctl start zfs.service<br />
<br />
==Initscripts==<br />
Add zfs to DAEMONS list<br />
<br />
{{hc|/etc/rc.conf|<br />
...<br />
DAEMONS<nowiki>=(... @syslog-ng zfs dbus ...)</nowiki><br />
...<br />
}}<br />
<br />
And now start the daemon if it is not started already<br />
<br />
# rc.d start zfs<br />
<br />
===Create a storage pool===<br />
<br />
Use {{ic|# parted --list}} to see a list of all available drives. It is not necessary to partition your drives before creating the zfs filesystem; this will be done automatically. However, if you feel the need to completely wipe a drive before creating the filesystem, this can easily be done with the dd command.<br />
<br />
# dd if=/dev/zero of=/dev/<device><br />
<br />
It should not have to be stated, but be careful with this command!<br />
<br />
Once you have the list of drives, it is time to get the ids of the drives you will be using. The [http://zfsonlinux.org/faq.html#WhatDevNamesShouldIUseWhenCreatingMyPool zfs on linux developers recommend] using device ids when creating ZFS storage pools of less than 10 devices. To find the ids of your devices, run<br />
<br />
$ ls -lah /dev/disk/by-id/<br />
<br />
The ids should look similar to the following:<br />
<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0JKRR -> ../../sdc<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0JTM1 -> ../../sde<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0KBP8 -> ../../sdd<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0KDGY -> ../../sdb<br />
<br />
Now finally, create the ZFS pool:<br />
<br />
# zpool create -f -m <mount> <pool> raidz <ids><br />
<br />
* '''create''': subcommand to create the pool.<br />
<br />
* '''-f''': Force creating the pool. This is to overcome the "EFI label error". See [[#does not contain an EFI label]].<br />
<br />
* '''-m''': The mount point of the pool. If this is not specified, the pool will be mounted at {{ic|/<pool>}}.<br />
<br />
* '''pool''': This is the name of the pool. Change it to whatever you like.<br />
<br />
* '''raidz''': This is the type of virtual device that will be created from the pool of devices. Raidz is a special implementation of raid5. See [https://blogs.oracle.com/bonwick/entry/raid_z Jeff Bonwick's Blog -- RAID-Z] for more information about raidz.<br />
<br />
* '''ids''': The names of the drives or partitions that you want to include into your pool. Get it from {{ic|/dev/disk/by-id}}.<br />
<br />
Here is an example for the full command:<br />
<br />
# zpool create -f -m /mnt/data bigdata raidz ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-9YN166_S1F0JKRR ata-ST3000DM001-9YN166_S1F0KBP8 ata-ST3000DM001-9YN166_S1F0JTM1<br />
<br />
If the command is successful, there will be no output. Running {{ic|$ mount}} will show that your pool is mounted. Running {{ic|# zpool status}} will show that your pool has been created.<br />
<br />
{{hc|# zpool status|<br />
pool: bigdata<br />
state: ONLINE<br />
scan: none requested<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
bigdata ONLINE 0 0 0<br />
-0 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KDGY-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0JKRR-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KBP8-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0JTM1-part1 ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
}}<br />
<br />
At this point it would be good to reboot your computer to make sure your ZFS pool is mounted at boot. It is best to deal with all errors before transferring your data.<br />
<br />
== Usage ==<br />
<br />
To see all the commands available in ZFS, use<br />
<br />
$ man zfs<br />
<br />
or<br />
<br />
$ man zpool<br />
<br />
=== Scrub ===<br />
<br />
ZFS pools should be scrubbed at least once a week. To scrub your pool<br />
<br />
# zpool scrub <pool><br />
<br />
To do automatic scrubbing once a week, set the following line in your root crontab<br />
<br />
{{hc|# crontab -e|<br />
...<br />
30 19 * * 5 zpool scrub <pool><br />
...<br />
}}<br />
<br />
Replace {{ic|<pool>}} with the name of your ZFS storage pool.<br />
<br />
=== Check zfs pool status ===<br />
<br />
To print a nice table with statistics about your ZFS pool, including and read/write errors, use<br />
<br />
# zpool status -v<br />
<br />
=== Destroy a storage pool ===<br />
<br />
ZFS makes it easy to destroy a mounted storage pool, removing all metadata about the ZFS device. This command destroys any data contained in the pool.<br />
<br />
# zpool destroy <pool><br />
<br />
and now when checking the status<br />
<br />
{{hc|# zpool status|no pools available}}<br />
<br />
To find the name of your pool, see [[#Check zfs pool status]].<br />
<br />
=== Export a storage pool ===<br />
<br />
If you are going to use the pool in a different system, or are doing<br />
<br />
=== Swap partition ===<br />
<br />
zfs does not allows to use swapfiles, but you can use a zfs volume as swap partition. It is importart to set the ZVOL block size to match the system page size, for x86_64 systems that is 4k<br />
<br />
Create a 8gb zfs volume<br />
# zfs create -V 8gb -b 4K <pool>/swap<br />
<br />
Prepare it as swap partition<br />
# mkswap /dev/zvol/<pool>/maindisk/swap<br />
<br />
Enable swap<br />
# swapon /dev/zvol/<pool>/maindisk/swap<br />
<br />
To make it permament you need to edit your {{ic|/etc/fstab}}<br />
<br />
Add a line to {{ic|/etc/fstab}}<br />
/dev/zvol/<pool>/swap none swap defaults 0 0 <br />
<br />
==Troubleshooting==<br />
<br />
=== does not contain an EFI label ===<br />
<br />
The following error will occur when attempting to create a zfs filesystem,<br />
<br />
/dev/disk/by-id/<id> does not contain an EFI label but it may contain partition<br />
<br />
The way to overcome this is to use <code>-f</code> with the zfs create command.<br />
<br />
=== No hostid found ===<br />
<br />
An error that occurs at boot with the following lines appearing before initscript output:<br />
<br />
ZFS: No hostid found on kernel command line or /etc/hostid.<br />
<br />
This warning occurs because the ZFS module does not have access to the spl hosted. There are two solutions, for this. You can either place your spl hostid in the [[kernel parameters]] in your boot loader. For example, adding <code>spl.spl_hostid=0x00bab10c</code>.<br />
<br />
The other solution is to make sure that there is a hostid in <code>/etc/hostid</code>, and then regenerate the initramfs image. Which will copy the hostid into the initramfs image.<br />
<br />
# mkinitcpio -p linux<br />
<br />
=== On boot the zfs pool does not mount stating: "pool may be in use from other system" ===<br />
<br />
You can always ignore the check adding {{ic|zfs_force&#61;1}} in your [[kernel parameters]], but it is not advisable as a permanent solution.<br />
<br />
First of all double check you actually exported the pool correctly. Exporting the zpool clears the hostid marking the ownership. So during the first boot the zpool should mount correctly. If it does not there is some other problem.<br />
<br />
Reboot again, if the zfs pool refuses to mount it means your hostid is not yet correctly set in the early boot phase and it confuses zfs. So you have to manually tell zfs the correct number, once the hostid is coherent across the reboots the zpool will mount correctly.<br />
<br />
Boot using zfs_force and write down your hostid. This one is just an example.<br />
% hostid<br />
0a0af0f8<br />
Follow the previous section to set it.<br />
<br />
== Tips and tricks ==<br />
<br />
=== Emergency chroot repair with archzfs ===<br />
<br />
Here is how to use the archiso to get into your ZFS filesystem for maintenance.<br />
<br />
Boot the latest archiso and bring up your network<br />
<br />
# wifi-menu<br />
# ip link set eth0 up<br />
<br />
Test the network connection<br />
<br />
# ping google.com<br />
<br />
Sync the pacman package database<br />
<br />
# pacman -Syy<br />
<br />
(optional) Install your favorite text editor<br />
<br />
# pacman -S vim<br />
<br />
Add archzfs archiso repository to {{ic|pacman.conf}}<br />
<br />
{{hc|/etc/pacman.conf|<nowiki><br />
[archzfs]<br />
Server = http://demizerone.com/$repo/archiso/$arch</nowiki>}}<br />
<br />
Sync the pacman package database<br />
<br />
# pacman -Syy<br />
<br />
Install the ZFS package group<br />
<br />
# pacman -S archzfs<br />
<br />
Load the ZFS kernel modules<br />
<br />
# modprobe zfs<br />
<br />
Import your pool<br />
<br />
# zpool import -a -R /mnt<br />
<br />
Mount your boot partitions (if you have them)<br />
<br />
# mount /dev/sda2 /mnt/boot<br />
# mount /dev/sda1 /mnt/boot/efi<br />
<br />
Chroot into your zfs filesystem<br />
<br />
# arch-chroot /mnt /bin/bash<br />
<br />
Check your kernel version<br />
<br />
# pacman -Qi linux<br />
# uname -r<br />
<br />
uname will show the kernel version of the archiso. If they are different, you will need to run depmod (in the chroot) with the correct kernel version of your chroot installation:<br />
<br />
# depmod -a 3.6.9-1-ARCH (version gathered from pacman -Qi linux)<br />
<br />
This will load the correct kernel modules for the kernel version installed in your chroot installation.<br />
<br />
Regenerate your ramdisk<br />
<br />
# mkinitcpio -p linux<br />
<br />
There should be no errors.<br />
<br />
== See also ==<br />
<br />
* [http://zfsonlinux.org/ ZFS on Linux]<br />
* [http://zfsonlinux.org/faq.html ZFS on Linux FAQ]<br />
* [http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/filesystems-zfs.html FreeBSD Handbook -- The Z File System]<br />
* [http://docs.oracle.com/cd/E19253-01/819-5461/index.html Oracle Solaris ZFS Administration Guide]<br />
* [http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide Solaris Internals -- ZFS Troubleshooting Guide]</div>Pklaushttps://wiki.archlinux.org/index.php?title=ZFS&diff=250966ZFS2013-03-16T20:25:26Z<p>Pklaus: /* Create a storage pool */ restructuring the explanation for `zpool` and its example command</p>
<hr />
<div>[[Category:File systems]]<br />
{{Article summary start}}<br />
{{Article summary text|This page provides basic guidelines for installing the native ZFS Linux kernel module.}}<br />
{{Article summary heading|Related}}<br />
{{Article summary wiki|Installing Arch Linux on ZFS}}<br />
{{Article summary wiki|ZFS on FUSE}}<br />
{{Article summary end}}<br />
<br />
[[Wikipedia:ZFS|ZFS]] is an advanced filesystem created by [[Wikipedia:Sun Microsystems|Sun Microsystems]] (now owned by Oracle) and released for OpenSolaris in November 2005. Features of ZFS include: pooled storage (integrated volume management -- zpool), [[Wikipedia:Copy-on-write|Copy-on-write]], [[Wikipedia:Snapshot (computer storage)|snapshots]], data integrity verification and automatic repair (scrubbing), [[Wikipedia:RAID-Z|RAID-Z]], and a maximum [[Wikipedia:Exabyte|16 Exabyte]] volume size. ZFS is licensed under the [[Wikipedia:CDDL|Common Development and Distribution License]] (CDDL).<br />
<br />
Described as [http://web.archive.org/web/20060428092023/http://www.sun.com/2004-0914/feature/ "The last word in filesystems"], ZFS is stable, fast, secure, and future-proof. Because it is licensed under the GPL-incompatible CDDL, ZFS cannot be distributed along with the Linux kernel. This requirement, however, does not prevent a native Linux kernel module from being developed and distributed by a third party, as is the case with [http://zfsonlinux.org/ zfsonlinux.org] (ZOL).<br />
<br />
==Installation==<br />
<br />
The ZFS kernel module is available in the [[AUR]] via {{aur|zfs}}.<br />
<br />
{{note|The ZFS and SPL kernel modules are tied to a specific kernel version. It would not be possible to apply any kernel updates until updated packages are uploaded to AUR or the archzfs repository.}}<br />
<br />
===Unofficial repository===<br />
<br />
For fast and effortless installation and updates, the [http://demizerone.com/archzfs "archzfs"] signed repository is available to add to your {{ic|pacman.conf}}:<br />
<br />
{{hc|/etc/pacman.conf|<nowiki><br />
[archzfs]<br />
Server = http://demizerone.com/$repo/core/$arch</nowiki><br />
}}<br />
<br />
The repository and packages are signed with the maintainer's PGP key which is verifiable here: http://demizerone.com. This key is not trusted by any of the Arch Linux master keys, so it will need to be locally signed before use. See [[pacman-key]].<br />
<br />
Add the maintainer's key,<br />
<br />
# pacman-key -r 0EE7A126<br />
<br />
and locally sign to add it to the system's trust database,<br />
<br />
# pacman-key --lsign-key 0EE7A126<br />
<br />
Once the key has been signed, it is now possible to update the package database,<br />
<br />
# pacman -Syy<br />
<br />
and install ZFS packages:<br />
<br />
# pacman -S archzfs<br />
<br />
===Archzfs testing repository===<br />
<br />
If you have the testing repository active in {{ic|pacman.conf}} then it is possible to use the archzfs repository that tracks the testing kernel.<br />
<br />
{{hc|# /etc/pacman.conf|<nowiki><br />
[archzfs]<br />
Server = http://demizerone.com/$repo/testing/$arch</nowiki><br />
}}<br />
<br />
===Archiso tracking repository===<br />
<br />
ZFS can easily be used from within the archiso live environment by using the special archiso tracking repository for ZFS. This repository makes it easy to install Arch Linux on a root ZFS filesystem, or to mount ZFS pools from within an archiso live environment using an up-to-date live medium. To use this repository from the live environment, add the following server line to pacman.conf:<br />
<br />
{{hc|/etc/pacman.conf|<nowiki><br />
[archzfs]<br />
Server = http://demizerone.com/$repo/archiso/$arch</nowiki><br />
}}<br />
<br />
This repository and packages are also signed, so the key must be locally signed following the steps listed in the previous section before use. For a guide on how to install Arch Linux on to a root ZFS filesystem, see [[Installing Arch Linux on ZFS]].<br />
<br />
==Configuration==<br />
<br />
ZFS is considered a "zero administration" filesystem by its creators, so configuring it is straightforward. Configuration is done primarily with two commands: {{ic|zfs}} and {{ic|zpool}}.<br />
<br />
===mkinitcpio hook===<br />
<br />
If you are using ZFS on your root filesystem, then you will need to add the ZFS hook to [[Mkinitcpio|mkinitcpio.conf]]. If you are not using ZFS for your root filesystem, then you do not need to add the ZFS hook.<br />
<br />
You will need to change your [[kernel parameters]] to include the dataset you want to boot. You can use <code>zfs=bootfs</code> to use the ZFS bootfs (set via <code>zpool set bootfs=rpool/ROOT/arch rpool</code>) or you can set the [[kernel parameters]] to <code>zfs=<pool>/<dataset></code> to boot directly from a ZFS dataset.<br />
<br />
To see all available options for the ZFS hook:<br />
<br />
$ mkinitcpio -H zfs<br />
<br />
To use the mkinitcpio hook, you will need to add <code>zfs</code> to your <code>HOOKS</code> in <code>/etc/mkinitcpio.conf</code>:<br />
<br />
{{hc|/etc/mkinitcpio.conf|<br />
...<br />
HOOKS<nowiki>="base udev autodetect modconf encrypt zfs filesystems usbinput"</nowiki><br />
...<br />
}}<br />
<br />
{{note|It is not necessary to use the "fsck" hook with ZFS. ZFS automatically fixes any errors that occur within the filesystem. However, if the hook is required for another filesystem used on the system, such as ext4, the current ZFS packaging implementation does not yet properly handle fsck requests from mkinitcpio and an error is produced when generating a new ramdisk.}}<br />
<br />
It is important to place this after any hooks which are needed to prepare the drive before it is mounted. For example, if your ZFS volume is encrypted, then you will need to place encrypt before the zfs hook to unlock it first.<br />
<br />
Recreate the ramdisk<br />
<br />
# mkinitcpio -p linux<br />
<br />
===Automatic Start===<br />
<br />
For ZFS to live up to its "zero administration" name, the zfs daemon must be loaded at startup. A benefit to this is that it is not necessary to mount your zpool in {{ic|/etc/fstab}}; the zfs daemon imports and mounts zfs pools automatically. The daemon mounts the zfs pools listed in the file {{ic|/etc/zfs/zpool.cache}}, so any zfs pool that you want mounted automatically must be written to that file.<br />
<br />
Set a pool as to be automatically mounted by the zfs daemon:<br />
# zpool set cachefile=/etc/zfs/zpool.cache <pool><br />
<br />
==Systemd==<br />
<br />
Enable the service so it is automatically started at boot time<br />
<br />
# systemctl enable zfs.service<br />
<br />
To manually start the daemon<br />
<br />
# systemctl start zfs.service<br />
<br />
==Initscripts==<br />
Add zfs to DAEMONS list<br />
<br />
{{hc|/etc/rc.conf|<br />
...<br />
DAEMONS<nowiki>=(... @syslog-ng zfs dbus ...)</nowiki><br />
...<br />
}}<br />
<br />
And now start the daemon if it is not started already<br />
<br />
# rc.d start zfs<br />
<br />
===Create a storage pool===<br />
<br />
Use {{ic| # parted --list}} to see a list of all available drives. It is not necessary to partition your drives before creating the zfs filesystem; this will be done automatically. However, if you feel the need to completely wipe your drive before creating the filesystem, this can easily be done with the dd command.<br />
<br />
# dd if=/dev/zero of=/dev/<device><br />
<br />
It should not have to be stated, but be careful with this command!<br />
<br />
Once you have the list of drives, it is time to get the IDs of the drives you will be using. The [http://zfsonlinux.org/faq.html#WhatDevNamesShouldIUseWhenCreatingMyPool zfs on linux developers recommend] using device IDs when creating ZFS storage pools of less than 10 devices. To find the IDs for your devices, simply run<br />
<br />
$ ls -lah /dev/disk/by-id/<br />
<br />
The ids should look similar to the following:<br />
<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0JKRR -> ../../sdc<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0JTM1 -> ../../sde<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0KBP8 -> ../../sdd<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0KDGY -> ../../sdb<br />
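<br />
With many disks, picking the IDs out of such a listing by hand is error-prone. Here is a small sketch of extracting the symlink names with awk, run against a canned copy of the sample listing above (serial numbers reused from that example):<br />

```shell
# Extract the device IDs (the symlink names) from an `ls -l` style
# listing of /dev/disk/by-id. The sample reuses two of the example
# serial numbers shown above.
listing='lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0JKRR -> ../../sdc
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0JTM1 -> ../../sde'
# The ID is the third field from the end: "<id> -> ../../sdX".
ids=$(printf '%s\n' "$listing" | awk '{print $(NF-2)}')
printf '%s\n' "$ids"
```

In practice you would pipe {{ic|ls -l /dev/disk/by-id/}} into the same awk filter, skipping any {{ic|-partN}} links.<br />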
<br />
Now finally, create the ZFS pool:<br />
<br />
# zpool create -m <mount> <pool> raidz <ids><br />
<br />
* '''create''': subcommand to create the pool.<br />
<br />
* '''-f''': Force creating the pool. This is to overcome the "EFI label error". See [[#does not contain an EFI label]].<br />
<br />
* '''-m''': The mount point of the pool. If this is not specified, then your pool will be mounted to {{ic|/<pool>}}.<br />
<br />
* '''pool''': This is the name of the pool. Change it to whatever you like.<br />
<br />
* '''raidz''': This is the type of virtual device that will be created from the pool of devices. Raidz is a special implementation of raid5. See [https://blogs.oracle.com/bonwick/entry/raid_z Jeff Bonwick's Blog -- RAID-Z] for more information about raidz.<br />
<br />
* '''ids''': The names of the drives or partitions that you want to include into your pool. Get it from {{ic|/dev/disk/by-id}}.<br />
<br />
Here is an example for the full command:<br />
<br />
# zpool create -f -m /mnt/data bigdata raidz ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-9YN166_S1F0JKRR ata-ST3000DM001-9YN166_S1F0KBP8 ata-ST3000DM001-9YN166_S1F0JTM1<br />
<br />
If the command is successful, there will be no output. Using the {{ic|$ mount}} command will show that your pool is mounted. Using {{ic|# zpool status}} will show that your pool has been created.<br />
<br />
{{hc|# zpool status|<br />
  pool: bigdata<br />
 state: ONLINE<br />
 scan: none requested<br />
config:<br />
<br />
	NAME                                       STATE     READ WRITE CKSUM<br />
	bigdata                                    ONLINE       0     0     0<br />
	  raidz1-0                                 ONLINE       0     0     0<br />
	    ata-ST3000DM001-9YN166_S1F0KDGY-part1  ONLINE       0     0     0<br />
	    ata-ST3000DM001-9YN166_S1F0JKRR-part1  ONLINE       0     0     0<br />
	    ata-ST3000DM001-9YN166_S1F0KBP8-part1  ONLINE       0     0     0<br />
	    ata-ST3000DM001-9YN166_S1F0JTM1-part1  ONLINE       0     0     0<br />
<br />
errors: No known data errors<br />
}}<br />
<br />
At this point it would be good to reboot your computer to make sure your ZFS pool is mounted at boot. It is best to deal with all errors before transferring your data.<br />
<br />
== Usage ==<br />
<br />
To see all the commands available in ZFS, use<br />
<br />
$ man zfs<br />
<br />
or<br />
<br />
$ man zpool<br />
<br />
=== Scrub ===<br />
<br />
ZFS pools should be scrubbed at least once a week. To scrub your pool:<br />
<br />
# zpool scrub <pool><br />
<br />
To do automatic scrubbing once a week, set the following line in your root crontab<br />
<br />
{{hc|# crontab -e|<br />
...<br />
30 19 * * 5 zpool scrub <pool><br />
...<br />
}}<br />
<br />
Replace {{ic|<pool>}} with the name of your ZFS storage pool.<br />
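<br />
As a sketch, here is the same crontab line with the example pool name used elsewhere on this page filled in:<br />

```shell
# Compose the weekly scrub line (19:30 every Friday) for a pool.
# "bigdata" is the example pool name used elsewhere on this page.
pool=bigdata
line=$(printf '30 19 * * 5 zpool scrub %s' "$pool")
echo "$line"
```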
<br />
=== Check zfs pool status ===<br />
<br />
To print a nice table with statistics about your ZFS pool, including any read/write errors, use<br />
<br />
# zpool status -v<br />
<br />
=== Destroy a storage pool ===<br />
<br />
ZFS makes it easy to destroy a mounted storage pool, removing all metadata about the ZFS device. This command destroys any data contained in the pool.<br />
<br />
# zpool destroy <pool><br />
<br />
and now when checking the status<br />
<br />
{{hc|# zpool status|no pools available}}<br />
<br />
To find the name of your pool, see [[#Check zfs pool status]].<br />
<br />
=== Export a storage pool ===<br />
<br />
If you are going to use the pool on a different system, export it first. Exporting unmounts the pool's datasets and clears the hostid that marks ownership, so the pool can later be imported cleanly:<br />
<br />
 # zpool export <pool><br />
<br />
=== Swap partition ===<br />
<br />
ZFS does not allow the use of swap files, but you can use a ZFS volume (ZVOL) as a swap partition. It is important to set the ZVOL block size to match the system page size; for x86_64 systems that is 4k.<br />
<br />
Create an 8 GiB zfs volume:<br />
 # zfs create -V 8G -b 4K <pool>/swap<br />
<br />
Prepare it as a swap partition:<br />
 # mkswap /dev/zvol/<pool>/swap<br />
<br />
Enable swap:<br />
 # swapon /dev/zvol/<pool>/swap<br />
<br />
To make it permanent, add a line to {{ic|/etc/fstab}}:<br />
 /dev/zvol/<pool>/swap none swap defaults 0 0<br />
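<br />
The page size claimed above can be confirmed before creating the ZVOL; a quick check that needs no ZFS at all:<br />

```shell
# Print the kernel page size; the ZVOL block size (-b) should match it.
page=$(getconf PAGESIZE)
echo "Page size: ${page} bytes"
```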
<br />
==Troubleshooting==<br />
<br />
=== does not contain an EFI label ===<br />
<br />
The following error may occur when attempting to create a zpool:<br />
<br />
/dev/disk/by-id/<id> does not contain an EFI label but it may contain partition<br />
<br />
The way to overcome this is to use <code>-f</code> with the {{ic|zpool create}} command.<br />
<br />
=== No hostid found ===<br />
<br />
An error that occurs at boot with the following lines appearing before initscript output:<br />
<br />
ZFS: No hostid found on kernel command line or /etc/hostid.<br />
<br />
This warning occurs because the ZFS module does not have access to the SPL hostid. There are two solutions for this. You can either place your SPL hostid in the [[kernel parameters]] in your boot loader, for example by adding <code>spl.spl_hostid=0x00bab10c</code>.<br />
<br />
The other solution is to make sure that there is a hostid in <code>/etc/hostid</code>, and then regenerate the initramfs image, which will copy the hostid into it:<br />
<br />
# mkinitcpio -p linux<br />
<br />
=== On boot the zfs pool does not mount stating: "pool may be in use from other system" ===<br />
<br />
You can always ignore the check adding {{ic|zfs_force&#61;1}} in your [[kernel parameters]], but it is not advisable as a permanent solution.<br />
<br />
First of all, double-check that you actually exported the pool correctly. Exporting the zpool clears the hostid marking the ownership, so during the first boot the zpool should mount correctly. If it does not, there is some other problem.<br />
<br />
Reboot again. If the zfs pool refuses to mount, it means your hostid is not yet correctly set in the early boot phase, and this confuses zfs. You have to manually tell zfs the correct number; once the hostid is consistent across reboots, the zpool will mount correctly.<br />
<br />
Boot using zfs_force and write down your hostid. This one is just an example.<br />
% hostid<br />
0a0af0f8<br />
Follow the previous section to set it.<br />
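<br />
The value written down can be turned into the kernel parameter from the previous section directly; a sketch assuming the coreutils {{ic|hostid}} command is available:<br />

```shell
# Print the spl.spl_hostid kernel parameter for the current machine.
param="spl.spl_hostid=0x$(hostid)"
echo "$param"
```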
<br />
== Tips and tricks ==<br />
<br />
=== Emergency chroot repair with archzfs ===<br />
<br />
Here is how to use the archiso to get into your ZFS filesystem for maintenance.<br />
<br />
Boot the latest archiso and bring up your network<br />
<br />
# wifi-menu<br />
# ip link set eth0 up<br />
<br />
Test the network connection<br />
<br />
# ping google.com<br />
<br />
Sync the pacman package database<br />
<br />
# pacman -Syy<br />
<br />
(optional) Install your favorite text editor<br />
<br />
# pacman -S vim<br />
<br />
Add archzfs archiso repository to {{ic|pacman.conf}}<br />
<br />
{{hc|/etc/pacman.conf|<nowiki><br />
[archzfs]<br />
Server = http://demizerone.com/$repo/archiso/$arch</nowiki>}}<br />
<br />
Sync the pacman package database<br />
<br />
# pacman -Syy<br />
<br />
Install the ZFS package group<br />
<br />
# pacman -S archzfs<br />
<br />
Load the ZFS kernel modules<br />
<br />
# modprobe zfs<br />
<br />
Import your pool<br />
<br />
# zpool import -a -R /mnt<br />
<br />
Mount your boot partitions (if you have them)<br />
<br />
# mount /dev/sda2 /mnt/boot<br />
# mount /dev/sda1 /mnt/boot/efi<br />
<br />
Chroot into your zfs filesystem<br />
<br />
# arch-chroot /mnt /bin/bash<br />
<br />
Check your kernel version<br />
<br />
# pacman -Qi linux<br />
# uname -r<br />
<br />
uname will show the kernel version of the archiso. If they are different, you will need to run depmod (in the chroot) with the correct kernel version of your chroot installation:<br />
<br />
# depmod -a 3.6.9-1-ARCH (version gathered from pacman -Qi linux)<br />
<br />
This will load the correct kernel modules for the kernel version installed in your chroot installation.<br />
<br />
Regenerate your ramdisk<br />
<br />
# mkinitcpio -p linux<br />
<br />
There should be no errors.<br />
<br />
== See also ==<br />
<br />
* [http://zfsonlinux.org/ ZFS on Linux]<br />
* [http://zfsonlinux.org/faq.html ZFS on Linux FAQ]<br />
* [http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/filesystems-zfs.html FreeBSD Handbook -- The Z File System]<br />
* [http://docs.oracle.com/cd/E19253-01/819-5461/index.html Oracle Solaris ZFS Administration Guide]<br />
* [http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide Solaris Internals -- ZFS Troubleshooting Guide]</div>Pklaushttps://wiki.archlinux.org/index.php?title=Multi_Router_Traffic_Grapher&diff=199959Multi Router Traffic Grapher2012-05-01T23:54:47Z<p>Pklaus: Minor note about SNMP configuration</p>
<hr />
<div>[[Category:Networking]]<br />
{{i18n|Mrtg}}<br />
{{Expansion}}<br />
== Server Setup ==<br />
This document assumes that you already have [https://wiki.archlinux.org/index.php/Apache_and_FastCGI Apache] and [https://wiki.archlinux.org/index.php/Snmpd net-snmp] installed, working, and configured properly.<br />
<br />
The following should all be performed as root.<br />
<br />
* Install the necessary programs<br />
# pacman -S mrtg perl-net-snmp<br />
<br />
* create an mrtg user <br />
# useradd -d /srv/http/mrtg mrtg<br />
<br />
* create the user home directory and change its ownership to the new user<br />
# mkdir /srv/http/mrtg/<br />
# chown mrtg:mrtg /srv/http/mrtg<br />
<br />
<br />
== Apache configuration ==<br />
<br />
As far as the Apache configuration is concerned, we simply need to add an alias pointing to the location of the HTML files.<br />
<br />
The configuration should look like this:<br />
<br />
<br />
Alias /mrtg /srv/http/mrtg/html/<br />
<Directory "/srv/http/mrtg/html/"><br />
AllowOverride None<br />
Options None<br />
DirectoryIndex index.html<br />
Order allow,deny<br />
Allow from all<br />
</Directory><br />
<br />
== MRTG Setup ==<br />
There are many ways to configure MRTG for your local server. The approach described here is the one most easily extended to other servers and network appliances if needed.<br />
<br />
The following should all be performed as the mrtg user we created.<br />
<br />
* create an HTML directory to hold the png files and the index.html file<br />
# mkdir /srv/http/mrtg/html<br />
<br />
Now we will begin dealing with the application scripts. First, we will create a basic mrtg.cfg file.<br />
<br />
* The following script call will scan our localhost for its interfaces and create for us the relevant configuration for each interface; ''public'' is the community name set for the local SNMP access:<br />
# cfgmaker --output=/srv/http/mrtg/mrtg.cfg --ifref=name --ifref=descr --global "WorkDir: /srv/http/mrtg" public@localhost<br />
:* The mrtg.cfg file now contains entries for all the server's interfaces. We do not need the "lo" interface, so we are going to delete it and edit the global configuration.<br />
<br />
== mrtg.cfg Global configuration ==<br />
<br />
Remove the lines that are irrelevant to the interfaces and add the following lines at the top:<br />
<br />
### Global configuration ###<br />
<br />
LoadMIBs: /usr/share/snmp/mibs/UCD-SNMP-MIB.txt<br />
EnableIPv6: no<br />
HtmlDir: /srv/http/mrtg/html<br />
ImageDir: /srv/http/mrtg/html<br />
LogDir: /srv/http/mrtg<br />
ThreshDir: /srv/http/mrtg<br />
RunAsDaemon: Yes<br />
Interval: 5<br />
Refresh: 600<br />
<br />
<br />
The global configuration lines mean:<br />
<br />
:1) load the Linux MIB into MRTG<br />
:2) enable/disable IPv6<br />
:3) the HTML home directory<br />
:4) the png files home directory<br />
:5) the location of the log files<br />
:6) the thresh directory<br />
:7) whether or not to run the application as a daemon, in this case: yes<br />
:8) the daemon polling interval in minutes (minimum 5)<br />
:9) the interval, in seconds, at which the HTML pages tell the browser to refresh<br />
<br />
<br />
== Resource Monitoring ==<br />
<br />
Now that we have the global configuration set we need to add the resources and devices we want to monitor.<br />
<br />
In this tutorial we are going to monitor:<br />
<br />
:1)CPU<br />
:2)Memory Usage<br />
:3)swap<br />
:4)Number of Processes<br />
:5)Total TCP Established Connections<br />
:6)Users Count<br />
:7)the server mount points<br />
:8)the server interfaces<br />
<br />
=== CPU Monitoring ===<br />
<br />
To monitor the CPU, add the following lines:<br />
<br />
Target[localhost.cpu]:ssCpuRawUser.0&ssCpuRawUser.0:public@127.0.0.1 + ssCpuRawSystem.0&ssCpuRawSystem.0:public@127.0.0.1 +\ <br />
ssCpuRawNice.0&ssCpuRawNice.0:public@127.0.0.1<br />
RouterUptime[localhost.cpu]: public@127.0.0.1<br />
MaxBytes[localhost.cpu]: 100<br />
Title[localhost.cpu]: CPU Load<br />
PageTop[localhost.cpu]: Active CPU Load %<br />
Unscaled[localhost.cpu]: ymwd<br />
ShortLegend[localhost.cpu]: %<br />
YLegend[localhost.cpu]: CPU Utilization<br />
Legend1[localhost.cpu]: Active CPU in % (Load)<br />
Legend2[localhost.cpu]:<br />
Legend3[localhost.cpu]:<br />
Legend4[localhost.cpu]:<br />
LegendI[localhost.cpu]: Active<br />
LegendO[localhost.cpu]:<br />
Options[localhost.cpu]: growright,nopercent<br />
<br />
<br />
=== Memory usage ===<br />
<br />
To monitor memory usage, add the following lines:<br />
<br />
# get memory Usage<br />
Target[localhost.memtotal]: ( .1.3.6.1.4.1.2021.4.5.0&.1.3.6.1.4.1.2021.4.5.0:public@localhost ) - \<br />
( .1.3.6.1.4.1.2021.4.6.0&.1.3.6.1.4.1.2021.4.6.0:public@localhost )<br />
PageTop[localhost.memtotal]: Memory Usage<br />
Options[localhost.memtotal]: nopercent,growright,gauge<br />
Title[localhost.memtotal]: Memory Usage<br />
MaxBytes[localhost.memtotal]: 100000000<br />
kMG[localhost.memtotal]: k,M,G,T,P,X<br />
YLegend[localhost.memtotal]: bytes<br />
ShortLegend[localhost.memtotal]: bytes<br />
LegendI[localhost.memtotal]: Memory Usage: <br />
LegendO[localhost.memtotal]:<br />
Legend1[localhost.memtotal]: Memory Usage, not including swap, in bytes<br />
Colours[localhost.memtotal]: Blue#1000ff, Black#000000, Gray#CCCCCC, Yellow#FFFF00<br />
<br />
=== Swap Usage ===<br />
<br />
For swap usage, add the following lines:<br />
# get swap memory<br />
Target[localhost.swap]:( .1.3.6.1.4.1.2021.4.3.0&.1.3.6.1.4.1.2021.4.3.0:public@localhost ) - \<br />
( .1.3.6.1.4.1.2021.4.4.0&.1.3.6.1.4.1.2021.4.4.0:public@localhost)<br />
PageTop[localhost.swap]: Swap Usage<br />
Options[localhost.swap]: nopercent,growright,gauge,noinfo<br />
Title[localhost.swap]: Swap Usage<br />
MaxBytes[localhost.swap]: 100000000 <br />
kMG[localhost.swap]: k,M,G,T,P,X<br />
YLegend[localhost.swap]: bytes<br />
ShortLegend[localhost.swap]: bytes<br />
LegendI[localhost.swap]: Swap Usage:<br />
LegendO[localhost.swap]:<br />
Legend1[localhost.swap]: Swap memory avail, in bytes<br />
Colours[localhost.swap]: Blue#1000ff,Violet#ff00ff,Black#000000, Gray#CCCCCC<br />
<br />
In the Target line, some calculation is done; MRTG knows how to combine the values returned for the given OIDs.<br />
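<br />
The subtraction can be illustrated with made-up numbers (per UCD-SNMP-MIB, .1.3.6.1.4.1.2021.4.5.0 and .1.3.6.1.4.1.2021.4.6.0 are memTotalReal and memAvailReal; the figures below are invented for illustration):<br />

```shell
# memTotalReal minus memAvailReal gives used memory, which is what
# the Target line above graphs. Values here are hypothetical.
mem_total_kb=8192000
mem_avail_kb=2048000
mem_used_kb=$((mem_total_kb - mem_avail_kb))
echo "used: ${mem_used_kb} kB"
```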
<br />
=== number of processes ===<br />
<br />
To get the number of running processes, we do something slightly different here:<br />
<br />
# get number of processes running<br />
Target[localhost.procs]: `/usr/local/mrtg/linux_proc.pl`<br />
Title[localhost.procs]: Process Statistics<br />
PageTop[localhost.procs]: Process Statistics<br />
MaxBytes[localhost.procs]: 10000<br />
YLegend[localhost.procs]: Processes <br />
LegendI[localhost.procs]: &nbsp; Blocked Processes:<br />
LegendO[localhost.procs]: &nbsp; Run Queue:<br />
Legend1[localhost.procs]: Number of Blocked Processes <br />
Legend2[localhost.procs]: Number of Processes in Run Queue<br />
Legend3[localhost.procs]: Maximal Blocked Processes<br />
Legend4[localhost.procs]: Maximal Processes in Run Queue<br />
Options[localhost.procs]: growright, integer, nopercent, gauge<br />
<br />
<br />
As we can see, here we are calling the script linux_proc.pl, which is written in Perl and returns an integer representing the number of processes.<br />
<br />
The content of the script is:<br />
<br />
 #!/usr/bin/perl<br />
 # Count the lines of `ps -ef` output (this includes the header line).<br />
 open(COMD,"ps -ef | wc -l|");<br />
 $num = <COMD>;<br />
 close(COMD);<br />
 <br />
 print int($num);<br />
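<br />
The pipeline the script wraps can be sketched in plain shell; note that {{ic|ps -ef}} prints a header line, so the count is one higher than the number of processes (the sample output here is invented):<br />

```shell
# Count the lines of (sample) `ps -ef` output, header included,
# exactly as the Perl script's `ps -ef | wc -l` pipeline does.
sample='UID        PID  PPID CMD
root         1     0 /sbin/init
root       123     1 /usr/bin/crond'
count=$(printf '%s\n' "$sample" | wc -l)
echo "$count"
```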
<br />
=== established connections ===<br />
<br />
To get a graph of established connections, we proceed the same way as in the previous section:<br />
<br />
# get number of established connections<br />
Target[localhost.estconn]: `/usr/local/mrtg/linux_estconn.pl`<br />
Title[localhost.estconn]: Established connections<br />
PageTop[localhost.estconn]: Established connections<br />
MaxBytes[localhost.estconn]: 100000<br />
YLegend[localhost.estconn]: Established connections<br />
LegendI[localhost.estconn]: &nbsp; Established connections: <br />
 Legend1[localhost.estconn]: Number of Established connections: <br />
Options[localhost.estconn]: growright, integer, nopercent, gauge<br />
Colours[localhost.estconn]: Red#FF0000,Blue#0066CC,Black#000000, White#FFFFFF<br />
<br />
<br />
The content of the file linux_estconn.pl is:<br />
<br />
 #!/usr/bin/perl<br />
 # Count TCP connections in the ESTABLISHED state.<br />
 open(COMD,"netstat -an | grep ESTABLISHED | wc -l|");<br />
 $num = <COMD>;<br />
 close(COMD);<br />
 <br />
 print int($num);<br />
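<br />
The same filter can be exercised in plain shell against a canned {{ic|netstat -an}} excerpt (the addresses below are invented):<br />

```shell
# Count ESTABLISHED connections in (sample) `netstat -an` output,
# mirroring the script's `netstat -an | grep ESTABLISHED | wc -l`.
sample='tcp  0  0 192.168.1.2:22   192.168.1.9:50514  ESTABLISHED
tcp  0  0 0.0.0.0:80       0.0.0.0:*          LISTEN
tcp  0  0 192.168.1.2:443  192.168.1.7:41200  ESTABLISHED'
established=$(printf '%s\n' "$sample" | grep -c ESTABLISHED)
echo "$established"
```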
<br />
=== users count ===<br />
<br />
For the users count, once again we use a Perl script to produce an integer output.<br />
<br />
To the mrtg configuration we need to add:<br />
<br />
<br />
# get number of current users<br />
Target[localhost.users]: `/usr/local/mrtg/linux_users.pl`<br />
Title[localhost.users]: Logged-in users<br />
PageTop[localhost.users]: Number of users<br />
MaxBytes[localhost.users]: 100000<br />
YLegend[localhost.users]: users count <br />
Legend1[localhost.users]: Logged-in users count<br />
Options[localhost.users]: growright, integer, nopercent, gauge<br />
Colours[localhost.users]: Red#FF0000,White#FFFFFF,Blue#0066CC,Black#000000<br />
<br />
<br />
The content of linux_users.pl is:<br />
<br />
#!/usr/bin/perl<br />
# count logged-in user sessions: "w" output minus its two header lines<br />
open(COMD,"w | grep -v load | grep -v USER | wc -l|");<br />
$num = <COMD>;<br />
close(COMD);<br />
<br />
print int($num);<br />
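The same count can be obtained more directly with who, which prints one line per session and no header, so no grep filtering is needed. A sketch of an alternative (my suggestion, not part of the original setup):<br />

```shell
#!/bin/sh
# Count logged-in user sessions; "who" prints exactly one line per session.
who | wc -l
```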
<br />
=== Monitor mount points ===<br />
<br />
In order to monitor mount points, we first need to make sure that SNMP is exposing the relevant information.<br />
To find the mount point OIDs, first list all the mount points with the command:<br />
<br />
snmpwalk -v 2c -c public localhost mount<br />
<br />
This displays all of the server's mount points and their mount locations.<br />
<br />
To monitor a particular mount point, take the last index number from the result and<br />
append it to the following two OIDs (dskUsed and dskTotal from the UCD-SNMP-MIB dskTable):<br />
<br />
.1.3.6.1.4.1.2021.9.1.8.<br />
.1.3.6.1.4.1.2021.9.1.6.<br />
<br />
So the mrtg.cfg section for the root filesystem looks like this:<br />
<br />
<br />
# monitor root FS <br />
Target[localhost.rootfs]: .1.3.6.1.4.1.2021.9.1.8.1&.1.3.6.1.4.1.2021.9.1.6.1:public@localhost<br />
PageTop[localhost.rootfs]: Root FS Usage<br />
Options[localhost.rootfs]: nopercent,growright,gauge,noinfo<br />
Title[localhost.rootfs]: Root FS Usage<br />
MaxBytes[localhost.rootfs]: 100000000<br />
YLegend[localhost.rootfs]: kBytes<br />
ShortLegend[localhost.rootfs]: kB<br />
LegendI[localhost.rootfs]: Root FS Usage:<br />
Colours[localhost.rootfs]: Yellow#FFFF00, White#FFFFFF, Gray#CCCCCC, Blue#1000ff<br />
<br />
=== Server interface ===<br />
<br />
The per-interface configuration was generated automatically when we ran the cfgmaker command earlier.<br />
<br />
== Startup script ==<br />
<br />
If you want the MRTG daemon to start at boot, add the following startup script:<br />
<br />
vi /etc/rc.d/mrtg<br />
<br />
<nowiki>#!/bin/bash <br />
. /etc/rc.conf<br />
. /etc/rc.d/functions<br />
LANG=C<br />
USER=mrtg<br />
MRTG=/usr/bin/mrtg<br />
MRTGCFG=/srv/http/mrtg/mrtg.cfg<br />
daemon_name=mrtg<br />
Start() {<br />
stat_busy "Starting the MRTG daemon"<br />
su - ${USER} -c "env LANG=${LANG} ${MRTG} ${MRTGCFG} > /dev/null"<br />
RETVAL=$?;<br />
if [[ $RETVAL -eq 0 ]]; then<br />
add_daemon $daemon_name<br />
stat_done<br />
else<br />
stat_fail<br />
exit 1<br />
fi<br />
}<br />
Stop() {<br />
stat_busy "Stopping the MRTG Daemon"<br />
PID=`ps -ef | grep mrtg.cfg | grep -v grep | awk '{print $2}'`<br />
if [[ ! -z ${PID} ]]; then<br />
kill ${PID}<br />
RETVAL=$?;<br />
if [[ $RETVAL -eq 0 ]]; then<br />
rm_daemon $daemon_name<br />
stat_done<br />
else<br />
stat_fail<br />
exit 1<br />
fi<br />
fi<br />
}<br />
case "$1" in<br />
start)<br />
Start;<br />
;;<br />
stop)<br />
Stop;<br />
;;<br />
restart)<br />
Stop;<br />
Start;<br />
;;<br />
*)<br />
echo "Usage: mrtg {start|stop|restart}";<br />
;;<br />
esac</nowiki></div>Pklaushttps://wiki.archlinux.org/index.php?title=Multi_Router_Traffic_Grapher&diff=199958Multi Router Traffic Grapher2012-05-01T23:54:08Z<p>Pklaus: Some typos corrected + case corrections</p>
<hr />
<div>[[Category:Networking]]<br />
{{i18n|Mrtg}}<br />
{{Expansion}}<br />
== Server Setup ==<br />
This document assumes that you already have [https://wiki.archlinux.org/index.php/Apache_and_FastCGI Apache] and [https://wiki.archlinux.org/index.php/Snmpd net-snmp] installed and configured properly.<br />
<br />
The following should all be performed as root.<br />
<br />
* Install the necessary programs:<br />
# pacman -S mrtg perl-net-snmp<br />
<br />
* Create an mrtg user:<br />
# useradd -d /srv/http/mrtg mrtg<br />
<br />
* Create the user's home directory and change its ownership to that user:<br />
# mkdir /srv/http/mrtg/<br />
# chown mrtg:mrtg /srv/http/mrtg<br />
<br />
<br />
== Apache configuration ==<br />
<br />
As far as the Apache configuration is concerned, we simply need to add an alias pointing at the location of the HTML files.<br />
<br />
The configuration should look like this:<br />
<br />
<br />
Alias /mrtg /srv/http/mrtg/html/<br />
<Directory "/srv/http/mrtg/html/"><br />
AllowOverride None<br />
Options None<br />
DirectoryIndex index.html<br />
Order allow,deny<br />
Allow from all<br />
</Directory><br />
<br />
== MRTG Setup ==<br />
There are many ways to configure MRTG for your local server. Described here is an approach that is easy to extend later to other servers and network appliances.<br />
<br />
The following should all be performed as the mrtg user we created.<br />
<br />
* Create an HTML directory to hold the PNG files and the index.html file:<br />
# mkdir /srv/http/mrtg/html<br />
<br />
Now we will set up the application scripts.<br />
First we create a basic mrtg.cfg file.<br />
<br />
* The following command scans localhost for its interfaces and creates the relevant configuration for each interface:<br />
# cfgmaker --output=/srv/http/mrtg/mrtg.cfg --ifref=name --ifref=descr --global "WorkDir: /srv/http/mrtg" public@localhost<br />
:* The mrtg.cfg file now contains all of the server's interfaces. We do not need the "lo" interface, so we are going to delete it and edit the global configuration.<br />
<br />
== mrtg.cfg Global configuration ==<br />
<br />
Remove the lines for the unwanted interfaces and add the following lines at the top:<br />
<br />
### Global configuration ###<br />
<br />
LoadMIBs: /usr/share/snmp/mibs/UCD-SNMP-MIB.txt<br />
EnableIPv6: no<br />
HtmlDir: /srv/http/mrtg/html<br />
ImageDir: /srv/http/mrtg/html<br />
LogDir: /srv/http/mrtg<br />
ThreshDir: /srv/http/mrtg<br />
RunAsDaemon: Yes<br />
Interval: 5<br />
Refresh: 600<br />
<br />
<br />
The global configuration lines mean:<br />
<br />
:1) load the Linux (UCD-SNMP) MIB into MRTG<br />
:2) enable or disable IPv6<br />
:3) the HTML home directory<br />
:4) the home directory for the PNG files<br />
:5) the location of the log files<br />
:6) the threshold directory<br />
:7) whether or not to run the application as a daemon; in this case: yes<br />
:8) the daemon polling interval in minutes (minimum 5)<br />
:9) the interval, in seconds, at which browsers should refresh the HTML pages<br />
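If you would rather not run MRTG as a daemon, the classic alternative is to set RunAsDaemon to No and let cron invoke MRTG at the polling interval. A hypothetical crontab entry (an alternative, not part of this setup; LANG=C because MRTG refuses to run under UTF-8 locales):<br />

```shell
# Run MRTG every 5 minutes under the C locale (crontab fragment)
*/5 * * * * env LANG=C /usr/bin/mrtg /srv/http/mrtg/mrtg.cfg
```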
<br />
<br />
== Resource Monitoring ==<br />
<br />
Now that we have the global configuration set we need to add the resources and devices we want to monitor.<br />
<br />
In this tutorial we are going to monitor:<br />
<br />
:1) CPU<br />
:2) memory usage<br />
:3) swap<br />
:4) number of processes<br />
:5) total established TCP connections<br />
:6) users count<br />
:7) the server mount points<br />
:8) the server interfaces<br />
<br />
=== CPU Monitoring ===<br />
<br />
To monitor the CPU, add the following lines:<br />
<br />
Target[localhost.cpu]:ssCpuRawUser.0&ssCpuRawUser.0:public@127.0.0.1 + ssCpuRawSystem.0&ssCpuRawSystem.0:public@127.0.0.1 +\ <br />
ssCpuRawNice.0&ssCpuRawNice.0:public@127.0.0.1<br />
RouterUptime[localhost.cpu]: public@127.0.0.1<br />
MaxBytes[localhost.cpu]: 100<br />
Title[localhost.cpu]: CPU Load<br />
PageTop[localhost.cpu]: Active CPU Load %<br />
Unscaled[localhost.cpu]: ymwd<br />
ShortLegend[localhost.cpu]: %<br />
YLegend[localhost.cpu]: CPU Utilization<br />
Legend1[localhost.cpu]: Active CPU in % (Load)<br />
Legend2[localhost.cpu]:<br />
Legend3[localhost.cpu]:<br />
Legend4[localhost.cpu]:<br />
LegendI[localhost.cpu]: Active<br />
LegendO[localhost.cpu]:<br />
Options[localhost.cpu]: growright,nopercent<br />
<br />
<br />
=== Memory usage ===<br />
<br />
To monitor the memory usage, add the following lines:<br />
<br />
# get memory Usage<br />
Target[localhost.memtotal]: ( .1.3.6.1.4.1.2021.4.5.0&.1.3.6.1.4.1.2021.4.5.0:public@localhost ) - \<br />
( .1.3.6.1.4.1.2021.4.6.0&.1.3.6.1.4.1.2021.4.6.0:public@localhost )<br />
PageTop[localhost.memtotal]: Memory Usage<br />
Options[localhost.memtotal]: nopercent,growright,gauge<br />
Title[localhost.memtotal]: Memory Usage<br />
MaxBytes[localhost.memtotal]: 100000000<br />
kMG[localhost.memtotal]: k,M,G,T,P,X<br />
YLegend[localhost.memtotal]: bytes<br />
ShortLegend[localhost.memtotal]: bytes<br />
LegendI[localhost.memtotal]: Memory Usage: <br />
LegendO[localhost.memtotal]:<br />
Legend1[localhost.memtotal]: Memory Usage, not including swap, in bytes<br />
Colours[localhost.memtotal]: Blue#1000ff, Black#000000, Gray#CCCCCC, Yellow#FFFF00<br />
<br />
=== Swap Usage ===<br />
<br />
For swap usage, add the following lines:<br />
# get swap memory<br />
Target[localhost.swap]:( .1.3.6.1.4.1.2021.4.3.0&.1.3.6.1.4.1.2021.4.3.0:public@localhost ) - \<br />
( .1.3.6.1.4.1.2021.4.4.0&.1.3.6.1.4.1.2021.4.4.0:public@localhost)<br />
PageTop[localhost.swap]: Swap Usage<br />
Options[localhost.swap]: nopercent,growright,gauge,noinfo<br />
Title[localhost.swap]: Swap Usage<br />
MaxBytes[localhost.swap]: 100000000 <br />
kMG[localhost.swap]: k,M,G,T,P,X<br />
YLegend[localhost.swap]: bytes<br />
ShortLegend[localhost.swap]: bytes<br />
LegendI[localhost.swap]: Swap Usage:<br />
LegendO[localhost.swap]:<br />
Legend1[localhost.swap]: Swap memory avail, in bytes<br />
Colours[localhost.swap]: Blue#1000ff,Violet#ff00ff,Black#000000, Gray#CCCCCC<br />
<br />
Note that some arithmetic is performed in the Target lines above: MRTG can combine the values returned by the OIDs using simple expressions.<br />
<br />
</div>Pklaushttps://wiki.archlinux.org/index.php?title=Python/Virtual_environment&diff=195854Python/Virtual environment2012-04-22T18:50:38Z<p>Pklaus: The --no-site-packages flag is deprecated; it is now the default behavior.</p>
<hr />
<div>[[Category:Development (English)]]<br />
{{i18n|Python VirtualEnv}}<br />
{{Out of date}}<br />
<br />
''virtualenv'' is a Python tool written by Ian Bicking and used to create isolated environments for Python, in which you can install packages without interfering with other virtualenvs or with the system Python's packages.<br />
The present article covers the installation of the ''virtualenv'' package and its companion command line utility ''virtualenvwrapper'', designed by Doug Hellmann to (greatly) improve your work flow. A quick how-to to help you begin working inside a virtual environment is then provided.<br />
<br />
==Virtual Environments at a glance==<br />
''virtualenv'' is a tool designed to address the problem of dealing with package dependencies while maintaining the different versions that projects need. For example, suppose you work on two Django web sites: one that needs Django 1.2 and another that needs the good old 0.96. You have no way to keep both versions if you install them into /usr/lib/python2/site-packages. Thanks to virtualenv you can create two isolated environments and have the two development environments play along nicely.<br />
<br />
''virtualenvwrapper'' takes ''virtualenv'' a step further by providing convenient commands you can invoke from your favorite console.<br />
<br />
== Virtualenv ==<br />
<br />
Currently ''virtualenv'' only supports Python up to version 2.7. If you really need virtual environment on Python 3, check out the [http://bitbucket.org/brandon/virtualenv3 virtualenv3] project on Bitbucket.<br />
<br />
===Installation===<br />
Simply install python2-virtualenv from the community repository and you're done:<br />
# pacman -S python2-virtualenv<br />
<br />
===Basic Usage===<br />
An extended tutorial on how to use ''virtualenv'' for sandboxing can be found [http://wiki.pylonshq.com/display/pylonscookbook/Using+a+Virtualenv+Sandbox here].<br />
<br />
The typical use case is:<br />
* Create a folder for the new virtualenv:<br />
$ mkdir -p ~/.virtualenvs/my_env<br />
* Create the virtualenv: <br />
$ virtualenv2 ~/.virtualenvs/my_env<br />
* Activate the virtualenv: <br />
$ source ~/.virtualenvs/my_env/bin/activate<br />
* Install some package inside the virtualenv (say, Django):<br />
(my_env)$ pip install django<br />
* Do your things<br />
* Leave the virtualenv:<br />
(my_env)$ deactivate<br />
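Activating an environment (step 3 above) is plain environment-variable manipulation; conceptually, bin/activate does something like the following simplified sketch (the path is hypothetical, and the real script also adjusts PS1 and defines deactivate):<br />

```shell
# What "source bin/activate" essentially does (simplified):
VIRTUAL_ENV="$HOME/.virtualenvs/my_env"   # hypothetical env location
PATH="$VIRTUAL_ENV/bin:$PATH"             # env's python/pip now win PATH lookup
export VIRTUAL_ENV PATH
```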
<br />
== Virtualenvwrapper ==<br />
<br />
''virtualenvwrapper'' allows more natural command line interaction with your virtualenvs by exposing several useful commands to create, activate and remove virtualenvs. Like ''virtualenv'', this package does not currently support Python 3.x.<br />
<br />
===Installation===<br />
[[pacman|Install]] the {{Pkg|python-virtualenvwrapper}} package from the [[Official Repositories|official repositories]]. If you have not installed {{Pkg|python-virtualenv}} yet, {{Pkg|python-virtualenvwrapper}} will be installed now as a dependency.<br />
<br />
Now add the following lines to your {{ic|~/.bashrc}}:<br />
export WORKON_HOME=~/.virtualenvs<br />
source /usr/bin/virtualenvwrapper.sh<br />
<br />
Re-open your console and create the {{ic|WORKON_HOME}} folder:<br />
$ mkdir $WORKON_HOME<br />
<br />
===Basic Usage===<br />
The main information source on virtualenvwrapper usage (and extension capability) is Doug Hellmann's [http://www.doughellmann.com/docs/virtualenvwrapper/ page].<br />
<br />
* Create the virtualenv:<br />
$ mkvirtualenv -p python2.7 my_env<br />
* Activate the virtualenv:<br />
$ workon my_env<br />
* Install some package inside the virtualenv (say, Django):<br />
(my_env)$ pip install django<br />
* Do your things<br />
* Leave the virtualenv: <br />
(my_env)$ deactivate<br />
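Environments managed by ''virtualenvwrapper'' are simply directories under {{ic|$WORKON_HOME}}, and running {{ic|workon}} without arguments lists them. A throwaway illustration of that layout (using a temporary directory as a stand-in, not your real ~/.virtualenvs):<br />

```shell
#!/bin/sh
# Environments are just directories under $WORKON_HOME; list them the way
# "workon" (with no arguments) reports them.
WORKON_HOME=$(mktemp -d)        # stand-in for ~/.virtualenvs
mkdir "$WORKON_HOME/my_env"     # what mkvirtualenv would create
ls -1 "$WORKON_HOME"            # prints: my_env
rm -r "$WORKON_HOME"
```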
<br />
== See Also ==<br />
*[http://pypi.python.org/pypi/virtualenv virtualenv Pypi page]<br />
*[http://wiki.pylonshq.com/display/pylonscookbook/Using+a+Virtualenv+Sandbox Tutorial for virtualenv]<br />
*[http://www.doughellmann.com/docs/virtualenvwrapper/ virtualenvwrapper page at Doug Hellmann's]</div>Pklaus