User talk:Wolfdogg (2018-02-19T20:47:19Z)<p>Wolfdogg: Undo revision 462296 by Wolfdogg (talk) Putting back old wiki stuff, needed some of it as a guide.</p>
<hr />
<div>= Samba from OpenLDAP on Arch = <br />
== samba specific setup ==<br />
<br />
download smbldap-tools from the AUR: https://aur.archlinux.org/packages/sm/smbldap-tools/smbldap-tools.tar.gz<br />
run makepkg, and install any dependencies you need to complete the process<br />
<br />
= Jenkins =<br />
The conf file is located at /etc/conf.d/jenkins; open this file and look it over.<br />
JAVA=/usr/bin/java<br />
JAVA_ARGS=-Xmx512m<br />
JAVA_OPTS=<br />
JENKINS_USER=jenkins<br />
JENKINS_HOME=/var/lib/jenkins<br />
JENKINS_WAR=/usr/share/java/jenkins/jenkins.war<br />
JENKINS_WEBROOT=--webroot=/var/cache/jenkins<br />
JENKINS_PORT=--httpPort=8090<br />
JENKINS_AJPPORT=--ajp13Port=-1<br />
JENKINS_OPTS=<br />
JENKINS_COMMAND_LINE="$JAVA $JAVA_ARGS $JAVA_OPTS -jar $JENKINS_WAR $JENKINS_WEBROOT $JENKINS_PORT $JENKINS_AJPPORT $JENKINS_OPTS"<br />
Notice the location of the war file. Change to its directory, then run Jenkins from there.<br />
<br />
cd /usr/share/java/jenkins<br />
java -jar jenkins.war<br />
<br />
You can now log into Jenkins. Run directly like this it listens on its default port 8080 (the --httpPort=8090 setting above only applies when Jenkins is started through the conf file):<br />
http://localhost:8080<br />
<br />
=== create an automated script to start jenkins ===<br />
cd /usr/local/bin #(or share, depending on what's already in your path; run $ echo $PATH to find this info)<br />
open a new file<br />
vim startjenkins<br />
add the following to the file<br />
#!/bin/bash<br />
echo<br />
echo starting jenkins now<br />
java -jar /usr/share/java/jenkins/jenkins.war<br />
save and close vim<br />
:wq<br />
change ownership and permissions (655 would leave the owner unable to execute it)<br />
chown users:<yourusername> startjenkins<br />
chmod 755 startjenkins<br />
run it this way now<br />
$ startjenkins<br />
<br />
= Automount Samba Shares on boot =<br />
Using systemd, make an entry in fstab for the network share with the following options:<br />
* credentials, so the password is not stored in fstab itself<br />
* noauto, so the share is not mounted automatically at boot<br />
* nofail, so boot does not hang if the server is unreachable<br />
* x-systemd.automount, so the share is mounted on first access<br />
* a timeout, so a dead server does not block indefinitely<br />
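An entry following those options could look like this (a sketch, not a drop-in: the //myserver/backup share, the /mnt/backup mount point, and the credentials path are hypothetical placeholders; adjust timeouts to taste):<br />

```
# /etc/fstab: mount on first access, don't hang the boot if the server is down
//myserver/backup  /mnt/backup  cifs  credentials=/etc/samba/credentials,noauto,nofail,x-systemd.automount,x-systemd.idle-timeout=60,x-systemd.mount-timeout=30,_netdev  0  0
```

The credentials file holds the username= and password= lines; keep it readable by root only (chmod 600).<br />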
<br />
= New ZFS Setup =<br />
[root@falcon wolfdogg]# zpool create -f -m /san san ata-ST2000DM001-9YN164_W1E07E0G ata-WDC_WD20EZRX-00DC0B0_WD-WMC1T3458346 ata-WDC_WD20EZRX-19D8PB0_WD-WCC4M0428332<br />
[root@falcon wolfdogg]# zpool list<br />
NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT<br />
san 5.44T 604K 5.44T 0% 1.00x ONLINE -<br />
[root@falcon wolfdogg]# zfs list<br />
NAME USED AVAIL REFER MOUNTPOINT<br />
san 544K 5.35T 136K /san<br />
[root@falcon wolfdogg]# zpool status<br />
pool: san<br />
state: ONLINE<br />
scan: none requested<br />
config:<br />
NAME STATE READ WRITE CKSUM<br />
san ONLINE 0 0 0<br />
ata-ST2000DM001-9YN164_W1E07E0G ONLINE 0 0 0<br />
ata-WDC_WD20EZRX-00DC0B0_WD-WMC1T3458346 ONLINE 0 0 0<br />
ata-WDC_WD20EZRX-19D8PB0_WD-WCC4M0428332 ONLINE 0 0 0<br />
errors: No known data errors<br />
<br />
= ZFS add to linear span by id =<br />
$ cd /dev/disk/by-id/<br />
[root@falcon by-id]$ ls -la<br />
total 0<br />
drwxr-xr-x 2 root root 560 Oct 26 02:45 .<br />
drwxr-xr-x 8 root root 160 Oct 26 02:45 ..<br />
lrwxrwxrwx 1 root root 9 Oct 26 02:45 ata-ST2000DM001-1E6164_Z1E6EQ15 -> ../../sde<br />
lrwxrwxrwx 1 root root 9 Oct 26 02:45 ata-ST2000DM001-9YN164_W1E07E0G -> ../../sdc<br />
lrwxrwxrwx 1 root root 10 Oct 26 02:45 ata-ST2000DM001-9YN164_W1E07E0G-part1 -> ../../sdc1<br />
lrwxrwxrwx 1 root root 10 Oct 26 02:45 ata-ST2000DM001-9YN164_W1E07E0G-part9 -> ../../sdc9<br />
lrwxrwxrwx 1 root root 9 Oct 26 02:45 ata-ST3250823AS_5ND0MS6K -> ../../sda<br />
lrwxrwxrwx 1 root root 10 Oct 26 02:45 ata-ST3250823AS_5ND0MS6K-part1 -> ../../sda1<br />
lrwxrwxrwx 1 root root 10 Oct 26 02:45 ata-ST3250823AS_5ND0MS6K-part2 -> ../../sda2<br />
lrwxrwxrwx 1 root root 10 Oct 26 02:45 ata-ST3250823AS_5ND0MS6K-part3 -> ../../sda3<br />
lrwxrwxrwx 1 root root 10 Oct 26 02:45 ata-ST3250823AS_5ND0MS6K-part4 -> ../../sda4<br />
lrwxrwxrwx 1 root root 10 Oct 26 02:45 ata-ST3250823AS_5ND0MS6K-part5 -> ../../sda5<br />
lrwxrwxrwx 1 root root 9 Oct 26 02:45 ata-WDC_WD20EZRX-00DC0B0_WD-WMC1T3458346 -> ../../sdb<br />
lrwxrwxrwx 1 root root 10 Oct 26 02:45 ata-WDC_WD20EZRX-00DC0B0_WD-WMC1T3458346-part1 -> ../../sdb1<br />
lrwxrwxrwx 1 root root 10 Oct 26 02:45 ata-WDC_WD20EZRX-00DC0B0_WD-WMC1T3458346-part9 -> ../../sdb9<br />
lrwxrwxrwx 1 root root 9 Oct 26 02:45 ata-WDC_WD20EZRX-19D8PB0_WD-WCC4M0428332 -> ../../sdd<br />
lrwxrwxrwx 1 root root 10 Oct 26 02:45 ata-WDC_WD20EZRX-19D8PB0_WD-WCC4M0428332-part1 -> ../../sdd1<br />
lrwxrwxrwx 1 root root 10 Oct 26 02:45 ata-WDC_WD20EZRX-19D8PB0_WD-WCC4M0428332-part9 -> ../../sdd9<br />
lrwxrwxrwx 1 root root 9 Oct 26 02:45 wwn-0x5000c50045406de0 -> ../../sdc<br />
lrwxrwxrwx 1 root root 10 Oct 26 02:45 wwn-0x5000c50045406de0-part1 -> ../../sdc1<br />
lrwxrwxrwx 1 root root 10 Oct 26 02:45 wwn-0x5000c50045406de0-part9 -> ../../sdc9<br />
lrwxrwxrwx 1 root root 9 Oct 26 02:45 wwn-0x5000c500658cc2b2 -> ../../sde<br />
lrwxrwxrwx 1 root root 9 Oct 26 02:45 wwn-0x50014ee25e8d29db -> ../../sdd<br />
lrwxrwxrwx 1 root root 10 Oct 26 02:45 wwn-0x50014ee25e8d29db-part1 -> ../../sdd1<br />
lrwxrwxrwx 1 root root 10 Oct 26 02:45 wwn-0x50014ee25e8d29db-part9 -> ../../sdd9<br />
lrwxrwxrwx 1 root root 9 Oct 26 02:45 wwn-0x50014ee6034e422c -> ../../sdb<br />
lrwxrwxrwx 1 root root 10 Oct 26 02:45 wwn-0x50014ee6034e422c-part1 -> ../../sdb1<br />
lrwxrwxrwx 1 root root 10 Oct 26 02:45 wwn-0x50014ee6034e422c-part9 -> ../../sdb9<br />
[root@falcon by-id]$ zpool status<br />
pool: san<br />
state: ONLINE<br />
status: Some supported features are not enabled on the pool. The pool can<br />
still be used, but some features are unavailable.<br />
action: Enable all features using 'zpool upgrade'. Once this is done,<br />
the pool may no longer be accessible by software that does not support<br />
the features. See zpool-features(5) for details.<br />
scan: scrub repaired 0 in 15h41m with 0 errors on Sat Oct 22 11:11:13 2016<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
san ONLINE 0 0 0<br />
ata-ST2000DM001-9YN164_W1E07E0G ONLINE 0 0 0<br />
ata-WDC_WD20EZRX-00DC0B0_WD-WMC1T3458346 ONLINE 0 0 0<br />
ata-WDC_WD20EZRX-19D8PB0_WD-WCC4M0428332 ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
[root@falcon by-id]$ zpool add san wwn-0x5000c500658cc2b2 -Pn<br />
would update 'san' to the following configuration:<br />
san<br />
/dev/disk/by-id/ata-ST2000DM001-9YN164_W1E07E0G-part1<br />
/dev/disk/by-id/ata-WDC_WD20EZRX-00DC0B0_WD-WMC1T3458346-part1<br />
/dev/disk/by-id/ata-WDC_WD20EZRX-19D8PB0_WD-WCC4M0428332-part1<br />
/dev/disk/by-id/wwn-0x5000c500658cc2b2<br />
[root@falcon by-id]$ zpool add san wwn-0x5000c500658cc2b2 -P<br />
[root@falcon by-id]$ zpool status<br />
pool: san<br />
state: ONLINE<br />
status: Some supported features are not enabled on the pool. The pool can<br />
still be used, but some features are unavailable.<br />
action: Enable all features using 'zpool upgrade'. Once this is done,<br />
the pool may no longer be accessible by software that does not support<br />
the features. See zpool-features(5) for details.<br />
scan: scrub repaired 0 in 15h41m with 0 errors on Sat Oct 22 11:11:13 2016<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
san ONLINE 0 0 0<br />
ata-ST2000DM001-9YN164_W1E07E0G ONLINE 0 0 0<br />
ata-WDC_WD20EZRX-00DC0B0_WD-WMC1T3458346 ONLINE 0 0 0<br />
ata-WDC_WD20EZRX-19D8PB0_WD-WCC4M0428332 ONLINE 0 0 0<br />
wwn-0x5000c500658cc2b2 ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
<br />
<br />
= Git three stage web deployment =<br />
<br />
I'm now using git to handle my web repos.<br />
<br />
Currently my server environment is not hosted live. The intention is to have a secure development environment containing the sandbox and stage environments. Once I fully explore this setup I will add the 3rd step, pushing stage to a live shared host using the same methods described here.<br />
<br />
<br />
This setup uses /srv/http/sandbox for the development directory. If you insist on developing out of /home/<username>/public_html, you can either substitute it for every mention of sandbox in this wiki, or turn your public_html folder into a symlink to /srv/http/sandbox after completing these exercises.<br />
<br />
<br />
The environment we are setting up with this wiki has the following features:<br />
* Central location for git repositories (--separate-git-dir) /srv/http/repos/git<br />
* Development sandbox /srv/http/sandbox<br />
* Staging ground for final pre-live q/a testing /srv/http/stage<br />
<br />
<br />
Workflow:<br />
* clone your repo into your sandbox area and work on your files<br />
* commit changes from the sandbox when each implementation or task is complete, then push them to the central sandbox repo (/srv/http/repos/git/website.com), after which you can view them on dev (http://dev.website.com)<br />
* At some point, when ready to stage, a "push" from the central sandbox repo to a separate central stage repo (/srv/http/repos/git/website.com.stage) will be made<br />
* A hook in the central stage repo auto-deploys to stage (/srv/http/stage/website) when the push is run. (This deploys only the web files, and NOT the hidden .git repo folder, which would otherwise make your site vulnerable; hence this wiki. Otherwise it would be as easy as going to stage and cloning, for example)<br />
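The whole workflow can be sketched end to end. The following is a minimal sketch using throwaway paths under mktemp instead of /srv/http (the site name, file contents, and commit identity are placeholders):<br />

```shell
set -e
BASE=$(mktemp -d)
mkdir -p "$BASE/repos/git" "$BASE/sandbox/website.com" "$BASE/stage/website.com"

# sandbox worktree whose repository lives centrally (--separate-git-dir)
cd "$BASE/sandbox/website.com"
echo '<?php // home ?>' > index.php
git init -q --separate-git-dir="$BASE/repos/git/website.com"
git add -A
git -c user.email=dev@example.com -c user.name=dev commit -qm 'initial import'

# bare stage repository with a post-receive hook that deploys web files only
git init -q --bare "$BASE/repos/git/website.com.stage"
git --git-dir="$BASE/repos/git/website.com.stage" symbolic-ref HEAD refs/heads/master
cat > "$BASE/repos/git/website.com.stage/hooks/post-receive" <<EOF
#!/bin/sh
GIT_WORK_TREE=$BASE/stage/website.com git checkout -f
EOF
chmod +x "$BASE/repos/git/website.com.stage/hooks/post-receive"

# wire up the stage remote and push; the hook populates the stage dir
git remote add stage.website.com "$BASE/repos/git/website.com.stage"
git push -q stage.website.com +HEAD:refs/heads/master

ls -A "$BASE/stage/website.com"   # index.php only, no .git exposed
```

With the real /srv/http paths this is exactly the sequence the sections below walk through step by step.<br />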
<br />
== Apache adjustments ==<br />
Set up Apache for the dev and stage environments.<br />
<br />
=== Edit vhost ===<br />
<br />
add or edit the /etc/httpd/conf/extra/httpd-vhosts.conf file to something similar<br />
<br />
# default route<br />
<VirtualHost *:80><br />
ServerName <hostname><br />
ServerAlias <hostname><br />
VirtualDocumentRoot "/srv/http"<br />
</VirtualHost><br />
<br />
#sandbox (dev) root route<br />
<Virtualhost *:80><br />
ServerName dev<br />
DocumentRoot "/srv/http/sandbox"<br />
</Virtualhost><br />
<br />
#sandbox (dev) subdomains route<br />
# http://httpd.apache.org/docs/2.0/mod/mod_vhost_alias.html#interpol<br />
<VirtualHost *:80><br />
ServerName dev.sub.com<br />
ServerAlias dev.*<br />
VirtualDocumentRoot "/srv/http/sandbox/%2+"<br />
</VirtualHost><br />
<br />
#stage root route<br />
<Virtualhost *:80><br />
ServerName stage<br />
DocumentRoot "/srv/http/stage"<br />
</VirtualHost><br />
<br />
#stage subdomains route<br />
# http://httpd.apache.org/docs/2.0/mod/mod_vhost_alias.html#interpol<br />
<VirtualHost *:80><br />
ServerName stage.sub.com<br />
ServerAlias stage.*<br />
VirtualDocumentRoot "/srv/http/stage/%2+"<br />
</VirtualHost><br />
<br />
restart apache <br />
systemctl restart httpd<br />
#or if you're still using initscripts<br />
rc.d restart httpd<br />
<br />
=== Create dirs === <br />
<br />
touch /srv/http/index.php<br />
mkdir /srv/http/sandbox<br />
touch /srv/http/sandbox/index.php<br />
mkdir /srv/http/stage<br />
touch /srv/http/stage/index.php<br />
<br />
Note: in order for this to work, you will need to add your websites with the same dir name as they will have when deployed, dots included. E.g. if the website is google.com the dirname must be google.com, so your dev site ends up at /srv/http/sandbox/google.com and the vhost can map http://dev.google.com to it.<br />
<br />
=== Edit hosts files ===<br />
<br />
Assuming your development machine is not the same as the server, you need to add an entry for each site to your hosts file when you create the site.<br />
<br />
I will use server IP 192.168.1.99, hostname 'myserver', and google.com as the example website URL; use your own hostnames and website URLs in their place.<br />
<br />
* linux <br />
add the following lines to /etc/hosts<br />
192.168.1.99 dev myserver<br />
192.168.1.99 stage myserver <br />
192.168.1.99 dev.google.com myserver<br />
192.168.1.99 stage.google.com myserver<br />
<br />
* windows<br />
<br />
navigate to the following file logged in as admin, <br />
%WINDIR%/system32/drivers/etc/hosts <br />
<br />
or run cmd as administrator and paste the following line there<br />
notepad %WINDIR%/system32/drivers/etc/hosts <br />
<br />
add your sites to the hosts file<br />
192.168.1.99 dev<br />
192.168.1.99 stage<br />
192.168.1.99 dev.google.com<br />
192.168.1.99 stage.google.com<br />
#dev.anothersite.com<br />
#stage.anothersite.com<br />
#etc...<br />
<br />
* mac, you get the idea...<br />
<br />
== Git usage ==<br />
Install git, and make the following changes<br />
<br />
=== Set up Git ===<br />
<br />
* Create a central location for the git repos. (Note: I'll use a group called 'webteam' as an example. This group separates what apache needs permission to from what a web developer needs permission to; each web developer is a member of this group.)<br />
$ su<br />
$ mkdir -p /srv/http/repos/git<br />
$ chown http:webteam /srv/http/* -R<br />
$ chmod 775 /srv/http/* -R<br />
<br />
*sandbox push global configs<br />
You will probably get an error stating that you can't push to the branch you cloned from; more research needs to be done on this, but it seems pretty straightforward. Set the following config variable to squelch it.<br />
$ git config --global receive.denyCurrentBranch warn #you can set it to false instead of warn once you're sure it doesn't cause any problems<br />
<br />
* stage push global configs<br />
Git version 2.0+ will start using a new push.default standard called "simple". "simple" means the push is refused if the upstream branch name differs from the local one, which in this case it does (website.com vs. stage.website.com). If your version is older than 2.0 (which, to date, isn't out yet), we can make it forward compatible. Set the following config variable to squelch this.<br />
$ git config --global push.default matching<br />
<br />
=== Add or enroll a new site ===<br />
Once your Git environment is set up, all you have to do is start from this point each time you want to create a new website.<br />
<br />
* add a website (or see below if you are enrolling an existing site)<br />
$ mkdir -p /srv/http/sandbox/<website.com> # -p in case you don't have a sandbox dir yet<br />
* alternatively, enroll an existing website<br />
To enroll an existing website, move it to the /srv/http/sandbox directory, renaming it to match the address it will be accessible at when live. (Naming is important here: e.g. if your folder is named website-com, mv it to website.com. The .com portion is not mandatory if there is no domain name intended for it; just know that you will access it in your browser exactly as you name it, unless you muck around with the vhosts file to customize your own mapping strategy.)<br />
$ mv /home/<username>/public_html/website-com /srv/http/sandbox/website.com<br />
<br />
*continue here once you relocated your website or added a new website to the sandbox<br />
$ cd /srv/http/sandbox/website.com<br />
$ git init --separate-git-dir=/srv/http/repos/git/website.com<br />
$ git add -A<br />
$ git commit -m 'comment here'<br />
<br />
If you're going to actively develop this site, then continue below to stageify it. If you're just archiving an old site for later development you can stop here, delete the site folder (e.g. /srv/http/sandbox/website.com), then "recreate the worktree", as detailed below, from the repo and stageify at a later time.<br />
<br />
=== Stageify ===<br />
* create an empty stage repo for this site (start here if you already have an existing site and repo; substitute your site location for sandbox)<br />
$ mkdir -p /srv/http/repos/git/website.com.stage<br />
$ cd /srv/http/repos/git/website.com.stage<br />
$ git init --bare<br />
<br />
* make the site's stage dir and create the hook (while still in the stage repo)<br />
$ mkdir /srv/http/stage/website.com<br />
$ cat > hooks/post-receive<br />
#!/bin/sh<br />
GIT_WORK_TREE=/srv/http/stage/website.com git checkout -f<br />
<br />
#press ctrl+d to end input and save the file<br />
<br />
$ chmod +x hooks/post-receive<br />
<br />
* Define the stage mirror and create a master branch tree (you MUST run both of the following commands from the sandbox repo; neither the sandbox checkout nor the stage repo will suffice)<br />
$ cd /srv/http/repos/git/website.com #note, it's important that you're in this exact path, as opposed to website.com.stage (if your sandbox .git dir is somewhere else then cd INTO it)<br />
$ git remote add stage.website.com /srv/http/repos/git/website.com.stage #(note, stage.website.com is just a name; it's the remote name we will call when we stage)<br />
$ git push stage.website.com +master:refs/heads/master<br />
<br />
* from here forward, the push to stage can be run simply from within your root code base (i.e. you don't need to be in the .git dir anymore)<br />
$ git push stage.website.com<br />
See workflow below for more examples on this.<br />
<br />
* In case of upstream branch error when you run the push, you can do the following.<br />
Since we have globally configured push.default to matching, an upstream branch error may occur. To squelch this error we need to indicate our upstream branch. Since we named our remote "stage.website.com", we issue the following command from our sandbox repo (/srv/http/repos/git/website.com)<br />
$ cd /srv/http/repos/git/website.com<br />
$ git push --set-upstream stage.website.com master<br />
<br />
== General work flow ==<br />
<br />
Once you have your environment all setup, you can follow a general work flow pattern in order to take advantage of the full functionality.<br />
<br />
=== New development process ===<br />
<br />
* If you plan to just archive your site, you may want to delete it from the sandbox once it's archived to the git repository using the steps above, since it's nicely compressed and packed away in your repo dir. If this is the case, go ahead and delete your sandbox web folder at this point so we can begin the flow from the furthest possible point. (Note: only do this step if you have your website committed to the git repo as outlined above, i.e. your .git folder is NOT located inside your code base dir), '''otherwise continue "Work on files" below'''.<br />
$ cd /srv/http/sandbox<br />
$ rm website.com -rf<br />
<br />
You might want to delete your worktree after putting a project on the back burner, when you are not planning to work on it for long periods of time. Or you may have a crowded project directory and want your sanity back. If you deleted your worktree previously, or are restoring an old site from a repo for whatever reason, and your project is stored only inside the compressed repo, then you will need to re-create the worktree from the repo before working on your files. The following methods will guide you through doing this.<br />
<br />
==== recreate worktree ====<br />
<br />
Note, you won't need to do these steps if you still have your worktree in place and never deleted it.<br />
<br />
===== method 1 ===== <br />
(preferred method)<br />
<br />
One way to get the worktree back is to extract the worktree out of the repo, then attach it to the repo's branch<br />
* make the new site folder inside your sandbox<br />
$ mkdir /srv/http/sandbox/website.com<br />
<br />
* switch to the new worktree directory and ''init'' for the first time<br />
$ cd /srv/http/sandbox/website.com<br />
$ git --git-dir=/srv/http/repos/git/website.com --work-tree=. init <br />
$ echo "gitdir: /srv/http/repos/git/website.com" > .git<br />
alternatively you can run those last two commands on one line if it's easier for you<br />
$ git --git-dir=/srv/http/repos/git/website.com --work-tree=. init && echo "gitdir: /srv/http/repos/git/website.com" > .git<br />
<br />
* check the branch you're on before pulling down the files<br />
$ git branch<br />
<br />
* switch to the repo and extract the files to the new site directory you just created. Make sure you're checking out the branch you intend; normally that will be "master" unless you have branched the repo. Replace the word "master" in the following command if you are working on a different branch.<br />
$ cd /srv/http/repos/git/website.com<br />
$ git archive master | tar -x -C /srv/http/sandbox/website.com<br />
note, if you get an error in the above step<br />
could not switch to '/some/dir': No such file or directory<br />
then you may be trying to get the files back to a directory other than the one used when the repo was made; ''otherwise continue below''.<br />
<br />
If you did receive this error, you need to edit the config file in the repo to point to the new directory. To do this, open the repo's ./config file in your favorite editor and edit the worktree path to point to the new site directory that you created.<br />
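Method 1 can be exercised end to end with throwaway paths. A condensed sketch (all paths, file contents, and the dummy commit identity are placeholders; since the repo here already records its worktree path, the "gitdir:" pointer file alone reconnects it, so the extra ''init'' step is skipped):<br />

```shell
set -e
BASE=$(mktemp -d)

# build a site whose repo is detached from the worktree, then delete the worktree
mkdir -p "$BASE/repos/git" "$BASE/sandbox/website.com"
cd "$BASE/sandbox/website.com"
echo hello > index.html
git init -q --separate-git-dir="$BASE/repos/git/website.com"
git add -A
git -c user.email=dev@example.com -c user.name=dev commit -qm 'import'
branch=$(git symbolic-ref --short HEAD)   # master, or main on newer git
cd "$BASE"
rm -rf "$BASE/sandbox/website.com"        # the repo survives under repos/git

# recreate the worktree and extract the files back out of the repo
mkdir -p "$BASE/sandbox/website.com"
cd "$BASE/sandbox/website.com"
echo "gitdir: $BASE/repos/git/website.com" > .git
git --git-dir="$BASE/repos/git/website.com" archive "$branch" \
    | tar -x -C "$BASE/sandbox/website.com"
git status --short   # empty: the worktree matches the repo again
```

Because the worktree is recreated at the same path the repo was made with, no edit of the repo's ./config is needed here.<br />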
<br />
===== method 2 (alt method) ===== <br />
Another way to get the worktree back is to clone it out from the main repo, delete the .git dir, then create the .git pointer file (the "gitdir:" file, not actually a symlink). This way just seems a bit messy because one needs to delete the .git dir after unnecessarily creating it, but as long as you replace it with the proper pointer it works fine.<br />
<br />
* clone the site to sandbox<br />
$ cd /srv/http/sandbox<br />
$ git clone /srv/http/repos/git/website.com<br />
$ cd website.com<br />
* delete the hidden .git folder that was created inside this worktree<br />
$ rm -rf .git<br />
* recreate the .git pointer file<br />
$ echo "gitdir: /srv/http/repos/git/website.com" > .git<br />
* checkout master to get things back on track (is this step even necessary?)<br />
$ git checkout master<br />
<br />
==== Work on files ====<br />
<br />
* Work on files<br />
$ cd website.com <br />
$ git status<br />
edit your files.....<br />
<br />
==== commit the changes ====<br />
<br />
$ git status #notice files are not staged for commit<br />
$ git add -A # or just individual files (e.g. git add file1.php file2.php) etc..<br />
$ git status # make sure the files you want to commit are listed to be committed<br />
$ git commit -m 'this is what i did to these files'<br />
$ git status # should come back clean<br />
$ git log # you can see your commit comments here<br />
<br />
visit http://dev.website.com to test the new functionality that was just committed<br />
<br />
If all looks well, you need to push your changes to the sandbox master branch (the repo) that you cloned from. This accomplishes two things:<br />
<br />
* liberates your sandbox clone to be deleted at will<br />
* updates branch master so the files are waiting to be pushed(deployed) to stage<br />
$ git push<br />
<br />
=== Deploy to stage ===<br />
Once your separate tasks have been individually committed, and you have thoroughly tested them in the sandbox environment, you can now deploy to the stage environment for final q/a before going live.<br />
<br />
*deploy to stage<br />
$ cd /srv/http/repos/git/website.com<br />
$ git push stage.website.com<br />
<br />
now visit http://stage.website.com to do your final q/a<br />
<br />
If any mistakes are spotted on stage, i.e. if everything is not working as expected, then your stage environment just paid for itself. Follow the steps below for damage control, to revert the changes on stage.<br />
<br />
=== Damage control ===<br />
<br />
If mistakes are spotted on stage, we need to revert our changes<br />
<br />
* @todo: commands for reverting<br />
* @todo: workflow<br />
<br />
<br />
now that stage has been reset to the last known good state, go back to the sandbox and edit<br />
<br />
== Why separate repository from web folder? ==<br />
<br />
For me, it just makes sense. The beauty of it is that you can bulk delete all or any of your web files in the sandbox area when they start to get in the way. When you're ready to work on that site again, you can just "recreate the worktree".<br />
<br />
It may also help to have the repos outside of the web folders if you run any type of incremental backup, e.g. rsnapshot or rsync. You can choose to back up only the repos, which are compressed, or to back up everything except sandbox, for example. But the main reason is that you can delete your sandbox websites without losing the repo.<br />
<br />
== Credits ==<br />
Thanks to<br />
* Abhijit Menon-Sen at http://toroid.org/ams/git-website-howto which is where all my google searches finally landed me and paid off for what I was trying to accomplish.<br />
<br />
* niks and Charles Bailey at http://stackoverflow.com/questions/505467/can-i-store-the-git-folder-outside-the-files-i-want-tracked for getting me over the hump of getting the worktree properly out of a repo archive<br />
<br />
= ZFS-FUSE Implementation =<br />
<br />
I'm trying to utilize the ZFS filesystem to create a flexible array of different-sized drives that will serve as a data-backup volume shared across a samba network. ZFS can be taken advantage of here because of its flexibility with storage pools. ZFS also has some of the best features catering to data integrity. One drawback is that some of this is done at the expense of file transfer rates. There are some things you can do to offset this, however, by having small backup drives, even a usb stick or just about any other type of media, serve as a cache drive. Speed will not be a consideration on this array since its main role is data integrity and safety.<br />
<br />
The test array initially used here was an array of 3 drives: one 2 TB and two 500 GB. LVM was also explored to find ways to span drives into one large volume.<br />
<br />
== Installation ==<br />
<br />
1) To get to step one on the ZFS-FUSE page https://wiki.archlinux.org/index.php/ZFS_on_FUSE I had to do a few things, namely install yaourt, which was not necessarily straightforward. I will be vague in these instructions since I have already completed these steps, so they are coming from memory.<br />
<br />
I went to the AUR and copied and pasted the PKGBUILD contents into a new PKGBUILD file in my packages directory /home/wolfdogg/packages/packages/yaourt/PKGBUILD,<br />
then ran makepkg -s from that folder, after battling a succession of errors (invalid signatures in pacman, needed to re-init pacman-key, then had to create a new group called sudo, add my user to it, then edit the /etc/sudoers file accordingly, which seemed like the best way to get my user into sudoers and eliminate the risk of accidentally doing something as root during the makepkg process),<br />
and finally ran pacman -U yaourt.<br />
<br />
2) Then I followed the steps on the wiki at https://wiki.archlinux.org/index.php/ZFS_on_FUSE . For me, the NFS portion got a bit confusing: https://wiki.archlinux.org/index.php/ZFS_on_FUSE#NFS_shares . At the <code>zfs set sharenfs=</code> part of that section I got a bit side-tracked and will come back to it at a later time.<br />
<br />
I found the manual for Solaris ZFS here [http://docs.huihoo.com/opensolaris/solaris-zfs-administration-guide/html/] and it looks like it will provide all the info needed, especially this section of it [http://docs.huihoo.com/opensolaris/solaris-zfs-administration-guide/html/ch04s03.html]<br />
<br />
Now would be a good time to read the manuals<br />
#man zpool<br />
#man zfs<br />
<br />
== Inventory drives ==<br />
<br />
*Hook up any drives that you want to add to your new filesystem. <br />
<br />
*Take an inventory of your current drives and partitions. Note any existing arrays and partitions. See md0 (mdadm array), zfs-kstat/pool (a zfs pool), and /pool/backup (a dataset) in the examples below.<br />
<br />
*View the drive information<br />
# blkid -o list -c /dev/null<br />
<br />
*List the partitions, and inspect your mounts<br />
# lsblk -f<br />
<br />
sdb 8:16 0 1.8T 0 disk<br />
└─md0 9:0 0 2.7T 0 linear<br />
sdc 8:32 0 465.8G 0 disk<br />
└─md0 9:0 0 2.7T 0 linear<br />
<br />
*Use one of the following for each disk if you want to view partition tables and/or reformat your drives:<br />
*If you have an MBR partition table<br />
#fdisk -l<br />
*If you have a GPT partition table<br />
#gdisk -l /dev/sdb<br />
<br />
*View the mounts<br />
# findmnt<br />
<br />
TARGET SOURCE FSTYPE OPTIONS<br />
/zfs-kstat kstat fuse rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other<br />
└─/pool pool fuse.zfs rw,relatime,user_id=0,group_id=0,default_permissions,allow_other<br />
└─/pool/backup pool/backup fuse.zfs rw,relatime,user_id=0,group_id=0,default_permissions,allow_other<br />
<br />
== Prepare Drives ==<br />
If you already know what you're doing, you can use cgdisk; here is a nice cgdisk reference [http://www.rodsbooks.com/gdisk/cgdisk-walkthrough.html]. Otherwise, follow the gdisk example walkthrough below.<br />
# gdisk<br />
Type device or filename<br />
We will use /dev/sdb for this example<br />
# /dev/sdb<br />
Command:<br />
o will create a new partition table<br />
# o<br />
Proceed? <br />
This option will delete all partitions and create a new protective MBR<br />
# y<br />
Command:<br />
n will create a new partition<br />
# n<br />
Partition number<br />
We should use 1 if its the first<br />
# 1<br />
Press enter for default first sector<br />
Press enter for default last sector<br />
Hexcode:<br />
L shows all codes; choose a filesystem hex code and type it in<br />
# L<br />
We will go with bf00 (Solaris root) for this example<br />
# bf00<br />
Now let's write the partition table to disk<br />
Command:<br />
# w<br />
Final checks complete<br />
Do you want to proceed?<br />
# y<br />
<br />
Now your disk has a new clean partition table<br />
<br />
Here are some steps to follow; so far it's all I have.<br />
<br />
=== Create the Zpool ===<br />
<br />
Create the pool (notice I'm not using raidz here; you may want to if your drives are all the same size)<br />
# zpool create <pool_name> /dev/sdb /dev/sdc /dev/sdd /dev/sde<br />
*Also note, you should create two pools if you have 4 or more drives so that the redundancy features zfs is so well known for will work properly.<br />
<br />
"pool" is the name of the current test pool, and the mount point.<br />
<br />
'''RAIDZ1, RAIDZ2'''<br />
<br />
*To create a RAIDZ you need at least 3 drives of the same size<br />
*'raidz' is an alias for 'raidz1'; use 'raidz2' if you have an extra drive for an extra parity stripe for extra redundancy<br />
# zpool create pool raidz /dev/sdb /dev/sdc /dev/sdd<br />
<br />
'''Alternatively, create a raid span.''' Note, no raid type has been specified (disk, file, raidz, etc...)<br />
# zpool create pool /dev/sdb /dev/sdc /dev/sdd<br />
<br />
(see below to create an mdadm linear span instead (jbod)) <br />
<br />
If you do some testing and get stuck with a drive reporting unavailable after you hook it back up, sometimes a reboot is needed (which I probably just don't understand well enough yet), but some commands that have been helpful:<br />
<br />
Get the list and size of the pool<br />
# zpool list<br />
<br />
To get the status<br />
#zpool status<br />
<br />
=== Create the ZFS Filesystem Hierarchy (dataset) ===<br />
<br />
Lets create a dataset <br />
# zfs create pool/backup<br />
# zfs set mountpoint=/backup pool/backup<br />
<br />
you can create a child file system inside the parent; the child automatically inherits properties from the parent<br />
# zfs create pool/backup/<computer_name><br />
<br />
Per the manual, ZFS automatically mounts the file system when the zfs mount -a command is invoked (without editing /etc/vfstab)<br />
<br />
=== List and inspect ===<br />
<br />
List and inspect your new zfs file system<br />
# zfs list<br />
NAME USED AVAIL REFER MOUNTPOINT<br />
pool 106K 2.68T 21K /pool<br />
pool/backup 21K 2.68T 21K /pool/backup<br />
<br />
Now let's look at all the drives to see what this looks like. View mount status and disk size<br />
# mount -l<br />
<br />
# df<br />
Filesystem 1K-blocks Used Available Use% Mounted on<br />
rootfs 15087420 2356784 11962328 17% /<br />
dev 1991168 0 1991168 0% /dev<br />
run 2027072 316 2026756 1% /run<br />
/dev/sda3 15087420 2356784 11962328 17% /<br />
shm 2027072 0 2027072 0% /dev/shm<br />
tmpfs 2027072 28 2027044 1% /tmp<br />
/dev/sda4 95953460 84172112 6904016 93% /home<br />
/dev/sda1 99550 19445 74886 21% /boot<br />
pool 2873622443 23 2873622420 1% /pool<br />
pool/backup 2873622441 21 2873622420 1% /pool/backup<br />
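With many mounts it can be handy to filter df down to just the pool entries with awk; this sketch runs the filter against a captured excerpt of the output above (your numbers will differ, and on a live box you would pipe df itself instead of a saved sample):<br />

```shell
#!/bin/sh
# Pick the zfs pool lines out of df output; df_sample is a captured
# excerpt of the listing shown above, used here in place of live output
df_sample='Filesystem 1K-blocks Used Available Use% Mounted on
pool 2873622443 23 2873622420 1% /pool
pool/backup 2873622441 21 2873622420 1% /pool/backup'

# On a real system: df | awk '$1 ~ /^pool/ { print $1 " mounted on " $6 }'
echo "$df_sample" | awk '$1 ~ /^pool/ { print $1 " mounted on " $6 }'
```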
<br />
To learn more about your configuration options run the following command<br />
# zfs get all | less<br />
<br />
=== Destroy Array and Pools ===<br />
<br />
'''WARNING - Backup all your data before beginning'''<br />
<br />
If you need to break down your zpool or zfs datasets, follow the steps below. <br />
<br />
Deactivate the array using mdadm RAID manager (unmount)<br />
# mdadm -S /dev/md0<br />
<br />
I chose to delete the corresponding ARRAY line in /etc/mdadm.conf as well; I'm not sure it was needed, but it seemed that there were still bits and pieces lying around that needed removing<br />
# vim /etc/mdadm.conf<br />
<br />
To destroy a dataset in the pool <br />
# zfs destroy <filesystemvolume><br />
<br />
Or you can destroy the entire pool<br />
# zpool destroy <pool><br />
<br />
If you can't totally destroy the pool, or are trying to create a new pool with the same name, it's possible to trace clues about which process is using it with <br />
# fuser -a /pool <br />
<br />
Then run top to find that process PID<br />
# top<br />
<br />
Or run lsof<br />
# lsof | grep zfs-fuse | less<br />
Go from there....<br />
<br />
Now move on to [[#Linear_RAID_.28jbod.29_filesystem_using_mdadm_.2F_lvm]] or [[#RAIDZ_Filesystem_Configuration]]<br />
<br />
== Linear RAID (jbod) filesystem using mdadm / lvm ==<br />
<br />
I decided to explore other options until I can figure out whether the ZFS span is set up properly. <br />
<br />
=== Create the array ===<br />
<br />
Destroy the existing zfs pool and break down the array where applicable. Use this as a guide: <br />
[[#Destroy_Array_and_Pools]]<br />
<br />
Use mdadm to create the span<br />
# mdadm --create /dev/md0 --level=linear --raid-devices=3 /dev/sd[bcd]<br />
<br />
Get the status detail using mdadm <br />
<br />
# mdadm --misc --detail /dev/md0<br />
/dev/md0:<br />
Version : 1.2<br />
Creation Time : Mon Jun 25 13:52:31 2012<br />
Raid Level : linear<br />
Array Size : 2930286671 (2794.54 GiB 3000.61 GB)<br />
Raid Devices : 3<br />
Total Devices : 3<br />
Persistence : Superblock is persistent<br />
Update Time : Mon Jun 25 13:52:31 2012<br />
State : clean<br />
Active Devices : 3<br />
Working Devices : 3<br />
Failed Devices : 0<br />
Spare Devices : 0<br />
Rounding : 0K<br />
Name : falcon:0 (local to host falcon)<br />
UUID : 141f34d0:2b2c0973:4a6f070b:17b772ec<br />
Events : 0<br />
Number Major Minor RaidDevice State<br />
0 8 16 0 active sync /dev/sdb<br />
1 8 32 1 active sync /dev/sdc<br />
2 8 48 2 active sync /dev/sdd<br />
<br />
=== Create volume and groups ===<br />
<br />
Create the physical volume<br />
# pvcreate /dev/md0<br />
<br />
Display the volume<br />
# pvdisplay<br />
"/dev/md0" is a new physical volume of "2.73 TiB"<br />
--- NEW Physical volume ---<br />
PV Name /dev/md0<br />
VG Name<br />
PV Size 2.73 TiB<br />
Allocatable NO<br />
PE Size 0<br />
Total PE 0<br />
Free PE 0<br />
Allocated PE 0<br />
PV UUID 1Hr3ay-L0mZ-33GD-4ZeM-EOtW-lkAz-M4REMJ<br />
<br />
Create the volume group<br />
# vgcreate VolGroupArray /dev/md0<br />
Volume group "VolGroupArray" successfully created<br />
<br />
Display the volume groups<br />
# vgdisplay<br />
--- Volume group ---<br />
VG Name VolGroupArray<br />
System ID<br />
Format lvm2<br />
Metadata Areas 1<br />
Metadata Sequence No 1<br />
VG Access read/write<br />
VG Status resizable<br />
MAX LV 0<br />
Cur LV 0<br />
Open LV 0<br />
Max PV 0<br />
Cur PV 1<br />
Act PV 1<br />
VG Size 2.73 TiB<br />
PE Size 4.00 MiB<br />
Total PE 715401<br />
Alloc PE / Size 0 / 0<br />
Free PE / Size 715401 / 2.73 TiB<br />
VG UUID 2OAWpT-fO50-A7cW-jUjd-meQh-sWQa-7b55ZO<br />
<br />
Create the logical volume. I used 2.725T because 2.73T failed (the requested size has to fit inside the volume group's free extents); alternatively, lvcreate -l 100%FREE VolGroupArray -n backup allocates everything that is left without guessing<br />
# lvcreate VolGroupArray -L 2.725T -n backup<br />
<br />
Display volume<br />
# lvdisplay<br />
--- Logical volume ---<br />
LV Path /dev/VolGroupArray/backup<br />
LV Name backup<br />
VG Name VolGroupArray<br />
LV UUID Evy09z-i3TS-PgaE-SrpD-C2Lq-snb2-XhvaVW<br />
LV Write Access read/write<br />
LV Creation host, time falcon, 2012-06-25 14:33:00 -0700<br />
LV Status available<br />
# open 0<br />
LV Size 2.73 TiB<br />
Current LE 714343<br />
Segments 1<br />
Allocation inherit<br />
Read ahead sectors auto<br />
- currently set to 256<br />
Block device 253:0<br />
<br />
Check the status <br />
# cat /proc/mdstat<br />
Personalities : [linear]<br />
md0 : active linear sdd[2] sdc[1] sdb[0]<br />
2930286671 blocks super 1.2 0k rounding<br />
unused devices: <none><br />
<br />
''@TODO Need advice here - this portion needs testing; I got stuck here the last time I tried it.''<br />
<br />
== Recreating a ZFS storage pool ==<br />
<br />
From my understanding, one can add a drive to a vdev (Virtual Device, or array set) without losing data if it's a linear span, but you can't remove a drive from the vdev without first copying the files off, deleting the pool, and then recreating it. This portion will walk you through completely tearing down the array, freeing up the drives, and recreating either an mdadm jbod linear span or a RAIDZ. <br />
<br />
'''WARNING - Backup all your data before beginning'''<br />
<br />
=== List the pools ===<br />
<br />
Get a list of the current pools<br />
# zpool list<br />
<br />
Check the status <br />
# zpool status<br />
<br />
List the datasets<br />
# zfs list<br />
<br />
== Share mount to network using samba ==<br />
<br />
Configure samba to give user access to the share (good for network backups from any operating system, including windows backup) <br />
*add the following entry to /etc/samba/smb.conf and restart the samba daemon<br />
[backup]<br />
comment = backup drive<br />
path = /pool/backup<br />
valid users = user1,user2<br />
read only = No<br />
create mask = 0765<br />
wide links = Yes<br />
<br />
# rc.d restart samba<br />
<br />
Modify the permissions on /pool/backup so it is accessible over the network; I decided to add 'users' group accessibility<br />
# chown root:users backup<br />
# chmod g+w backup<br />
<br />
Alternatively<br />
# chown root:root backup/<br />
# chmod 755 backup/<br />
# chown root:users backup/<dataset1><br />
# chmod 775 backup/<dataset1><br />
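The effect of the group-write change can be sanity-checked on a scratch directory before touching the real share (the chown half is skipped here since it needs root; the directory name is just a stand-in):<br />

```shell
#!/bin/sh
# Dry-run of the permission scheme above on a throwaway directory
set -e
TMP=$(mktemp -d)
mkdir "$TMP/backup"

chmod 755 "$TMP/backup"
stat -c '%a' "$TMP/backup"    # 755 - owner rwx, group/other read+enter only

chmod g+w "$TMP/backup"
stat -c '%a' "$TMP/backup"    # 775 - the group can now write over the share
```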
<br />
== ZFS RAID Maintenance ==<br />
<br />
@TODO - this section is not finished<br />
<br />
==== Maintenance ====<br />
<br />
*To place a disk back online (see manual for this)<br />
#zpool online<br />
<br />
*To replace a disk (see manual for this)<br />
#zpool replace<br />
<br />
* If your pool goes down, i.e. one of your drives goes offline and there are not enough drives left to complete replication, you can try the following<br />
- Check whether the drive is initiated; you should see it in the list that's returned from running the following command<br />
# blkid<br />
If it's not in the list:<br />
- Check all cable connections; the drive may not be mounted. A reboot may be needed. <br />
- Once you get the drive to appear, run the following commands.<br />
# zpool export <pool><br />
# zpool import <pool><br />
# zpool list<br />
If you get the error message <br />
"cannot import 'pool': one or more devices is currently unavailable. Destroy and re-create the pool from a backup source", try to export and then import again.<br />
@todo more information is needed before suggestions are made at this point.<br />
<br />
== Help Needed ==<br />
<br />
If anybody has any suggestions, please chime in. <br />
<br />
I haven't figured out how to address the size of the array yet. For example, when I hook up only the 2TB with the 500GB as a raidz, it lets me, and the size reports 1.36TB. When I destroyed the pool and rebuilt using all 3 drives (2TB, 500GB, 500GB) the size still reports 1.36TB.<br />
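One possible explanation (an assumption on my part, worth checking against the zpool manual): zpool list reports the raw size of the vdev, and a raidz vdev treats every member as the size of its smallest disk. Three members clamped to 500 GB (decimal) comes out to roughly the 1.36T being reported:<br />

```shell
# Raw raidz vdev size as zpool list would report it:
# member count x smallest member, here 3 drives clamped to 500 GB,
# converted from decimal bytes to binary TiB
awk 'BEGIN { printf "%.2f TiB\n", 3 * 500e9 / (1024^4) }'   # 1.36 TiB
```

Usable space after parity would be smaller still (one member's worth is lost to raidz1 parity), which may explain why the 2TB drive's extra capacity never shows up.<br />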
<br />
--[[User:Wolfdogg|Wolfdogg]] ([[User talk:Wolfdogg|talk]]) 00:18, 25 June 2012 (UTC)<br />
<br />
= volnoti on KDE using alsamixer =<br />
<br />
The only program that worked on an HP laptop to control the volume correctly on KDE.<br />
<br />
@todo this script is for alsamixer, and kde, more needs to be added to support other environments<br />
<br />
<br />
Create a script /usr/local/bin/sound.sh. This script will be called by another script placed in autostart.<br />
<br />
Insert the following contents into the script<br />
#!/bin/bash<br />
<br />
#this script is made for volnoti<br />
<br />
# Configuration<br />
STEP="2" # Anything you like.<br />
UNIT="dB" # dB, %, etc.<br />
<br />
# Set volume<br />
SETVOL="/usr/bin/amixer -qc 0 set Master"<br />
SETHEADPHONE="/usr/bin/amixer -qc 0 set Headphone"<br />
<br />
case "$1" in<br />
"up")<br />
$SETVOL $STEP$UNIT+<br />
;;<br />
"down")<br />
$SETVOL $STEP$UNIT-<br />
;;<br />
"mute")<br />
$SETVOL toggle<br />
;;<br />
esac<br />
<br />
# Get current volume and state<br />
VOLUME=$(amixer get Master | grep 'Mono:' | cut -d ' ' -f 6 | sed -e 's/[^0-9]//g')<br />
STATE=$(amixer get Master | grep 'Mono:' | grep -o "\[off\]")<br />
<br />
# Show volume with volnoti<br />
if [[ -n $STATE ]]; then<br />
volnoti-show -m<br />
else<br />
volnoti-show $VOLUME<br />
# If headphones are in use, mute is treated a bit differently when muted; make sure Headphone follows the master mute.<br />
amixer -c 0 set Headphone unmute<br />
amixer -c 0 set Speaker unmute<br />
amixer -qc 0 set Speaker 100%<br />
fi<br />
<br />
exit 0<br />
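The VOLUME pipeline in the script is fragile because it depends on amixer's exact spacing; here it is exercised against a captured sample line, assuming the Mono: layout shown (a card that prints Front Left: instead would need a different grep):<br />

```shell
#!/bin/sh
# Same pipeline the script uses for VOLUME, fed a sample line captured
# from `amixer get Master` (the two leading spaces matter: they shift
# the percentage into field 6 for cut)
sample='  Mono: Playback 42 [65%] [-22.50dB] [on]'
echo "$sample" | grep 'Mono:' | cut -d ' ' -f 6 | sed -e 's/[^0-9]//g'   # 65
```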
<br />
Note: the wiki mishandles the double brackets in the test line above (it renders them as a link), so if that line looks broken, it should read:<br />
<br />
if [[ -n $STATE ]]; then <br />
<br />
<br />
Save this script, then set permissions<br />
<br />
# chown root:users /usr/local/bin/sound.sh<br />
# chmod 755 /usr/local/bin/sound.sh<br />
<br />
== xbindkeys ==<br />
<br />
Install xbindkeys so that you can control the volume with your keyboard<br />
# pacman -S xbindkeys<br />
<br />
Logged in as user, create an xbindkeys config file ~/.xbindkeysrc with the following information. This example binds the volume controls to the F7 (mute), F8 (volume down), and F9 (volume up) keys; if your keycodes differ, run xbindkeys -k and press a key to discover its m:/c: codes<br />
<br />
# increase volume<br />
"sh /usr/local/bin/sound.sh up"<br />
m:0x0 + c:75<br />
F9<br />
<br />
# Decrease volume<br />
"sh /usr/local/bin/sound.sh down"<br />
m:0x0 + c:74<br />
F8<br />
<br />
# Toggle mute<br />
"sh /usr/local/bin/sound.sh mute"<br />
m:0x0 + c:73<br />
F7<br />
<br />
#"amixer set Master playback 1+"<br />
<br />
== autostart ==<br />
<br />
Logged in as user, create the KDE autostart script in ~/.kde4/Autostart. Name it anything, e.g. start-volnoti.sh<br />
#!/bin/bash<br />
xbindkeys<br />
volnoti<br />
<br />
Save this script. Next time KDE starts it will run this script, since it's located in the autostart folder. It will call xbindkeys and volnoti, which will be waiting for keypresses to control alsamixer. <br />
<br />
Enjoy.<br />
<br />
= smbclient media stream issues using dolphin when accessing windows shares =<br />
<br />
== description ==<br />
I was having access problems when trying to access files via smbclient, or samba as a client, accessing shares on a Win7 file server, '''only''' when the user exists on the Windows machine. It turns out the problem is possibly either a complicated mounting issue or a deep bug in Dolphin. <br />
<br />
Before I delve into this, I also want to mention that the Windows machine was set to not share files as "password protected sharing" http://www.sevenforums.com/tutorials/185429-password-protected-sharing-turn-off-windows-7-a.html. If you DO share with Windows password protected sharing, then when you access the samba shares through Dolphin it will issue a popup asking you for a password; even if you put in valid credentials you will still experience the issue, so don't spend too much time debugging by turning password protected sharing on and off, as it didn't seem to help either way.<br />
<br />
== the bug reproduced ==<br />
The way I was accessing the shares was through Dolphin, by clicking on the "Network" place, then clicking into the "Samba" symlink. There I would see a list of workgroups, click into those to access my files, and so forth. When I found a directory I wanted in my "Places", I would right-click on it and add it to my places. <br />
<br />
When I would access these media files through either the symlink I created in my Places or through the existing Network symlink, once I navigated to a video file (.avi in this case), no matter whether I chose VLC or mplayer, the system would need to cache the file in full before it would play. This meant that instead of the video starting right away, I would sometimes have to wait 10-15 minutes before the video would start, or I would get an error (VLC is unable to open the MRL 'smb://<server>/UserFiles/PublicArchive/movie.avi'), depending on how the Windows password protection was set, whether I was using VLC vs. mplayer, etc. Obviously something was wrong. I'm sure these symptoms ran much deeper than just video files (I remember it happening to audio files), and it will probably happen even to text files or any file notably large enough to take more than 3 seconds to access across a network. <br />
<br />
Important note: this only happens when I'm logged into KDE as a user that already exists on the Windows system whose publicly shared files I'm trying to access. If I log in to KDE as a user that doesn't exist there, the files do not have to cache, but instead immediately start to stream as expected. <br />
<br />
Something really buggy is going on at this point. So I thought maybe I would go into KDE System Settings > Sharing and set the default username and password, but this didn't help. I tried several things, including re-installing the network driver, re-installing KDE over the top of itself, and deleting the user with userdel and cleaning out the home directory; nothing worked. <br />
<br />
So once again, as I have done in the past to try to solve this issue, I decided I would try to manually mount the thing and gain access differently than using the Network icon in Dolphin. This time I was following the wiki as usual, and I got to the part about manually mounting shares, where I stumbled on one line that mentioned the /mnt directory I have seen so many times before. This time the cards lined up right, I guess, because I decided to click through "Root" (in Places) using Dolphin, then navigated my way through /mnt/smbnet and onto my files this way, where I discovered they play no problem (no caching needed; they start streaming immediately).<br />
<br />
== the fix ==<br />
It appears that the "Network" symlink in Dolphin's Places bar, at least the way it's currently set up in my version of KDE4, is there to make life miserable. Don't use it if you don't want to fully download a file before your system has access to it, and don't access your network shares this way if they are on a samba share. Instead, navigate through Dolphin's "Root" to /mnt/smbnet/<your-workgroup>/<your-server>/<your-fileshares>, right-click on one of those folders and "Add To Places"; then you will have a proper symlink in your "Places" that accesses through /mnt.<br />
<br />
= steps taken to repair Intel IbexPeak HDMI / IDT 92HD81B1X5 internal mic =<br />
<br />
[root@osprey wolfdogg]# rmmod snd-hda-intel -f && modprobe snd-hda-intel model=F1734<br />
[root@osprey wolfdogg]# rmmod snd-hda-intel -f && modprobe snd-hda-intel<br />
[root@osprey wolfdogg]# cat /proc/asound/card0/codec#2 | grep Codec<br />
cat: /proc/asound/card0/codec#2: No such file or directory<br />
[root@osprey wolfdogg]# cat /proc/asound/card0/codec#1 | grep Codec<br />
cat: /proc/asound/card0/codec#1: No such file or directory<br />
[root@osprey wolfdogg]# cat /proc/asound/card0/codec<br />
cat: /proc/asound/card0/codec: No such file or directory<br />
[root@osprey wolfdogg]# cd /proc/asound/card<br />
card0/ cards <br />
[root@osprey wolfdogg]# cd /proc/asound/card<br />
card0/ cards <br />
[root@osprey wolfdogg]# cd /proc/asound/card0/<br />
[root@osprey card0]# ll<br />
total 0<br />
-r--r--r-- 1 root root 0 Apr 6 00:49 codec#0<br />
-r--r--r-- 1 root root 0 Apr 6 00:49 codec#3<br />
-rw-r--r-- 1 root root 0 Apr 6 00:49 eld#3.0<br />
-r--r--r-- 1 root root 0 Apr 6 00:49 id<br />
dr-xr-xr-x 3 root root 0 Apr 6 00:49 pcm0c<br />
dr-xr-xr-x 3 root root 0 Apr 6 00:49 pcm0p<br />
dr-xr-xr-x 3 root root 0 Apr 6 00:49 pcm3p<br />
[root@osprey card0]# cat /proc/asound/card0/codec#0 | grep Codec<br />
Codec: IDT 92HD81B1X5<br />
[root@osprey card0]# cat /proc/asound/card0/codec#3 | grep Codec<br />
Codec: Intel IbexPeak HDMI<br />
[root@osprey card0]# cat /proc/asound/card0/eld#3.0 | grep Codec<br />
[root@osprey card0]# cat /proc/asound/card0/id | grep Codec<br />
[root@osprey card0]# cat /proc/asound/card0/pcm0c | grep Codec<br />
cat: /proc/asound/card0/pcm0c: Is a directory<br />
[root@osprey card0]# cat /proc/asound/card0/pcm0c | grep Codec<br />
cat: /proc/asound/card0/pcm0c: Is a directory<br />
[root@osprey card0]# pacman -S gstreamer0.10-plugins<br />
:: There are 5 members in group gstreamer0.10-plugins:<br />
:: Repository extra<br />
1) gstreamer0.10-bad-plugins 2) gstreamer0.10-base-plugins 3) gstreamer0.10-ffmpeg<br />
4) gstreamer0.10-good-plugins 5) gstreamer0.10-ugly-plugins <br />
<br />
Enter a selection (default=all): <br />
warning: gstreamer0.10-bad-plugins-0.10.23-3 is up to date -- reinstalling<br />
warning: gstreamer0.10-base-plugins-0.10.36-1 is up to date -- reinstalling<br />
warning: gstreamer0.10-ffmpeg-0.10.13-1 is up to date -- reinstalling<br />
resolving dependencies...<br />
looking for inter-conflicts...<br />
<br />
Targets (10): gstreamer0.10-ugly-0.10.19-5 libavc1394-0.5.4-1 libiec61883-1.2.0-3<br />
libsidplay-1.36.59-5 wavpack-4.60.1-2 gstreamer0.10-bad-plugins-0.10.23-3<br />
gstreamer0.10-base-plugins-0.10.36-1 gstreamer0.10-ffmpeg-0.10.13-1<br />
gstreamer0.10-good-plugins-0.10.31-1 gstreamer0.10-ugly-plugins-0.10.19-5<br />
<br />
Total Download Size: 1.00 MiB<br />
Total Installed Size: 13.08 MiB<br />
Net Upgrade Size: 3.63 MiB<br />
<br />
Proceed with installation? [Y/n] <br />
:: Retrieving packages from extra...<br />
gstreamer0.10-base-plugi... 165.3 KiB 963K/s 00:00 [#############################] 100%<br />
libavc1394-0.5.4-1-x86_64 32.0 KiB 759K/s 00:00 [#############################] 100%<br />
libiec61883-1.2.0-3-x86_64 37.3 KiB 829K/s 00:00 [#############################] 100%<br />
wavpack-4.60.1-2-x86_64 113.7 KiB 921K/s 00:00 [#############################] 100%<br />
gstreamer0.10-good-plugi... 327.3 KiB 1124K/s 00:00 [#############################] 100%<br />
gstreamer0.10-ugly-0.10.... 160.4 KiB 908K/s 00:00 [#############################] 100%<br />
libsidplay-1.36.59-5-x86_64 107.8 KiB 771K/s 00:00 [#############################] 100%<br />
gstreamer0.10-ugly-plugi... 84.4 KiB 727K/s 00:00 [#############################] 100%<br />
(10/10) checking package integrity [#############################] 100%<br />
(10/10) loading package files [#############################] 100%<br />
(10/10) checking for file conflicts [#############################] 100%<br />
(10/10) checking available disk space [#############################] 100%<br />
( 1/10) upgrading gstreamer0.10-bad-plugins [#############################] 100%<br />
( 2/10) upgrading gstreamer0.10-base-plugins [#############################] 100%<br />
( 3/10) upgrading gstreamer0.10-ffmpeg [#############################] 100%<br />
( 4/10) installing libavc1394 [#############################] 100%<br />
( 5/10) installing libiec61883 [#############################] 100%<br />
( 6/10) installing wavpack [#############################] 100%<br />
( 7/10) installing gstreamer0.10-good-plugins [#############################] 100%<br />
<br />
(gconftool-2:5520): GConf-WARNING **: Client failed to connect to the D-BUS daemon:<br />
Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken. <br />
( 8/10) installing gstreamer0.10-ugly [#############################] 100%<br />
( 9/10) installing libsidplay [#############################] 100%<br />
(10/10) installing gstreamer0.10-ugly-plugins [#############################] 100%<br />
[root@osprey card0]# rmmod snd-hda-intel -f && modprobe snd-hda-intel model=auto<br />
[root@osprey card0]# rmmod snd-hda-intel -f && modprobe snd-hda-intel model=auto<br />
[root@osprey card0]# rmmod snd-hda-intel -f && modprobe snd-hda-intel model=auto<br />
<br />
= Git Remote Development =<br />
<br />
==Configure remote==<br />
===Set up remote repos===<br />
<br />
*Set up the bare repo and public served repo (for websites)<br />
<br />
$ ssh user@remote<br />
# mkdir /home/user/git/site.com.git (or /var/www/git/site.com etc.. if not shared hosting.)<br />
# cd /home/user/git/site.com.git<br />
# git init --bare --shared (or not shared)<br />
<br />
*Choose only option a or option b in the following steps; don't do them both. <br />
**a) If you want to always edit files locally and never have to do a pull on the served location: the post-receive hook pretty much automates the file uploads upon push. <br />
**b) If you want the ability to also edit files directly on the served location, keep it in its own git repository by cloning the bare repo out to the site's served directory. Each time you push from your local machine, you will then need to do a pull in this remote clone before you see your changes live. <br />
<br />
===a) Post-receive hooks for automation===<br />
<br />
'''First read above: only do step a) or b), not both. You have to make a choice depending on how you plan to use your remote server.''' <br />
<br />
*Make post-receive hook in bare repo which will ship files automatically into its served location<br />
<br />
# cat > hooks/post-receive<br />
#!/bin/sh<br />
GIT_WORK_TREE=/home/soldiert/public_html/project.com/ git checkout -f <br />
(press Ctrl-D to end input and save the hook)<br />
<br />
# chmod +x hooks/post-receive (same path as what you put in hooks/post-receive)<br />
# mkdir /home/user/git/site.com<br />
# exit<br />
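Before wiring this up against a real server, the whole option a) flow can be rehearsed locally; this sketch substitutes throwaway temporary directories for the user@remote paths (every name here is invented for the demo):<br />

```shell
#!/bin/sh
# Local rehearsal of the bare-repo + post-receive deployment flow
set -e
TMP=$(mktemp -d)

# 1. The bare repo, standing in for /home/user/git/site.com.git
git init --bare "$TMP/site.com.git" >/dev/null

# 2. The post-receive hook: check pushed commits out into the web root
mkdir "$TMP/public_html"
cat > "$TMP/site.com.git/hooks/post-receive" <<EOF
#!/bin/sh
GIT_WORK_TREE=$TMP/public_html git checkout -f master
EOF
chmod +x "$TMP/site.com.git/hooks/post-receive"

# 3. A local working repo: commit, add the bare repo as origin, push
git init "$TMP/work" >/dev/null 2>&1
cd "$TMP/work"
echo hello > index.html
git add index.html
git -c user.email=you@example.com -c user.name=you commit -q -m 'first commit'
git remote add origin "$TMP/site.com.git"
git push -q origin HEAD:master   # the hook fires here and deploys the file

ls "$TMP/public_html"
```

If the hook fires, index.html appears in the stand-in web root immediately after the push, with no pull on the served side.<br />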
<br />
===b) Clone for more control===<br />
<br />
'''First read above: only do step a) or b), not both. You have to make a choice depending on how you plan to use your remote server.'''<br />
<br />
*Clone remote bare into directory that it will be served out of to gain more control over editing from both locations. <br />
<br />
# cd /home/user/public_html<br />
# git clone /home/user/git/site.com.git <br />
# exit<br />
<br />
=== Start tracking on local===<br />
*Start tracking on local if you haven't already<br />
$ cd ~/projects/site.com<br />
$ git init<br />
<br />
===Now set up connection to remote origin on your local git repo===<br />
$ cd ~/projects/site.com<br />
$ git remote -v <br />
<br />
*Add your bare repo as the new origin. <br />
*Note: if your origin is already intact, skip the remove and add origin steps below<br />
$ git remote rm origin (if it's still attached to GitHub or someplace else that you don't want the files going)<br />
$ git remote add origin user@remote:/home/user/git/site.com.git<br />
<br />
*Push your files<br />
$ git push origin master (this pushes to the bare repo; the hook will then check it out to the served location) <br />
<br />
===Ready to develop on local=== <br />
$ cd ~/projects/site.com<br />
$ vim index.html<br />
$ git add -A<br />
$ git commit -am 'first commit message'<br />
$ git status<br />
$ git log<br />
<br />
===After committing run the following===<br />
$ git push<br />
<br />
*You're done. <br />
<br />
*If you chose option b, then you now need to run the following commands<br />
$ ssh user@remote<br />
# cd /home/user/public_html/site.com<br />
# git pull<br />
*You're done.<br />
<br />
= PHP X-Debug=<br />
<br />
This topic covers installing X-Debug on a LAMP server.<br />
<br />
== Installation ==<br />
# pacman -S xdebug<br />
== Configuration ==<br />
* add the following line to your php.ini on the bottom of the extensions list<br />
zend_extension="/lib64/php/modules/xdebug.so"<br />
* or use the non-64-bit path if needed<br />
zend_extension="/lib/php/modules/xdebug.so"<br />
* restart your server<br />
systemctl restart httpd<br />
* ensure Xdebug is enabled in phpinfo<br />
php -i | grep xdebug | less<br />
<br />
== PHPStorm usage ==<br />
This step details configuring Xdebug for development from a separate machine that has PhpStorm installed, connecting to the LAMP server. <br />
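This section is still a stub; as a starting point, remote debugging with Xdebug 2.x is typically switched on with php.ini lines like the following (the host and IDE key here are example values, not taken from this setup):<br />

```ini
; Xdebug 2.x remote-debugging settings -- example values only
zend_extension = "/lib64/php/modules/xdebug.so"
xdebug.remote_enable = 1
xdebug.remote_host = 192.168.1.10   ; the machine running PhpStorm
xdebug.remote_port = 9000           ; PhpStorm's default debug port
xdebug.idekey = PHPSTORM
```

Restart httpd afterwards and re-check with php -i | grep xdebug as above.<br />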
== References ==<br />
<br />
xdebug docs<br />
http://xdebug.org/docs/<br />
<br />
xdebug checker http://xdebug.org/find-binary.php<br />
<br />
troubleshooting info<br />
http://stackoverflow.com/questions/20752260/trouble-setting-up-and-debugging-php-storm-project-from-existing-files-in-mounte<br />
<br />
phpstorm zero configuration<br />
http://blog.jetbrains.com/phpstorm/2013/07/webinar-recording-debugging-php-with-phpstorm/<br />
<br />
phpstorm configurations<br />
https://www.jetbrains.com/phpstorm/webhelp/configuring-xdebug.html</div>
<hr />
<div>= volnoti on KDE using alsamixer =<br />
<br />
@todo this script is for alsamixer, and kde, more needs to be added to support other environments<br />
<br />
<br />
create a script /usr/local/bin/sound.sh. This script will be called by another script placed in autostart.<br />
<br />
insert the following contents in this script<br />
#!/bin/bash<br />
<br />
#this script is made for volnoti<br />
<br />
# Configuration<br />
STEP="2" # Anything you like.<br />
UNIT="dB" # dB, %, etc.<br />
<br />
# Set volume<br />
SETVOL="/usr/bin/amixer -qc 0 set Master"<br />
SETHEADPHONE="/usr/bin/amixer -qc 0 set Headphone"<br />
<br />
case "$1" in<br />
"up")<br />
$SETVOL $STEP$UNIT+<br />
;;<br />
"down")<br />
$SETVOL $STEP$UNIT-<br />
;;<br />
"mute")<br />
$SETVOL toggle<br />
;;<br />
esac<br />
<br />
# Get current volume and state<br />
VOLUME=$(amixer get Master | grep 'Mono:' | cut -d ' ' -f 6 | sed -e 's/[^0-9]//g')<br />
STATE=$(amixer get Master | grep 'Mono:' | grep -o "\[off\]")<br />
<br />
# Show volume with volnoti<br />
if [[ -n $STATE ]]; then<br />
volnoti-show -m<br />
else<br />
volnoti-show $VOLUME<br />
# If headphone is being used, mute is treated a bit differently when muted. Make sure headphones follows master mute.<br />
amixer -c 0 set Headphone unmute<br />
amixer -c 0 set Speaker unmute<br />
amixer -qc 0 set Speaker 100%<br />
fi<br />
<br />
exit 0<br />
<br />
Note, the broken line above is this, i guess the wiki mishandles brackets in code<br />
<br />
if [[ -n $STATE ]]; then<br />
<br />
ok, so its creating a link here too, ok let me spell it out for you<br />
<br />
if left bracket, left bracket, -n $STATE right bracket, right bracket; then <br />
<br />
<br />
save this script then set permissions<br />
<br />
#chown root:users /usr/local/bin/sound.sh<br />
#chmod 755 /usr/local/bin/sound.sh<br />
<br />
== xbindkeys ==<br />
<br />
install xbindkeys so that you can control volume with your keyboard<br />
pacman -S xbindkeys<br />
<br />
Logged in as your user, create an xbindkeys config file ~/.xbindkeysrc with the following contents. This example binds the volume to the F7 (mute), F8 (vol down), and F9 (vol up) keys. You can discover the m:0x0 + c:NN line for any key by running xbindkeys -k and pressing it.<br />
<br />
# increase volume<br />
"sh /usr/local/bin/sound.sh up"<br />
m:0x0 + c:75<br />
F9<br />
<br />
# Decrease volume<br />
"sh /usr/local/bin/sound.sh down"<br />
m:0x0 + c:74<br />
F8<br />
<br />
# Toggle mute<br />
"sh /usr/local/bin/sound.sh mute"<br />
m:0x0 + c:73<br />
F7<br />
<br />
#"amixer set Master playback 1+"<br />
<br />
== autostart ==<br />
<br />
Logged in as your user, create the KDE autostart script in ~/.kde4/Autostart. Name it anything, e.g. start-volnoti.sh<br />
#!/bin/bash<br />
xbindkeys<br />
volnoti<br />
<br />
Save this script. Next time KDE starts it will run it, since it's located in the Autostart folder. It will launch xbindkeys and volnoti, which will then wait for keypresses to control alsamixer. <br />
<br />
Enjoy.</div>Wolfdogghttps://wiki.archlinux.org/index.php?title=User_talk:Wolfdogg&diff=455223User talk:Wolfdogg2016-10-26T09:52:49Z<p>Wolfdogg: added instructions for adding by id, for zfs linear. not really documented anywhere i can find on the internet, yet.</p>
<hr />
<div>= Samba from OpenLDAP on Arch = <br />
== samba specific setup ==<br />
<br />
Download smbldap-tools from the AUR: https://aur.archlinux.org/packages/sm/smbldap-tools/smbldap-tools.tar.gz<br />
Run makepkg and install any dependencies needed to complete the build.<br />
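A sketch of that build-and-install process (the tarball URL is the one above; the AUR's directory layout has changed over the years, so adjust the path as needed):<br />

```
curl -LO https://aur.archlinux.org/packages/sm/smbldap-tools/smbldap-tools.tar.gz
tar xzf smbldap-tools.tar.gz
cd smbldap-tools
makepkg -si   # -s pulls in build dependencies, -i installs the finished package
```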
<br />
= Jenkins =<br />
The conf file is located at /etc/conf.d/jenkins; open it and look it over.<br />
JAVA=/usr/bin/java<br />
JAVA_ARGS=-Xmx512m<br />
JAVA_OPTS=<br />
JENKINS_USER=jenkins<br />
JENKINS_HOME=/var/lib/jenkins<br />
JENKINS_WAR=/usr/share/java/jenkins/jenkins.war<br />
JENKINS_WEBROOT=--webroot=/var/cache/jenkins<br />
JENKINS_PORT=--httpPort=8090<br />
JENKINS_AJPPORT=--ajp13Port=-1<br />
JENKINS_OPTS=<br />
JENKINS_COMMAND_LINE="$JAVA $JAVA_ARGS $JAVA_OPTS -jar $JENKINS_WAR $JENKINS_WEBROOT $JENKINS_PORT $JENKINS_AJPPORT $JENKINS_OPTS"<br />
Notice the location of the war file. cd to that directory, then run Jenkins from there.<br />
<br />
 cd /usr/share/java/jenkins<br />
 java -jar jenkins.war<br />
<br />
You can now log into your Jenkins. Note that when the war is run directly like this it defaults to port 8080; the --httpPort=8090 from the conf file only applies when Jenkins is started as a service.<br />
 http://localhost:8080<br />
<br />
=== create an automated script to start jenkins ===<br />
 cd /usr/local/bin #(or share, depending on what's already in your path; you can run $ echo $PATH to find this info)<br />
open a new file<br />
vim startjenkins<br />
add the following to the file<br />
#!/bin/bash<br />
echo<br />
echo starting jenkins now<br />
java -jar /usr/share/java/jenkins/jenkins.war<br />
save and quit vim<br />
 :wq<br />
change permissions<br />
 chown <yourusername>:users startjenkins<br />
 chmod 755 startjenkins<br />
run it this way now<br />
 $ startjenkins<br />
<br />
= Automount Samba Shares on boot=<br />
Using systemd, make an entry in fstab for the network drive with the following options:<br />
* the credentials option, to keep the username/password out of fstab<br />
* the noauto option, so the share is not mounted during boot<br />
* the nofail option, so boot does not hang if the server is unreachable<br />
* the x-systemd.automount option, to mount on first access<br />
* a timeout, e.g. x-systemd.device-timeout=, so systemd does not wait forever<br />
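A minimal sketch of such an fstab entry. The server, share, mount point, and credentials path are placeholders; double-check the exact option names against the current systemd.mount and mount.cifs documentation:<br />

```
//server/share  /mnt/share  cifs  credentials=/etc/samba/creds,noauto,nofail,x-systemd.automount,x-systemd.device-timeout=10s  0  0
```

The credentials file holds username= and password= lines and should be readable by root only (chmod 600).<br />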
<br />
= New ZFS Setup =<br />
[root@falcon wolfdogg]# zpool create -f -m /san san ata-ST2000DM001-9YN164_W1E07E0G ata-WDC_WD20EZRX-00DC0B0_WD-WMC1T3458346 ata-WDC_WD20EZRX-19D8PB0_WD-WCC4M0428332<br />
[root@falcon wolfdogg]# zpool list<br />
NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT<br />
san 5.44T 604K 5.44T 0% 1.00x ONLINE -<br />
[root@falcon wolfdogg]# zfs list<br />
NAME USED AVAIL REFER MOUNTPOINT<br />
san 544K 5.35T 136K /san<br />
[root@falcon wolfdogg]# zpool status<br />
pool: san<br />
state: ONLINE<br />
scan: none requested<br />
config:<br />
NAME STATE READ WRITE CKSUM<br />
san ONLINE 0 0 0<br />
ata-ST2000DM001-9YN164_W1E07E0G ONLINE 0 0 0<br />
ata-WDC_WD20EZRX-00DC0B0_WD-WMC1T3458346 ONLINE 0 0 0<br />
ata-WDC_WD20EZRX-19D8PB0_WD-WCC4M0428332 ONLINE 0 0 0<br />
errors: No known data errors<br />
<br />
= ZFS add to linear span by id =<br />
 $ cd /dev/disk/by-id/<br />
 [root@falcon by-id]$ ls -al<br />
total 0<br />
drwxr-xr-x 2 root root 560 Oct 26 02:45 .<br />
drwxr-xr-x 8 root root 160 Oct 26 02:45 ..<br />
lrwxrwxrwx 1 root root 9 Oct 26 02:45 ata-ST2000DM001-1E6164_Z1E6EQ15 -> ../../sde<br />
lrwxrwxrwx 1 root root 9 Oct 26 02:45 ata-ST2000DM001-9YN164_W1E07E0G -> ../../sdc<br />
lrwxrwxrwx 1 root root 10 Oct 26 02:45 ata-ST2000DM001-9YN164_W1E07E0G-part1 -> ../../sdc1<br />
lrwxrwxrwx 1 root root 10 Oct 26 02:45 ata-ST2000DM001-9YN164_W1E07E0G-part9 -> ../../sdc9<br />
lrwxrwxrwx 1 root root 9 Oct 26 02:45 ata-ST3250823AS_5ND0MS6K -> ../../sda<br />
lrwxrwxrwx 1 root root 10 Oct 26 02:45 ata-ST3250823AS_5ND0MS6K-part1 -> ../../sda1<br />
lrwxrwxrwx 1 root root 10 Oct 26 02:45 ata-ST3250823AS_5ND0MS6K-part2 -> ../../sda2<br />
lrwxrwxrwx 1 root root 10 Oct 26 02:45 ata-ST3250823AS_5ND0MS6K-part3 -> ../../sda3<br />
lrwxrwxrwx 1 root root 10 Oct 26 02:45 ata-ST3250823AS_5ND0MS6K-part4 -> ../../sda4<br />
lrwxrwxrwx 1 root root 10 Oct 26 02:45 ata-ST3250823AS_5ND0MS6K-part5 -> ../../sda5<br />
lrwxrwxrwx 1 root root 9 Oct 26 02:45 ata-WDC_WD20EZRX-00DC0B0_WD-WMC1T3458346 -> ../../sdb<br />
lrwxrwxrwx 1 root root 10 Oct 26 02:45 ata-WDC_WD20EZRX-00DC0B0_WD-WMC1T3458346-part1 -> ../../sdb1<br />
lrwxrwxrwx 1 root root 10 Oct 26 02:45 ata-WDC_WD20EZRX-00DC0B0_WD-WMC1T3458346-part9 -> ../../sdb9<br />
lrwxrwxrwx 1 root root 9 Oct 26 02:45 ata-WDC_WD20EZRX-19D8PB0_WD-WCC4M0428332 -> ../../sdd<br />
lrwxrwxrwx 1 root root 10 Oct 26 02:45 ata-WDC_WD20EZRX-19D8PB0_WD-WCC4M0428332-part1 -> ../../sdd1<br />
lrwxrwxrwx 1 root root 10 Oct 26 02:45 ata-WDC_WD20EZRX-19D8PB0_WD-WCC4M0428332-part9 -> ../../sdd9<br />
lrwxrwxrwx 1 root root 9 Oct 26 02:45 wwn-0x5000c50045406de0 -> ../../sdc<br />
lrwxrwxrwx 1 root root 10 Oct 26 02:45 wwn-0x5000c50045406de0-part1 -> ../../sdc1<br />
lrwxrwxrwx 1 root root 10 Oct 26 02:45 wwn-0x5000c50045406de0-part9 -> ../../sdc9<br />
lrwxrwxrwx 1 root root 9 Oct 26 02:45 wwn-0x5000c500658cc2b2 -> ../../sde<br />
lrwxrwxrwx 1 root root 9 Oct 26 02:45 wwn-0x50014ee25e8d29db -> ../../sdd<br />
lrwxrwxrwx 1 root root 10 Oct 26 02:45 wwn-0x50014ee25e8d29db-part1 -> ../../sdd1<br />
lrwxrwxrwx 1 root root 10 Oct 26 02:45 wwn-0x50014ee25e8d29db-part9 -> ../../sdd9<br />
lrwxrwxrwx 1 root root 9 Oct 26 02:45 wwn-0x50014ee6034e422c -> ../../sdb<br />
lrwxrwxrwx 1 root root 10 Oct 26 02:45 wwn-0x50014ee6034e422c-part1 -> ../../sdb1<br />
lrwxrwxrwx 1 root root 10 Oct 26 02:45 wwn-0x50014ee6034e422c-part9 -> ../../sdb9<br />
[root@falcon by-id]$ zpool status<br />
pool: san<br />
state: ONLINE<br />
status: Some supported features are not enabled on the pool. The pool can<br />
still be used, but some features are unavailable.<br />
action: Enable all features using 'zpool upgrade'. Once this is done,<br />
the pool may no longer be accessible by software that does not support<br />
the features. See zpool-features(5) for details.<br />
scan: scrub repaired 0 in 15h41m with 0 errors on Sat Oct 22 11:11:13 2016<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
san ONLINE 0 0 0<br />
ata-ST2000DM001-9YN164_W1E07E0G ONLINE 0 0 0<br />
ata-WDC_WD20EZRX-00DC0B0_WD-WMC1T3458346 ONLINE 0 0 0<br />
ata-WDC_WD20EZRX-19D8PB0_WD-WCC4M0428332 ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
[root@falcon by-id]$ zpool add san wwn-0x5000c500658cc2b2 -Pn<br />
would update 'san' to the following configuration:<br />
san<br />
/dev/disk/by-id/ata-ST2000DM001-9YN164_W1E07E0G-part1<br />
/dev/disk/by-id/ata-WDC_WD20EZRX-00DC0B0_WD-WMC1T3458346-part1<br />
/dev/disk/by-id/ata-WDC_WD20EZRX-19D8PB0_WD-WCC4M0428332-part1<br />
/dev/disk/by-id/wwn-0x5000c500658cc2b2<br />
[root@falcon by-id]$ zpool add san wwn-0x5000c500658cc2b2 -P<br />
[root@falcon by-id]$ zpool status<br />
pool: san<br />
state: ONLINE<br />
status: Some supported features are not enabled on the pool. The pool can<br />
still be used, but some features are unavailable.<br />
action: Enable all features using 'zpool upgrade'. Once this is done,<br />
the pool may no longer be accessible by software that does not support<br />
the features. See zpool-features(5) for details.<br />
scan: scrub repaired 0 in 15h41m with 0 errors on Sat Oct 22 11:11:13 2016<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
san ONLINE 0 0 0<br />
ata-ST2000DM001-9YN164_W1E07E0G ONLINE 0 0 0<br />
ata-WDC_WD20EZRX-00DC0B0_WD-WMC1T3458346 ONLINE 0 0 0<br />
ata-WDC_WD20EZRX-19D8PB0_WD-WCC4M0428332 ONLINE 0 0 0<br />
wwn-0x5000c500658cc2b2 ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
<br />
<br />
= Git three stage web deployment =<br />
<br />
I'm now using git to handle my web repos. <br />
<br />
Currently my server environment is not hosted live. The intention is to have a secure development environment containing the sandbox and stage environments. Once I fully explore this setup I will add the 3rd step: pushing stage to a live shared host using the same methods described here. <br />
<br />
<br />
This set up uses /srv/http/sandbox for the development directory. If you insist on developing out of /home/<username>/public_html then you can either substitute all mention of sandbox in this wiki with it, or you can just turn your public_html folder into a symlink leading to /srv/http/sandbox after completing these exercises.<br />
<br />
<br />
The following features detail the environment we are setting up using this wiki:<br />
* Central location for git repositories (--separate-git-dir) /srv/http/repos/git<br />
* Development sandbox /srv/http/sandbox<br />
* Staging ground for final pre-live q/a testing /srv/http/stage<br />
<br />
<br />
Workflow:<br />
* clone your repo into your sandbox area, work on your files<br />
* commit changes from sandbox when each implementation or task is complete, then push those changes to the central sandbox repo (/srv/http/repos/git/website.com) where you will then be able to view them on dev (http://dev.website.com)<br />
* At some point, when ready to stage, a "push" from the central sandbox repo to a separate central stage repo (/srv/http/repos/git/website.com.stage) will be made <br />
* Hooks located in the central stage repo will auto deploy to stage (/srv/http/stage/website) when the push runs. (This deploys only the web files, and NOT the hidden .git repo folder which would otherwise make your site vulnerable; hence the reason for this wiki. Otherwise it would be as easy as going to stage and cloning, for example)<br />
<br />
== Apache adjustments ==<br />
Set up a new vhost scheme for the dev and stage environments<br />
<br />
=== Edit vhost ===<br />
<br />
Add or edit the /etc/httpd/conf/extra/httpd-vhosts.conf file to something similar<br />
<br />
# default route<br />
<VirtualHost *:80><br />
ServerName <hostname><br />
ServerAlias <hostname><br />
VirtualDocumentRoot "/srv/http"<br />
</VirtualHost><br />
<br />
#sandbox (dev) root route<br />
<Virtualhost *:80><br />
ServerName dev<br />
DocumentRoot "/srv/http/sandbox"<br />
</Virtualhost><br />
<br />
#sandbox (dev) subdomains route<br />
# http://httpd.apache.org/docs/2.0/mod/mod_vhost_alias.html#interpol<br />
<VirtualHost *:80><br />
ServerName dev.sub.com<br />
ServerAlias dev.*<br />
VirtualDocumentRoot "/srv/http/sandbox/%2+"<br />
</VirtualHost><br />
<br />
#stage root route<br />
<Virtualhost *:80><br />
ServerName stage<br />
DocumentRoot "/srv/http/stage"<br />
</VirtualHost><br />
<br />
#stage subdomains route<br />
# http://httpd.apache.org/docs/2.0/mod/mod_vhost_alias.html#interpol<br />
<VirtualHost *:80><br />
ServerName stage.sub.com<br />
ServerAlias stage.*<br />
VirtualDocumentRoot "/srv/http/stage/%2+"<br />
</VirtualHost><br />
<br />
restart apache <br />
systemctl restart httpd<br />
 #or if you're still using initscripts<br />
rc.d restart httpd<br />
<br />
=== Create dirs === <br />
<br />
touch /srv/http/index.php<br />
mkdir /srv/http/sandbox<br />
touch /srv/http/sandbox/index.php<br />
mkdir /srv/http/stage<br />
touch /srv/http/stage/index.php<br />
<br />
Note: for this to work, each website's directory name must match the address it will be served at, dots included. E.g. if the website is google.com, the dir name must be google.com, so your dev site ends up at /srv/http/sandbox/google.com and the vhost can map http://dev.google.com to it.<br />
<br />
=== Edit hosts files ===<br />
<br />
Assuming your development machine is not the same as the server, you need to make sure to add an entry for each site, during creation of the site, to your hosts file. <br />
<br />
I will use server IP 192.168.1.99, hostname 'myserver', and google.com as the example website URL; substitute your own host names and website URLs. <br />
<br />
* linux <br />
add the following lines to /etc/hosts<br />
 192.168.1.99 dev myserver<br />
 192.168.1.99 stage myserver <br />
 192.168.1.99 dev.google.com myserver<br />
 192.168.1.99 stage.google.com myserver<br />
<br />
* windows<br />
<br />
navigate to the following file logged in as admin, <br />
%WINDIR%/system32/drivers/etc/hosts <br />
<br />
or run cmd as administrator, <br />
paste the following line there<br />
notepad %WINDIR%/system32/drivers/etc/hosts <br />
<br />
add your sites to the hosts file<br />
192.168.1.99 dev<br />
192.168.1.99 stage<br />
192.168.1.99 dev.google.com<br />
192.168.1.99 stage.google.com<br />
#dev.anothersite.com<br />
#stage.anothersite.com<br />
#etc...<br />
<br />
* mac, you get the idea...<br />
<br />
== Git usage ==<br />
Install git, and make the following changes<br />
<br />
=== Set up Git ===<br />
<br />
* Create a central location for the git repos. (note, I'll use a group called 'webteam' as an example. This group separates what apache needs permission to from what a web developer needs permission to; each web developer is a member of this group) <br />
$ su<br />
$ mkdir -p /srv/http/repos/git<br />
$ chown http:webteam /srv/http/* -R<br />
$ chmod 775 /srv/http/* -R<br />
<br />
* sandbox push global configs<br />
You will probably get an error stating that you can't push to the branch that you cloned from; more research needs to be done on this, but it seems pretty straightforward. Set the following config variable to squelch this.<br />
 $ git config --global receive.denyCurrentBranch warn #you can set it to false instead of warn once you're sure it doesn't cause any problems<br />
<br />
* stage push global configs<br />
Git version 2.0+ will start using a new push.default standard called "simple". "simple" means the push will refuse if the upstream's branch name differs from the local one, which in this case it does. (website.com vs. stage.website.com) If your version is less than 2.0, which to date isn't out yet, we can make it forward compatible. Set the following config variable to squelch this.<br />
$ git config --global push.default matching<br />
<br />
=== Add or enroll a new site ===<br />
Once your Git environment is set up, all you have to do is start from this point each time you want to create a new website. <br />
<br />
* add a website (or see below if you are enrolling an existing site)<br />
 $ mkdir -p /srv/http/sandbox/<website.com> # -p in case you don't have a sandbox dir yet<br />
*alternatively enroll an existing website<br />
To enroll an existing website, move it into the /srv/http/sandbox directory, renaming it to match the address it will be accessible at when live. (naming is important here: e.g. if your folder is named website-com, mv it to website.com. The .com portion is not mandatory if no domain name is intended; just know that you will access it in your browser exactly as you name it, unless you customize your own mapping strategy in the vhosts file.)<br />
$ mv /home/<username>/public_html/website-com /srv/http/sandbox/website.com<br />
<br />
*continue here once you relocated your website or added a new website to the sandbox<br />
$ cd /srv/http/sandbox/website.com<br />
 $ git init --separate-git-dir=/srv/http/repos/git/website.com<br />
$ git add -A<br />
$ git commit -m 'comment here'<br />
<br />
If you're going to actively develop this site, then continue below to stageify the site. If you're just archiving an old site for later development you can stop here, delete the site folder (e.g. /srv/http/sandbox/website.com), then "recreate the worktree" from the repo, as detailed below, and stageify at a later time.<br />
<br />
=== Stageify ===<br />
* create an empty stage repo for this site (start here if you already have an existing site and repo; substitute sandbox with your site location)<br />
$ mkdir -p /srv/http/repos/git/website.com.stage<br />
$ cd /srv/http/repos/git/website.com.stage<br />
$ git init --bare<br />
<br />
* make the site's stage dir and create the hook (while still in the stage repo)<br />
$ mkdir /srv/http/stage/website.com<br />
$ cat > hooks/post-receive<br />
#!/bin/sh<br />
GIT_WORK_TREE=/srv/http/stage/website.com git checkout -f<br />
<br />
#press ctrl +d to save and exit cat<br />
<br />
$ chmod +x hooks/post-receive<br />
<br />
* Define the stage mirror and create a master branch tree (you MUST run both of the following commands from the sandbox repo; neither the sandbox checkout nor the stage repo will suffice)<br />
 $ cd /srv/http/repos/git/website.com #note, important that you're in this exact path, as opposed to website.com.stage (if your sandbox .git dir is somewhere else then cd INTO it)<br />
 $ git remote add stage.website.com /srv/http/repos/git/website.com.stage #(note, stage.website.com is just a name; it's the remote name we will call when we stage)<br />
$ git push stage.website.com +master:refs/heads/master<br />
<br />
* From here forward, the push to stage can be run simply from within your root code base (i.e. you don't need to be in the .git dir anymore)<br />
$ git push stage.website.com<br />
See workflow below for more examples on this.<br />
<br />
* In case of an upstream branch error when you run the push, you can do the following.<br />
Since we have globally configured push.default to matching, an upstream branch error may occur. To squelch this error we need to indicate our upstream branch. Since we named our remote "stage.website.com", we issue the following command from our sandbox repo (/srv/http/repos/git/website.com)<br />
$ cd /srv/http/repos/git/website.com<br />
$ git push --set-upstream stage.website.com master<br />
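The whole stageify recipe above can be rehearsed end-to-end in a throwaway directory before touching the real server. This sketch assumes git is available; all paths are scratch stand-ins for the /srv/http layout, not the real ones.<br />

```shell
set -e
tmp=$(mktemp -d)
mkdir -p "$tmp/repos"

# sandbox worktree with one commit (identity flags are throwaway values)
git -c init.defaultBranch=master init -q "$tmp/sandbox/website.com"
echo '<?php echo "hi";' > "$tmp/sandbox/website.com/index.php"
git -C "$tmp/sandbox/website.com" add -A
git -C "$tmp/sandbox/website.com" -c user.name=me -c user.email=me@example.com commit -qm 'initial'

# bare stage repo plus the post-receive deploy hook
git -c init.defaultBranch=master init -q --bare "$tmp/repos/website.com.stage"
mkdir -p "$tmp/stage/website.com"
cat > "$tmp/repos/website.com.stage/hooks/post-receive" <<EOF
#!/bin/sh
GIT_WORK_TREE=$tmp/stage/website.com git checkout -f
EOF
chmod +x "$tmp/repos/website.com.stage/hooks/post-receive"

# wire up the remote and push; the hook deploys the files (and no .git dir)
git -C "$tmp/sandbox/website.com" remote add stage.website.com "$tmp/repos/website.com.stage"
git -C "$tmp/sandbox/website.com" push -q stage.website.com +master:refs/heads/master
ls "$tmp/stage/website.com"
```

After the push, the stage directory contains index.php but no .git folder, which is exactly the property the hook-based deploy is meant to guarantee.<br />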
<br />
== General work flow ==<br />
<br />
Once you have your environment all setup, you can follow a general work flow pattern in order to take advantage of the full functionality.<br />
<br />
=== New development process ===<br />
<br />
* If you plan to just archive your site, you may want to delete it from the sandbox once it's archived to the git repository using the steps above, since it's nicely compressed and packed away in your repo dir. If this is the case then go ahead and delete your sandbox web folder at this point so we can begin the flow from the furthest possible point. (note, only do this step if you have your website committed to the git repo as outlined above, i.e. your .git folder is NOT located inside your code base dir), '''otherwise continue "Work on files" below'''.<br />
 $ cd /srv/http/sandbox<br />
 $ rm -rf website.com<br />
<br />
You might want to delete your worktree after putting a project on the back-burner when you are not planning to work on it for a long period, or you may have a crowded project directory and want your sanity back. If you deleted your worktree previously, or are restoring an old site from a repo for whatever reason, and your project is stored only inside the compressed repo, then you will need to re-create the worktree from the repo before working on your files. The following methods will guide you through this. <br />
<br />
==== recreate worktree ====<br />
<br />
Note, you won't need to do these steps if you still have your worktree in place and never deleted it. <br />
<br />
===== method 1 ===== <br />
(preferred method)<br />
<br />
One way to get the worktree back is to extract the worktree out of the repo, then attach it to the repos branch<br />
* make the new site folder inside your sandbox<br />
$ mkdir /srv/http/sandbox/website.com<br />
<br />
* switch to the new worktree directory and ''init'' for the first time<br />
$ cd /srv/http/sandbox/website.com<br />
$ git --git-dir=/srv/http/repos/git/website.com --work-tree=. init <br />
$ echo "gitdir: /srv/http/repos/git/website.com" > .git<br />
alternatively you can run those last two commands on one line if it's easier for you<br />
$ git --git-dir=/srv/http/repos/git/website.com --work-tree=. init && echo "gitdir: /srv/http/repos/git/website.com" > .git<br />
<br />
* check the branch you're on before pulling down the files<br />
$ git branch<br />
<br />
* switch to the repo and extract the files to the new site directory you just created. Make sure you're checking out the branch you intend; normally that will be "master" unless you have branched the repo. Replace the word "master" in the following command if you are working on a different branch.<br />
$ cd /srv/http/repos/git/website.com<br />
$ git archive master | tar -x -C /srv/http/sandbox/website.com<br />
note, if you get an error in the above step <br />
 could not switch to '/some/dir': No such file or directory<br />
then you may be trying to extract the files to a directory other than the one used when the repo was made, ''otherwise continue below''. <br />
<br />
If you did receive this error then you need to edit the config file in the repo to point to the new directory. To do this, open the ./config file in your favorite editor and edit the worktree path to point to the new site directory that you created.<br />
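Method 1 can also be rehearsed in a scratch directory before trying it on a real repo. This sketch assumes git is available; the paths are throwaway stand-ins for /srv/http/repos/git and /srv/http/sandbox.<br />

```shell
set -e
tmp=$(mktemp -d)
mkdir -p "$tmp/repos"

# a source site with one commit (identity flags are throwaway values)
git -c init.defaultBranch=master init -q "$tmp/src"
echo '<?php // home' > "$tmp/src/index.php"
git -C "$tmp/src" add -A
git -C "$tmp/src" -c user.name=me -c user.email=me@example.com commit -qm 'site'

# stand-in for the central repo dir (/srv/http/repos/git/website.com)
git clone -q --bare "$tmp/src" "$tmp/repos/website.com"

# recreate the worktree from the repo: tracked files only, no .git directory
mkdir -p "$tmp/sandbox/website.com"
git --git-dir="$tmp/repos/website.com" archive master | tar -x -C "$tmp/sandbox/website.com"
ls "$tmp/sandbox/website.com"
```

Note that git archive extracts only the tracked files, which is why the recreated worktree has no .git dir until you add the gitdir pointer as described above.<br />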
<br />
===== method 2 (alt method) ===== <br />
Another way to get the worktree back is to clone it out from the main repo, then delete the .git dir, then create the .git symlink. This way just seems a bit messy because one needs to delete the .git dir after unnecessarily creating it, but as long as you replace it with the proper symlink it works fine. <br />
<br />
* clone the site to sandbox<br />
$ cd /srv/http/sandbox<br />
$ git clone /srv/http/repos/git/website.com<br />
* delete the hidden .git folder that was created inside this worktree<br />
$ rm -rf .git<br />
* recreate the .git symlink <br />
$ echo "gitdir: /srv/http/repos/git/website.com" > .git<br />
* checkout master to get things back on track (is this step even necessary?)<br />
$ git checkout master<br />
<br />
==== Work on files ====<br />
<br />
* Work on files<br />
$ cd website.com <br />
$ git status<br />
edit your files.....<br />
<br />
==== commit the changes ====<br />
<br />
$ git status #notice files are not staged for commit<br />
$ git add -A # or just individual files (e.g. git add file1.php file2.php) etc..<br />
$ git status # make sure the files you want to commit are listed to be committed<br />
$ git commit -m 'this is what i did to these files'<br />
$ git status # should come back clean<br />
$ git log # you can see your commit comments here<br />
<br />
visit http://dev.website.com to test the new functionality that was just committed<br />
<br />
If all looks well, you need to push your changes to the sandbox branch master (repo) that you cloned from. This accomplishes two things<br />
<br />
* liberates your sandbox clone to be deleted at will<br />
* updates branch master so the files are waiting to be pushed(deployed) to stage<br />
$ git push<br />
<br />
=== Deploy to stage ===<br />
Once your separate tasks have been individually committed, and you have thoroughly tested them in the sandbox environment, you can deploy to the stage environment for final q/a before going live. <br />
<br />
*deploy to stage<br />
$ cd /srv/http/repos/git/website.com<br />
$ git push stage.website.com<br />
<br />
now visit http://stage.website.com to do your final q/a<br />
<br />
If any mistakes are spotted on stage, i.e. if everything is not working as expected, then your stage environment just paid for itself. Follow the below steps for damage control to revert changes back on stage <br />
<br />
=== Damage control ===<br />
<br />
If mistakes are spotted on stage, we need to revert our changes<br />
<br />
* @todo: commands for reverting<br />
* @todo: workflow<br />
<br />
<br />
now that stage has been reset to the last-known-good, go back to the sandbox and edit<br />
<br />
== Why separate repository from web folder? ==<br />
<br />
For me, it just makes sense. The beauty of it is you can bulk delete all or any of your web files in the sandbox area when they start to get in the way. When you're ready to work on that site again, you can just "recreate the worktree".<br />
<br />
It may also help to have the repos outside of the web folders if you run any type of incremental backups, e.g. rsnapshot or rsync. You can choose to back up only the repos, which are compressed, or to back up everything except sandbox, for example. But mainly, it is the fact that you can delete your sandbox websites without losing the repo.<br />
<br />
== Credits ==<br />
Thanks to <br />
* Abhijit Menon-Sen at http://toroid.org/ams/git-website-howto which is where all my google searches finally landed and paid off for what I was trying to accomplish.<br />
<br />
* niks and Charles Bailey at http://stackoverflow.com/questions/505467/can-i-store-the-git-folder-outside-the-files-i-want-tracked for getting me over the hump of getting the worktree properly out of a repo archive<br />
<br />
= ZFS-FUSE Implementation =<br />
<br />
I'm trying to utilize the ZFS filesystem to create a flexible array of different-sized drives that will serve as a data-backup volume shared across a samba network. ZFS can be taken advantage of here because of its flexibility with storage pools. ZFS also has some of the best features catering to data integrity. One drawback is that some of this comes at the expense of file transfer rates. You can offset this somewhat by adding small cache devices; even a usb stick or just about any other type of media can serve as a cache drive. Speed will not be a consideration on this array since its main role is data integrity and safety. <br />
<br />
The test array initially used here was 3 drives: one 2 TB and two 500 GB. LVM was also explored as a way to span drives into one large volume. <br />
<br />
== Installation ==<br />
<br />
1) To get to step one on the ZFS-FUSE page https://wiki.archlinux.org/index.php/ZFS_on_FUSE I had to do a few things, namely install yaourt, which was not entirely straightforward. I will be vague in these instructions since I have already completed these steps and they are coming from memory. <br />
<br />
I went to the AUR and copied the PKGBUILD contents into a new PKGBUILD file in my packages directory /home/wolfdogg/packages/packages/yaourt/PKGBUILD,<br />
then ran makepkg -s from that folder, after battling a succession of errors (invalid signatures in pacman, so I needed to re-init pacman-key; then I had to create a new group called sudo, add my user to it, and edit the /etc/sudoers file accordingly, which seemed like the best way to get my user into sudoers and eliminate the risk of accidentally doing something as root during the makepkg process),<br />
and finally, pacman -U yaourt<br />
<br />
2) Then I followed the steps on the wiki at https://wiki.archlinux.org/index.php/ZFS_on_FUSE . For me, the NFS portion got a bit confusing at https://wiki.archlinux.org/index.php/ZFS_on_FUSE#NFS_shares . At the [code]zfs set sharenfs=[/code] step I got side-tracked and will come back to it at a later time. <br />
<br />
I found the manual for Solaris ZFS here [http://docs.huihoo.com/opensolaris/solaris-zfs-administration-guide/html/] and it looks like it will provide all the info needed, especially this section of it [http://docs.huihoo.com/opensolaris/solaris-zfs-administration-guide/html/ch04s03.html]<br />
<br />
Now would be a good time to read the manuals<br />
#man zpool<br />
#man zfs<br />
<br />
== Inventory drives ==<br />
<br />
*Hook up any drives that you want to add to your new filesystem. <br />
<br />
*Take an inventory of your current drives and partitions. Note any existing arrays and partitions. See md0 (an mdadm array), zfs-kstat/pool (a zfs pool), and /pool/backup (a dataset) in the example below.<br />
<br />
*View the drive information<br />
# blkid -o list -c /dev/null<br />
<br />
*List the partitions, and inspect your mounts<br />
# lsblk -f<br />
<br />
sdb 8:16 0 1.8T 0 disk<br />
└─md0 9:0 0 2.7T 0 linear<br />
sdc 8:32 0 465.8G 0 disk<br />
└─md0 9:0 0 2.7T 0 linear<br />
<br />
*Use one of the following for each disk if you want to view partition tables and or reformat your drives:<br />
*If you have a MBR partition table <br />
#fdisk -l<br />
*If you have GPT partition table<br />
#gdisk -l /dev/sdb<br />
<br />
*View the mounts<br />
# findmnt<br />
<br />
TARGET SOURCE FSTYPE OPTIONS<br />
/zfs-kstat kstat fuse rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other<br />
└─/pool pool fuse.zfs rw,relatime,user_id=0,group_id=0,default_permissions,allow_other<br />
└─/pool/backup pool/backup fuse.zfs rw,relatime,user_id=0,group_id=0,default_permissions,allow_other<br />
<br />
== Prepare Drives ==<br />
If you already know what you're doing you can use cgdisk; here is a nice cgdisk reference [http://www.rodsbooks.com/gdisk/cgdisk-walkthrough.html]. Otherwise, follow the gdisk example walkthrough below.<br />
# gdisk<br />
Type device or filename<br />
We will use /dev/sdb for this example<br />
# /dev/sdb<br />
Command:<br />
o will create a new partition table<br />
# o<br />
Proceed? <br />
This option will delete all partition and create a new protective MBR<br />
# y<br />
Command:<br />
n will create a new partition<br />
# n<br />
Partition number<br />
We should use 1 if its the first<br />
# 1<br />
Press enter for default first sector<br />
Press enter for default last sector<br />
Hexcode: <br />
# L<br />
L Shows all codes, choose a filesystem hex code and type it in<br />
We will go with bf00 (Solaris root) for this example<br />
# bf00<br />
Now lets write partition to disk<br />
Command:<br />
# w<br />
Final checks complete<br />
Do you want to proceed<br />
# y<br />
<br />
Now your disk has a new clean partition table<br />
<br />
Here are some steps to follow, so far its all i have.<br />
<br />
=== Create the Zpool ===<br />
<br />
Create the pool (notice I'm not using raidz here; you may want to if your drives are all the same size)<br />
# zpool create <pool_name> /dev/sdb /dev/sdc /dev/sdd /dev/sde<br />
*Also note, if you have 4 or more drives consider a redundant layout (mirror or raidz vdevs) so that the redundancy ZFS is so well known for actually applies; a plain span like this has none.<br />
<br />
"pool" is the name of the current test pool, and the mount point.<br />
<br />
'''RAIDZ1, RAIDZ2'''<br />
<br />
*To create a RAIDZ you need at least 3 drives of the same size <br />
*'raidz' is an alias for 'raidz1' (single parity); use 'raidz2' if you have an extra drive to devote to a second parity stripe for extra redundancy.<br />
# zpool create pool raidz /dev/sdb /dev/sdc /dev/sdd<br />
<br />
'''Alternatively, create a raid span.''' Note, no raid type has been specified (disk, file, raidz, etc...)<br />
# zpool create pool /dev/sdb /dev/sdc /dev/sdd<br />
<br />
(see below to create an mdadm linear span instead (jbod)) <br />
<br />
If you do some testing and get stuck with a drive reporting unavailable after you hook it back up, sometimes a reboot is needed (I probably just don't know enough about this yet), but some commands that have been helpful:<br />
<br />
Get the list and size of the pool<br />
# zpool list<br />
<br />
To get the status<br />
# zpool status<br />
<br />
=== Create the ZFS Filesystem Hierarchy (dataset) ===<br />
<br />
Let's create a dataset <br />
# zfs create pool/backup<br />
# zfs set mountpoint=/backup pool/backup<br />
<br />
You can create a child file system inside the parent; the child file system automatically inherits the parent's properties <br />
# zfs create pool/backup/<computer_name><br />
<br />
Per the manual, ZFS automatically mounts the file system when the zfs mount -a command is invoked (without editing /etc/vfstab)<br />
<br />
=== List and inspect ===<br />
<br />
List and inspect your new zfs file system<br />
# zfs list<br />
NAME USED AVAIL REFER MOUNTPOINT<br />
pool 106K 2.68T 21K /pool<br />
pool/backup 21K 2.68T 21K /pool/backup<br />
<br />
Now let's look at all drives to see what this looks like. View mount status and disk size<br />
# mount -l<br />
<br />
# df<br />
Filesystem 1K-blocks Used Available Use% Mounted on<br />
rootfs 15087420 2356784 11962328 17% /<br />
dev 1991168 0 1991168 0% /dev<br />
run 2027072 316 2026756 1% /run<br />
/dev/sda3 15087420 2356784 11962328 17% /<br />
shm 2027072 0 2027072 0% /dev/shm<br />
tmpfs 2027072 28 2027044 1% /tmp<br />
/dev/sda4 95953460 84172112 6904016 93% /home<br />
/dev/sda1 99550 19445 74886 21% /boot<br />
pool 2873622443 23 2873622420 1% /pool<br />
pool/backup 2873622441 21 2873622420 1% /pool/backup<br />
<br />
To learn more about your configuration options run the following command<br />
# zfs get all | less<br />
<br />
=== Destroy Array and Pools ===<br />
<br />
'''WARNING - Backup all your data before beginning'''<br />
<br />
If you need to break down your zpool or zfs datasets follow below. <br />
<br />
Deactivate the array using mdadm RAID manager (unmount)<br />
# mdadm -S /dev/md0<br />
<br />
I chose to delete the corresponding line in /etc/mdadm.conf as well. Not sure if it was needed, but it seemed there were still bits and pieces lying around that needed removing<br />
# vim /etc/mdadm.conf<br />
<br />
To destroy a dataset in the pool <br />
# zfs destroy <filesystemvolume><br />
<br />
Or you can destroy the entire pool<br />
# zpool destroy <pool><br />
<br />
If you can't totally destroy the pool, or are trying to create a new pool with the same name, it's possible to trace clues about which process is using it with <br />
# fuser -a /pool <br />
<br />
Then run top to find that process PID<br />
# top<br />
<br />
Or run lsof<br />
# lsof | grep zfs-fuse | less<br />
Go from there....<br />
<br />
Now move on to [[#Linear_RAID_.28jbod.29_filesystem_using_mdadm_.2F_lvm]] or [[#RAIDZ_Filesystem_Configuration]]<br />
<br />
== Linear RAID (jbod) filesystem using mdadm / lvm ==<br />
<br />
I decided to explore other options until I can figure out whether the ZFS span is set up properly. <br />
<br />
=== Create the array ===<br />
<br />
Destroy the existing zfs pool and break down the array where applicable. Use this as a guide: <br />
[[#Destroy_Array_and_Pools]]<br />
<br />
Use mdadm to create the span<br />
# mdadm --create /dev/md0 --level=linear --raid-devices=3 /dev/sd[bcd]<br />
<br />
Get the status detail using mdadm <br />
<br />
# mdadm --misc --detail /dev/md0<br />
/dev/md0:<br />
Version : 1.2<br />
Creation Time : Mon Jun 25 13:52:31 2012<br />
Raid Level : linear<br />
Array Size : 2930286671 (2794.54 GiB 3000.61 GB)<br />
Raid Devices : 3<br />
Total Devices : 3<br />
Persistence : Superblock is persistent<br />
Update Time : Mon Jun 25 13:52:31 2012<br />
State : clean<br />
Active Devices : 3<br />
Working Devices : 3<br />
Failed Devices : 0<br />
Spare Devices : 0<br />
Rounding : 0K<br />
Name : falcon:0 (local to host falcon)<br />
UUID : 141f34d0:2b2c0973:4a6f070b:17b772ec<br />
Events : 0<br />
Number Major Minor RaidDevice State<br />
0 8 16 0 active sync /dev/sdb<br />
1 8 32 1 active sync /dev/sdc<br />
2 8 48 2 active sync /dev/sdd<br />
<br />
=== Create volume and groups ===<br />
<br />
Create the physical volume<br />
# pvcreate /dev/md0<br />
<br />
Display the volume<br />
# pvdisplay<br />
"/dev/md0" is a new physical volume of "2.73 TiB"<br />
--- NEW Physical volume ---<br />
PV Name /dev/md0<br />
VG Name<br />
PV Size 2.73 TiB<br />
Allocatable NO<br />
PE Size 0<br />
Total PE 0<br />
Free PE 0<br />
Allocated PE 0<br />
PV UUID 1Hr3ay-L0mZ-33GD-4ZeM-EOtW-lkAz-M4REMJ<br />
<br />
Create the volume group<br />
# vgcreate VolGroupArray /dev/md0<br />
Volume group "VolGroupArray" successfully created<br />
<br />
Display the volume groups<br />
# vgdisplay<br />
--- Volume group ---<br />
VG Name VolGroupArray<br />
System ID<br />
Format lvm2<br />
Metadata Areas 1<br />
Metadata Sequence No 1<br />
VG Access read/write<br />
VG Status resizable<br />
MAX LV 0<br />
Cur LV 0<br />
Open LV 0<br />
Max PV 0<br />
Cur PV 1<br />
Act PV 1<br />
VG Size 2.73 TiB<br />
PE Size 4.00 MiB<br />
Total PE 715401<br />
Alloc PE / Size 0 / 0<br />
Free PE / Size 715401 / 2.73 TiB<br />
VG UUID 2OAWpT-fO50-A7cW-jUjd-meQh-sWQa-7b55ZO<br />
<br />
Create the logical volume (2.725T was used here because requesting the full 2.73T failed; the usable space is slightly under the rounded VG size)<br />
# lvcreate VolGroupArray -L 2.725T -n backup<br />
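If guessing a size that fits is a nuisance, lvcreate can allocate every remaining extent with -l 100%FREE instead of -L. A sketch, echoed here since it needs the real volume group:<br />

```shell
# Allocate all free extents in the VG instead of guessing 2.725T vs 2.73T.
VG=VolGroupArray
CMD="lvcreate $VG -l 100%FREE -n backup"
echo "$CMD"   # run this as root once the VG exists
```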
<br />
Display volume<br />
# lvdisplay<br />
--- Logical volume ---<br />
LV Path /dev/VolGroupArray/backup<br />
LV Name backup<br />
VG Name VolGroupArray<br />
LV UUID Evy09z-i3TS-PgaE-SrpD-C2Lq-snb2-XhvaVW<br />
LV Write Access read/write<br />
LV Creation host, time falcon, 2012-06-25 14:33:00 -0700<br />
LV Status available<br />
# open 0<br />
LV Size 2.73 TiB<br />
Current LE 714343<br />
Segments 1<br />
Allocation inherit<br />
Read ahead sectors auto<br />
- currently set to 256<br />
Block device 253:0<br />
<br />
Check the status <br />
# cat /proc/mdstat<br />
Personalities : [linear]<br />
md0 : active linear sdd[2] sdc[1] sdb[0]<br />
2930286671 blocks super 1.2 0k rounding<br />
unused devices: <none><br />
<br />
''@TODO Need advice here - this portion needs testing; I got stuck here the last time I tried this.''<br />
<br />
== Recreating a ZFS storage pool ==<br />
<br />
From my understanding, you can add a drive to a vdev (virtual device, or array set) without losing data if it is a linear span, but you cannot remove a drive from the vdev without first copying the files off, destroying the pool, and then recreating it. This portion will walk you through completely tearing down the array, freeing up the drives, and recreating either an mdadm JBOD linear span or a RAIDZ. <br />
<br />
'''WARNING - Backup all your data before beginning'''<br />
<br />
=== List the pools ===<br />
<br />
Get a list of the current pools<br />
# zpool list<br />
<br />
Check the status <br />
# zpool status<br />
<br />
List the datasets<br />
# zfs list<br />
<br />
== Share mount to network using samba ==<br />
<br />
Configure samba to give user access to the share (good for network backups from any operating system, including windows backup) <br />
*Add the following entry to /etc/samba/smb.conf and restart the samba daemon<br />
[backup]<br />
comment = backup drive<br />
path = /pool/backup<br />
valid users = user1,user2<br />
read only = No<br />
create mask = 0765<br />
wide links = Yes<br />
<br />
# rc.d restart samba<br />
<br />
Modify the permissions on /pool/backup so they are accessible over the network; I decided to add 'users' group accessibility<br />
# chown root:users backup<br />
# chmod g+w backup<br />
<br />
Alternatively<br />
# chown root:root backup/<br />
# chmod 755 backup/<br />
# chown root:users backup/<dataset1><br />
# chmod 775 backup/<dataset1><br />
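The share stanza above can be staged in a scratch file and sanity-checked before touching the live config; after appending it to /etc/samba/smb.conf, testparm -s validates the whole file. A sketch:<br />

```shell
# Stage the [backup] share stanza in a temp file so nothing live is touched.
SMB_SNIPPET=$(mktemp)
cat > "$SMB_SNIPPET" <<'EOF'
[backup]
   comment = backup drive
   path = /pool/backup
   valid users = user1,user2
   read only = No
   create mask = 0765
   wide links = Yes
EOF
grep -q '^\[backup\]' "$SMB_SNIPPET" && echo "stanza staged"
# Then, as root: cat "$SMB_SNIPPET" >> /etc/samba/smb.conf && testparm -s
```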
<br />
== ZFS RAID Maintenance ==<br />
<br />
@TODO - this section is not finished<br />
<br />
==== Maintenance ====<br />
<br />
*To place a disk back online (see manual for this)<br />
# zpool online<br />
<br />
*To replace a disk (see manual for this)<br />
# zpool replace<br />
<br />
* If your pool goes down, i.e. one of your drives goes offline and there are not enough drives to complete replication, you can try the following<br />
- Check if the drive is initiated; you should see it in the list that's returned from running the following command<br />
# blkid<br />
If it's not in the list:<br />
- Check all cable connections, the drive may not be mounted. A reboot may be needed. <br />
- Once you get the drive to appear then run following commands.<br />
# zpool export <pool><br />
# zpool import <pool><br />
# zpool list<br />
If you get an error message <br />
"cannot import 'pool': one or more devices is currently unavailable. Destroy and re-create the pool from a backup source", try to export and then import again.<br />
@todo more information is needed before suggestions are made at this point.<br />
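The export/import retry above can be wrapped in a small helper. The pool name is an example and the zpool calls are echoed here, since they require root and a real pool:<br />

```shell
# Dry-run sketch of the recovery sequence above: export, re-import, list.
POOL=pool
DRY_RUN=1
zcmd() {
    if [ "${DRY_RUN:-0}" = "1" ]; then
        echo "would run: zpool $*"
    else
        zpool "$@"
    fi
}
zcmd export "$POOL"
zcmd import "$POOL"
zcmd list
```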
<br />
== Help Needed ==<br />
<br />
If anybody has any suggestions, please chime in. <br />
<br />
I haven't figured out how to address the size of the array yet. For example, when I hook up only the 2TB with the 500GB as a raidz, it lets me, and the size reports 1.36TB. When I destroyed the pool and rebuilt using all 3 drives (2TB, 500GB, 500GB) the size still reported 1.36TB<br />
<br />
--[[User:Wolfdogg|Wolfdogg]] ([[User talk:Wolfdogg|talk]]) 00:18, 25 June 2012 (UTC)<br />
<br />
= volnoti on KDE using alsamixer =<br />
<br />
@todo this script is for alsamixer and KDE; more needs to be added to support other environments<br />
<br />
<br />
Create a script /usr/local/bin/sound.sh. This script will be called by another script placed in autostart.<br />
<br />
Insert the following contents into this script<br />
#!/bin/bash<br />
<br />
#this script is made for volnoti<br />
<br />
# Configuration<br />
STEP="2" # Anything you like.<br />
UNIT="dB" # dB, %, etc.<br />
<br />
# Set volume<br />
SETVOL="/usr/bin/amixer -qc 0 set Master"<br />
SETHEADPHONE="/usr/bin/amixer -qc 0 set Headphone"<br />
<br />
case "$1" in<br />
"up")<br />
$SETVOL $STEP$UNIT+<br />
;;<br />
"down")<br />
$SETVOL $STEP$UNIT-<br />
;;<br />
"mute")<br />
$SETVOL toggle<br />
;;<br />
esac<br />
<br />
# Get current volume and state<br />
VOLUME=$(amixer get Master | grep 'Mono:' | cut -d ' ' -f 6 | sed -e 's/[^0-9]//g')<br />
STATE=$(amixer get Master | grep 'Mono:' | grep -o "\[off\]")<br />
<br />
# Show volume with volnoti<br />
if [[ -n $STATE ]]; then<br />
volnoti-show -m<br />
else<br />
volnoti-show $VOLUME<br />
# If headphones are in use, mute is handled a bit differently. Make sure Headphone follows Master mute.<br />
amixer -c 0 set Headphone unmute<br />
amixer -c 0 set Speaker unmute<br />
amixer -qc 0 set Speaker 100%<br />
fi<br />
<br />
exit 0<br />
<br />
<br />
Save this script, then set permissions<br />
<br />
# chown root:users /usr/local/bin/sound.sh<br />
# chmod 755 /usr/local/bin/sound.sh<br />
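Before binding keys, the amixer-parsing pipeline from sound.sh can be checked in isolation against a captured line of output. The sample line below is typical amixer output for a Mono Master control, but verify it against your own amixer get Master:<br />

```shell
# Exercise the VOLUME pipeline from sound.sh on a sample amixer line.
SAMPLE='  Mono: Playback 64 [100%] [0.00dB] [on]'
VOLUME=$(echo "$SAMPLE" | grep 'Mono:' | cut -d ' ' -f 6 | sed -e 's/[^0-9]//g')
echo "parsed volume: $VOLUME"   # prints: parsed volume: 100
```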
<br />
== xbindkeys ==<br />
<br />
Install xbindkeys so that you can control the volume with your keyboard<br />
pacman -S xbindkeys<br />
<br />
Logged in as user, create an xbindkeys config file ~/.xbindkeysrc with the following information. This example binds the volume controls to the F7 (mute), F8 (vol down), and F9 (vol up) keys<br />
<br />
# increase volume<br />
"sh /usr/local/bin/sound.sh up"<br />
m:0x0 + c:75<br />
F9<br />
<br />
# Decrease volume<br />
"sh /usr/local/bin/sound.sh down"<br />
m:0x0 + c:74<br />
F8<br />
<br />
# Toggle mute<br />
"sh /usr/local/bin/sound.sh mute"<br />
m:0x0 + c:73<br />
F7<br />
<br />
#"amixer set Master playback 1+"<br />
<br />
== autostart ==<br />
<br />
Logged in as user, create the KDE autostart script in ~/.kde4/Autostart. Name it anything, e.g. start-volnoti.sh<br />
#!/bin/bash<br />
xbindkeys<br />
volnoti<br />
<br />
Save this script. The next time KDE starts it will run this script, since it's located in the autostart folder. It will call xbindkeys and volnoti, which will be waiting for keypresses to control alsamixer. <br />
<br />
Enjoy.<br />
<br />
= smbclient media stream issues using dolphin when accessing windows shares =<br />
<br />
== description ==<br />
I was having access problems when trying to access files via smbclient (Samba as a client) on shares hosted by a Win7 file server, '''only''' when the user also exists on the Windows machine. It turns out the problem is possibly either a complicated mounting issue or a deep bug in Dolphin. <br />
<br />
Before I delve into this, I also wanted to mention that the Windows machine was set to not share files with "password protected sharing" http://www.sevenforums.com/tutorials/185429-password-protected-sharing-turn-off-windows-7-a.html. If you DO share with Windows password protected sharing, when you access the samba shares through Dolphin it will issue a popup asking you for a password. Even if you put in valid credentials you will still experience the issue, so don't spend too much time debugging by turning password protected sharing on and off, as it didn't seem to help either way.<br />
<br />
== the bug reproduced ==<br />
The way I was accessing the shares was through Dolphin, by clicking on the "Network" place, then by clicking into the "Samba" symlink. There I would see a list of workgroups, click into those to access my files, and so forth. When I found a directory I wanted in my "Places", I would right-click on it and add it to my places. <br />
<br />
When I accessed these media files through either the symlink I created in my Places or through the existing Network symlink, once I navigated to a video file (.avi in this case), no matter whether I chose VLC or mplayer, the system would need to cache the file in full before it would play. This meant that instead of the video starting right away, I would have to wait sometimes 10-15 minutes before the video would start, or I would get an error (VLC is unable to open the MRL 'smb://<server>/UserFiles/PublicArchive/movie.avi'), depending on how the Windows password protection was set, whether I was using VLC vs. mplayer, etc. Obviously something was wrong. I'm sure these symptoms ran much deeper than just video files; I remember it happening with audio files, and it will probably happen with even text files or any file notably large enough to take more than 3 seconds to access across the network. <br />
<br />
Important note: this only happens when I'm logged into KDE as a user that already exists on the Windows system whose publicly shared files I'm trying to access. If I log in to KDE as a user that doesn't exist there, the files do not have to cache, but instead immediately start to stream as expected. <br />
<br />
Something really buggy is going on at this point. So I thought maybe I would go into KDE system settings > sharing and set the default username and password, but this didn't help. I tried several things, including re-installing the network driver, re-installing KDE over the top of itself, userdel on the user, and cleaning out the home directory; nothing worked. <br />
<br />
So once again, as I have done in the past to try to solve this issue, I decided to manually mount the share and gain access differently than using the network icon in Dolphin. This time I was following the wiki as usual, and I got to the part about manually mounting shares, where I stumbled on one line that mentioned the /mnt directory I have seen so many times before. This time the cards lined up right, I guess, because I decided to click through "Root" (in Places) using Dolphin, then navigated my way through /mnt/smbnet to my files, where I discovered they play with no problem (no caching needed; streaming starts immediately).<br />
<br />
== the fix ==<br />
It appears that the "Network" symlink in Dolphin's 'Places' bar, at least the way it's currently set up in my version of KDE4, is there to make life miserable. Don't use it if you don't want to fully download a file before your system has access to it, and don't access your network shares this way if they are on a samba share. Instead, navigate through Dolphin's "Root" to /mnt/smbnet/<your-workgroup>/<your-server>/<your-fileshares>, right-click on one of those folders and "Add To Places"; then you will have a proper symlink in your "Places" to access through /mnt.<br />
<br />
= steps taken to repair Intel IbexPeak HDMI / IDT 92HD81B1X5 internal mic =<br />
<br />
[root@osprey wolfdogg]# rmmod snd-hda-intel -f && modprobe snd-hda-intel model=F1734<br />
[root@osprey wolfdogg]# rmmod snd-hda-intel -f && modprobe snd-hda-intel<br />
[root@osprey wolfdogg]# cat /proc/asound/card0/codec#2 | grep Codec<br />
cat: /proc/asound/card0/codec#2: No such file or directory<br />
[root@osprey wolfdogg]# cat /proc/asound/card0/codec#1 | grep Codec<br />
cat: /proc/asound/card0/codec#1: No such file or directory<br />
[root@osprey wolfdogg]# cat /proc/asound/card0/codec<br />
cat: /proc/asound/card0/codec: No such file or directory<br />
[root@osprey wolfdogg]# cd /proc/asound/card<br />
card0/ cards <br />
[root@osprey wolfdogg]# cd /proc/asound/card<br />
card0/ cards <br />
[root@osprey wolfdogg]# cd /proc/asound/card0/<br />
[root@osprey card0]# ll<br />
total 0<br />
-r--r--r-- 1 root root 0 Apr 6 00:49 codec#0<br />
-r--r--r-- 1 root root 0 Apr 6 00:49 codec#3<br />
-rw-r--r-- 1 root root 0 Apr 6 00:49 eld#3.0<br />
-r--r--r-- 1 root root 0 Apr 6 00:49 id<br />
dr-xr-xr-x 3 root root 0 Apr 6 00:49 pcm0c<br />
dr-xr-xr-x 3 root root 0 Apr 6 00:49 pcm0p<br />
dr-xr-xr-x 3 root root 0 Apr 6 00:49 pcm3p<br />
[root@osprey card0]# cat /proc/asound/card0/codec#0 | grep Codec<br />
Codec: IDT 92HD81B1X5<br />
[root@osprey card0]# cat /proc/asound/card0/codec#3 | grep Codec<br />
Codec: Intel IbexPeak HDMI<br />
[root@osprey card0]# cat /proc/asound/card0/eld#3.0 | grep Codec<br />
[root@osprey card0]# cat /proc/asound/card0/id | grep Codec<br />
[root@osprey card0]# cat /proc/asound/card0/pcm0c | grep Codec<br />
cat: /proc/asound/card0/pcm0c: Is a directory<br />
[root@osprey card0]# cat /proc/asound/card0/pcm0c | grep Codec<br />
cat: /proc/asound/card0/pcm0c: Is a directory<br />
[root@osprey card0]# pacman -S gstreamer0.10-plugins<br />
:: There are 5 members in group gstreamer0.10-plugins:<br />
:: Repository extra<br />
1) gstreamer0.10-bad-plugins 2) gstreamer0.10-base-plugins 3) gstreamer0.10-ffmpeg<br />
4) gstreamer0.10-good-plugins 5) gstreamer0.10-ugly-plugins <br />
<br />
Enter a selection (default=all): <br />
warning: gstreamer0.10-bad-plugins-0.10.23-3 is up to date -- reinstalling<br />
warning: gstreamer0.10-base-plugins-0.10.36-1 is up to date -- reinstalling<br />
warning: gstreamer0.10-ffmpeg-0.10.13-1 is up to date -- reinstalling<br />
resolving dependencies...<br />
looking for inter-conflicts...<br />
<br />
Targets (10): gstreamer0.10-ugly-0.10.19-5 libavc1394-0.5.4-1 libiec61883-1.2.0-3<br />
libsidplay-1.36.59-5 wavpack-4.60.1-2 gstreamer0.10-bad-plugins-0.10.23-3<br />
gstreamer0.10-base-plugins-0.10.36-1 gstreamer0.10-ffmpeg-0.10.13-1<br />
gstreamer0.10-good-plugins-0.10.31-1 gstreamer0.10-ugly-plugins-0.10.19-5<br />
<br />
Total Download Size: 1.00 MiB<br />
Total Installed Size: 13.08 MiB<br />
Net Upgrade Size: 3.63 MiB<br />
<br />
Proceed with installation? [Y/n] <br />
:: Retrieving packages from extra...<br />
gstreamer0.10-base-plugi... 165.3 KiB 963K/s 00:00 [#############################] 100%<br />
libavc1394-0.5.4-1-x86_64 32.0 KiB 759K/s 00:00 [#############################] 100%<br />
libiec61883-1.2.0-3-x86_64 37.3 KiB 829K/s 00:00 [#############################] 100%<br />
wavpack-4.60.1-2-x86_64 113.7 KiB 921K/s 00:00 [#############################] 100%<br />
gstreamer0.10-good-plugi... 327.3 KiB 1124K/s 00:00 [#############################] 100%<br />
gstreamer0.10-ugly-0.10.... 160.4 KiB 908K/s 00:00 [#############################] 100%<br />
libsidplay-1.36.59-5-x86_64 107.8 KiB 771K/s 00:00 [#############################] 100%<br />
gstreamer0.10-ugly-plugi... 84.4 KiB 727K/s 00:00 [#############################] 100%<br />
(10/10) checking package integrity [#############################] 100%<br />
(10/10) loading package files [#############################] 100%<br />
(10/10) checking for file conflicts [#############################] 100%<br />
(10/10) checking available disk space [#############################] 100%<br />
( 1/10) upgrading gstreamer0.10-bad-plugins [#############################] 100%<br />
( 2/10) upgrading gstreamer0.10-base-plugins [#############################] 100%<br />
( 3/10) upgrading gstreamer0.10-ffmpeg [#############################] 100%<br />
( 4/10) installing libavc1394 [#############################] 100%<br />
( 5/10) installing libiec61883 [#############################] 100%<br />
( 6/10) installing wavpack [#############################] 100%<br />
( 7/10) installing gstreamer0.10-good-plugins [#############################] 100%<br />
<br />
(gconftool-2:5520): GConf-WARNING **: Client failed to connect to the D-BUS daemon:<br />
Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken. <br />
( 8/10) installing gstreamer0.10-ugly [#############################] 100%<br />
( 9/10) installing libsidplay [#############################] 100%<br />
(10/10) installing gstreamer0.10-ugly-plugins [#############################] 100%<br />
[root@osprey card0]# rmmod snd-hda-intel -f && modprobe snd-hda-intel model=auto<br />
[root@osprey card0]# rmmod snd-hda-intel -f && modprobe snd-hda-intel model=auto<br />
[root@osprey card0]# rmmod snd-hda-intel -f && modprobe snd-hda-intel model=auto<br />
<br />
= Git Remote Development =<br />
<br />
==Configure remote==<br />
===Set up remote repos===<br />
<br />
*Set up the bare repo and public served repo (for websites)<br />
<br />
$ ssh user@remote<br />
# mkdir /home/user/git/site.com.git (or /var/www/git/site.com, etc., if not on shared hosting)<br />
# cd /home/user/git/site.com.git<br />
# git init --bare --shared (or not shared)<br />
<br />
*Choose only option a or option b in the following steps<br />
<br />
*Now choose one of the following, either a or b; don't do them both. <br />
**a) If you want to always edit files locally and not have to do a pull from served location, which pretty much automates the file uploads upon push. <br />
**b) If you want the ability to also edit files directly on the served location, then you might want to keep them in their own git repository; therefore you will be cloning the bare repo out to the site's served directory. Each time you do a push from your local, you will then need to do a pull from this remote cloned repo after each commit before you will see your changes live. <br />
<br />
===a) Post-receive hooks for automation===<br />
<br />
'''First read above, and only do step a) or b), but not both. You have to make a choice depending on how you plan to use your remote server''' <br />
<br />
*Make post-receive hook in bare repo which will ship files automatically into its served location<br />
<br />
# cat > hooks/post-receive<br />
#!/bin/sh<br />
GIT_WORK_TREE=/home/soldiert/public_html/project.com/ git checkout -f <br />
(press Ctrl+d to exit and save)<br />
<br />
# chmod +x hooks/post-receive (same path as what you put in hooks/post-receive)<br />
# mkdir /home/user/git/site.com<br />
# exit<br />
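The cat plus Ctrl+d dance above is easy to get wrong; a heredoc does the same thing reproducibly. This sketch writes the hook into a temporary directory so it can be tried anywhere; in the real bare repo the target is hooks/post-receive:<br />

```shell
# Write the post-receive hook via a heredoc (temp dir stands in for the repo).
HOOK_DIR=$(mktemp -d)
cat > "$HOOK_DIR/post-receive" <<'EOF'
#!/bin/sh
GIT_WORK_TREE=/home/soldiert/public_html/project.com/ git checkout -f
EOF
chmod +x "$HOOK_DIR/post-receive"
ls -l "$HOOK_DIR/post-receive"
```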
<br />
===b) Clone for more control===<br />
<br />
'''First read above, and only do step a) or b), but not both. You have to make a choice depending on how you plan to use your remote server'''<br />
<br />
*Clone remote bare into directory that it will be served out of to gain more control over editing from both locations. <br />
<br />
# cd /home/user/public_html<br />
# git clone /home/user/git/site.com.git <br />
# exit<br />
<br />
=== Start tracking on local===<br />
*Start tracking on local if you haven't already<br />
$ cd ~/projects/site.com<br />
$ git init<br />
<br />
===Now set up connection to remote origin on your local git repo===<br />
$ cd ~/projects/site.com<br />
$ git remote -v <br />
<br />
*Add your bare repo as the new origin. <br />
*Note, if your origin is already intact then skip the remove and add origin steps below<br />
$ git remote rm origin (if it's still attached to GitHub or someplace else that you don't want the files going)<br />
$ git remote add origin user@remote:/home/user/git/site.com.git<br />
<br />
*Push your files<br />
$ git push origin master (will push to the bare repo; the hook will then check it out to the served location) <br />
<br />
===Ready to develop on local=== <br />
$ cd ~/projects/site.com<br />
$ vim index.html<br />
$ git add -A<br />
$ git commit -am 'first commit message'<br />
$ git status<br />
$ git log<br />
<br />
===After commiting run the following===<br />
$ git push<br />
<br />
*You're done. <br />
<br />
*If you chose option b, then you now need to run the following commands<br />
$ ssh user@remote<br />
# cd /home/user/public_html/site.com<br />
# git pull<br />
*You're done.<br />
<br />
= PHP X-Debug=<br />
<br />
This topic covers installing X-Debug on a LAMP server.<br />
<br />
== Installation ==<br />
pacman -S xdebug<br />
== Configuration ==<br />
* add the following line to your php.ini at the bottom of the extensions list<br />
zend_extension="/lib64/php/modules/xdebug.so"<br />
* or use the non-64-bit one if needed<br />
zend_extension="/lib/php/modules/xdebug.so"<br />
* restart your server<br />
systemctl restart httpd<br />
* ensure xdebug is enabled in phpinfo<br />
php -i | grep xdebug | less<br />
<br />
== PHPStorm usage ==<br />
This step details configuring xdebug for use in development from a separate development machine, which has PhpStorm installed, against the LAMP server. <br />
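For remote debugging from PhpStorm, xdebug (version 2, current when this was written) needs a few directives beyond zend_extension. The values below are common defaults, not verified against any particular setup; the sketch stages them in a temp file rather than editing php.ini directly:<br />

```shell
# Stage xdebug 2 remote-debugging directives (append to php.ini when happy).
XDEBUG_INI=$(mktemp)
cat > "$XDEBUG_INI" <<'EOF'
xdebug.remote_enable = 1
xdebug.remote_host = 127.0.0.1
xdebug.remote_port = 9000
EOF
cat "$XDEBUG_INI"
```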
== References ==<br />
<br />
xdebug docs<br />
http://xdebug.org/docs/<br />
<br />
xdebug checker http://xdebug.org/find-binary.php<br />
<br />
troubleshooting info<br />
http://stackoverflow.com/questions/20752260/trouble-setting-up-and-debugging-php-storm-project-from-existing-files-in-mounte<br />
<br />
phpstorm zero configuration<br />
http://blog.jetbrains.com/phpstorm/2013/07/webinar-recording-debugging-php-with-phpstorm/<br />
<br />
phpstorm configurations<br />
https://www.jetbrains.com/phpstorm/webhelp/configuring-xdebug.html</div>Wolfdogghttps://wiki.archlinux.org/index.php?title=ZFS/Virtual_disks&diff=429564ZFS/Virtual disks2016-04-04T05:03:31Z<p>Wolfdogg: /* Mirror */</p>
<hr />
<div>[[Category:File systems]]<br />
{{Related articles start}}<br />
{{Related|ZFS}}<br />
{{Related|Installing Arch Linux on ZFS}}<br />
{{Related|ZFS on FUSE}}<br />
{{Related articles end}}<br />
<br />
This article covers some basic tasks and usage of ZFS. It differs from the main article [[ZFS]] somewhat in that the examples herein are demonstrated on a zpool built from virtual disks. So long as users do not place any critical data on the resulting zpool, they are free to experiment without fear of actual data loss.<br />
<br />
The examples in this article are shown with a set of virtual discs known in ZFS terms as VDEVs. Users may create their VDEVs either on an existing physical disk or in tmpfs (RAMdisk) depending on the amount of free memory on the system.<br />
<br />
{{Note|Using a file as a VDEV is a great method to play with ZFS but isn't a viable strategy for storing "real" data.}}<br />
<br />
== Install the ZFS Family of Packages ==<br />
Due to differences in licensing, ZFS binaries and kernel modules are easily distributed from source, but not so easily packaged as pre-compiled sets. The requisite packages are available in the AUR and in an unofficial repository. Details are provided in the [[ZFS#Installation]] article.<br />
<br />
== Creating and Destroying Zpools ==<br />
Management of ZFS is pretty simple, with only two utilities needed:<br />
* {{ic|/usr/bin/zpool}}<br />
* {{ic|/usr/bin/zfs}}<br />
<br />
=== Mirror ===<br />
For zpools of just two drives, it is recommended to use ZFS in ''mirror'' mode, which functions like RAID1, mirroring the data. Mirroring can also be used as an alternative to raidz setups with surprising results. See more on vdev mirroring here [http://jrs-s.net/2015/02/06/zfs-you-should-use-mirror-vdevs-not-raidz/]<br />
<br />
=== RAIDZ1 ===<br />
The minimum number of drives for a RAIDZ1 is three. It's best to follow the "power of two plus parity" recommendation. This is for storage space efficiency and hitting the "sweet spot" in performance. For RAIDZ-1, use three (2+1), five (4+1), or nine (8+1) disks. This example will use the simplest set of (2+1).<br />
<br />
Create three 2G files to serve as virtual hard drives:<br />
$ for i in {1..3}; do truncate -s 2G /scratch/$i.img; done<br />
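truncate creates sparse files, so the three 2G images cost almost no real disk space until data is written; /tmp/zfs-sandbox below is just an arbitrary scratch path standing in for /scratch:<br />

```shell
# Same loop as above against a scratch dir; the resulting files are sparse.
mkdir -p /tmp/zfs-sandbox
for i in 1 2 3; do truncate -s 2G "/tmp/zfs-sandbox/$i.img"; done
du -h --apparent-size /tmp/zfs-sandbox/1.img   # reports the 2G apparent size
du -h /tmp/zfs-sandbox/1.img                   # actual blocks used: ~0
```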
<br />
Assemble the RAIDZ1:<br />
# zpool create zpool raidz1 /scratch/1.img /scratch/2.img /scratch/3.img<br />
<br />
Notice that a 3.91G zpool has been created and mounted for us:<br />
{{hc|# zfs list|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool 139K 3.91G 38.6K /zpool<br />
</nowiki>}}<br />
<br />
The status of the device can be queried:<br />
{{hc|# zpool status zpool|<nowiki><br />
pool: zpool<br />
state: ONLINE<br />
scan: none requested<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
zpool ONLINE 0 0 0<br />
raidz1-0 ONLINE 0 0 0<br />
/scratch/1.img ONLINE 0 0 0<br />
/scratch/2.img ONLINE 0 0 0<br />
/scratch/3.img ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
</nowiki>}}<br />
<br />
To destroy a zpool:<br />
# zpool destroy zpool<br />
<br />
===RAIDZ2 and RAIDZ3===<br />
Higher-level RAIDZ pools can be assembled in a like fashion by adjusting the for statement to create the image files, by specifying "raidz2" or "raidz3" in the creation step, and by appending the additional image files to the creation step.<br />
<br />
Summarizing Toponce's guidance:<br />
* RAIDZ2 should use four (2+2), six (4+2), ten (8+2), or eighteen (16+2) disks.<br />
* RAIDZ3 should use five (2+3), seven (4+3), eleven (8+3), or nineteen (16+3) disks.<br />
<br />
=== Linear Span ===<br />
<br />
This setup is for a JBOD, normally good for 3 or fewer drives where space is still a concern and you're not ready to move to the full features of ZFS because of it. RAIDZ will be the better bet once you have enough drives to satisfy your space needs, since this setup does NOT take advantage of the full features of ZFS, but it is a safe beginning array that will suffice for years until you build up your hard drive collection. <br />
<br />
Assemble the Linear Span:<br />
# zpool create san /dev/sdd /dev/sde /dev/sdf<br />
<br />
{{hc|# zpool status san|<nowiki><br />
pool: san<br />
state: ONLINE<br />
scan: scrub repaired 0 in 4h22m with 0 errors on Fri Aug 28 23:52:55 2015<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
san ONLINE 0 0 0<br />
sde ONLINE 0 0 0<br />
sdd ONLINE 0 0 0<br />
sdf ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
</nowiki>}}<br />
<br />
== Creating and Destroying Datasets ==<br />
<br />
An example creating child datasets and using compression:<br />
* create the datasets<br />
# zfs create -p -o compression=on san/vault/falcon/snapshots<br />
# zfs create -o compression=on san/vault/falcon/version<br />
# zfs create -p -o compression=on san/vault/redtail/c/Users<br />
* now list the datasets (this was a linear span)<br />
<br />
$ zfs list<br />
<br />
Note, there is a huge advantage (file deletion) to making a 3-level dataset. If you have large amounts of data, separating it into datasets makes it much easier to destroy a dataset than to wait for recursive file removal to complete.<br />
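The {{ic|-p}} flag in the creation step above creates every missing parent dataset. A hypothetical helper (plain shell, not part of ZFS) shows which component datasets a 3-level path implies:<br />

```shell
# Hypothetical helper: list the component datasets implied by a nested
# dataset path (these are the datasets "zfs create -p" would create).
levels() {
    local ds=$1
    while [ "${ds%/*}" != "$ds" ]; do
        echo "$ds"
        ds=${ds%/*}
    done
    echo "$ds"
}
levels san/vault/falcon/snapshots
```

Each printed name is a dataset that can be destroyed (or snapshotted) independently with a single command.<br />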
<br />
== Displaying and Setting Properties ==<br />
If not specified in the creation step, users can set properties of their zpools at any time after creation using {{ic|/usr/bin/zfs}}.<br />
<br />
=== Show Properties ===<br />
To see the current properties of a given zpool:<br />
# zfs get all zpool<br />
<br />
=== Modify properties ===<br />
Disable the recording of access time in the zpool:<br />
# zfs set atime=off zpool<br />
<br />
Verify that the property has been set on the zpool:<br />
{{hc|# zfs get atime|<nowiki><br />
NAME PROPERTY VALUE SOURCE<br />
zpool atime off local<br />
</nowiki>}}<br />
<br />
{{Tip|This option, like many others, can also be set when creating the zpool by appending the following to the creation step: {{ic|1=-O atime=off}}}}<br />
<br />
== Add Content to the Zpool and Query Compression Performance==<br />
Fill the zpool with files. For this example, first enable compression. ZFS supports many compression algorithms, including lzjb, gzip, gzip-N, zle, and lz4. A setting of simply 'on' selects the default algorithm (lzjb), but lz4 is a nice alternative. See the zfs man page for more.<br />
<br />
# zfs set compression=lz4 zpool<br />
<br />
In this example, the linux source tarball is copied over and since lz4 compression has been enabled on the zpool, the corresponding compression ratio can be queried as well.<br />
<br />
$ wget https://www.kernel.org/pub/linux/kernel/v3.x/linux-3.11.tar.xz<br />
$ tar xJf linux-3.11.tar.xz -C /zpool <br />
<br />
To see the compression ratio achieved:<br />
{{hc|# zfs get compressratio|<nowiki><br />
NAME PROPERTY VALUE SOURCE<br />
zpool compressratio 2.32x -<br />
</nowiki>}}<br />
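The reported ratio is simply the logical (uncompressed) size divided by the physical (allocated) size. With made-up numbers chosen to match the output above:<br />

```shell
# Illustration of what compressratio reports: logical size over allocated
# size. The sizes below are invented for the example, not measured.
logical_kb=595000   # apparent size of the extracted tarball
physical_kb=256466  # blocks actually allocated on disk
awk -v l="$logical_kb" -v p="$physical_kb" 'BEGIN { printf "%.2fx\n", l/p }'
```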
<br />
== Simulate a Disk Failure and Rebuild the Zpool ==<br />
To simulate catastrophic disk failure (i.e. one of the HDDs in the zpool stops functioning), zero out one of the VDEVs.<br />
$ dd if=/dev/zero of=/scratch/2.img bs=4M count=1 2>/dev/null<br />
<br />
Since dd wrote a single 4M block (bs=4M, count=1) and truncated the file, the once-2G image file is now a mere 4M:<br />
{{hc|$ ls -lh /scratch |<nowiki><br />
total 317M<br />
-rw-r--r-- 1 facade users 2.0G Oct 20 09:13 1.img<br />
-rw-r--r-- 1 facade users 4.0M Oct 20 09:09 2.img<br />
-rw-r--r-- 1 facade users 2.0G Oct 20 09:13 3.img<br />
</nowiki>}}<br />
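The truncation can be reproduced on a throwaway file outside of any zpool, since {{ic|dd}} without {{ic|1=conv=notrunc}} truncates its output file to the bytes it wrote:<br />

```shell
# Reproduce the truncation on a throwaway file (path is an example):
# dd truncates the output file to exactly the 4 MiB it writes.
truncate -s 2G /tmp/victim.img
dd if=/dev/zero of=/tmp/victim.img bs=4M count=1 2>/dev/null
stat -c %s /tmp/victim.img
```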
<br />
The zpool remains online despite the corruption. Note that if a physical disk had failed, dmesg and related logs would be full of errors; to detect silent damage like this, users must execute a scrub operation.<br />
<br />
# zpool scrub zpool<br />
<br />
Depending on the size and speed of the underlying media as well as the amount of data in the zpool, the scrub may take hours to complete.<br />
The status of the scrub can be queried:<br />
{{hc|# zpool status zpool|<nowiki><br />
pool: zpool<br />
state: DEGRADED<br />
status: One or more devices could not be used because the label is missing or<br />
invalid. Sufficient replicas exist for the pool to continue<br />
functioning in a degraded state.<br />
action: Replace the device using 'zpool replace'.<br />
see: http://zfsonlinux.org/msg/ZFS-8000-4J<br />
scan: scrub repaired 0 in 0h0m with 0 errors on Sun Oct 20 09:13:39 2013<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
zpool DEGRADED 0 0 0<br />
raidz1-0 DEGRADED 0 0 0<br />
/scratch/1.img ONLINE 0 0 0<br />
/scratch/2.img UNAVAIL 0 0 0 corrupted data<br />
/scratch/3.img ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
</nowiki>}}<br />
<br />
Since we zeroed out one of our VDEVs, let's simulate adding a new 2G HDD by creating a new image file and adding it to the zpool:<br />
$ truncate -s 2G /scratch/new.img<br />
# zpool replace zpool /scratch/2.img /scratch/new.img<br />
<br />
Upon replacing the VDEV with a new one, zpool rebuilds the data from the data and parity info in the remaining two good VDEVs. Check the status of this process:<br />
{{hc|# zpool status zpool |<nowiki><br />
pool: zpool<br />
state: ONLINE<br />
scan: resilvered 117M in 0h0m with 0 errors on Sun Oct 20 09:21:22 2013<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
zpool ONLINE 0 0 0<br />
raidz1-0 ONLINE 0 0 0<br />
/scratch/1.img ONLINE 0 0 0<br />
/scratch/new.img ONLINE 0 0 0<br />
/scratch/3.img ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
</nowiki>}}<br />
<br />
== Snapshots and Recovering Deleted Files ==<br />
Since ZFS is a copy-on-write filesystem, changed blocks are written to new locations rather than overwritten in place; saving changes to a file effectively creates another copy of it (plus the changes made). Snapshots take advantage of this fact and give users access to older versions of files, provided a snapshot has been taken.<br />
<br />
{{Note|When using snapshots, many Linux programs that report on filesystem space such as '''df''' will report inaccurate results due to the unique way snapshots are used on ZFS. The output of {{ic|/usr/bin/zfs list}} will deliver an accurate report of the amount of available and free space on the zpool.}}<br />
<br />
To keep this simple, we will create a dataset within the zpool and snapshot it. Snapshots can be taken either of the entire zpool or of a dataset within the pool. They differ only in their naming conventions:<br />
<br />
{| class="wikitable" align="center"<br />
|-<br />
! Snapshot Target !! Snapshot Name<br />
|-<br />
| Entire zpool || zpool@snapshot-name<br />
|- <br />
| Dataset || zpool/dataset@snapshot-name<br />
|- <br />
|}<br />
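For scripted use (e.g. from cron), a timestamped snapshot name can be built from the dataset form above; the dataset name and date format here are only examples:<br />

```shell
# Sketch: build a dated snapshot name for a dataset. Dataset name and
# timestamp format are illustrative choices.
ds=zpool/docs
snap="$ds@$(date +%Y%m%d-%H%M)"
echo "$snap"
# With ZFS available, the snapshot would then be taken with:
#   zfs snapshot "$snap"
```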
<br />
Make a new dataset and take ownership of it.<br />
# zfs create zpool/docs<br />
# chown facade:users /zpool/docs<br />
<br />
{{Note|The lack of a leading / in the create command is intentional, not a typo!}}<br />
<br />
=== Time 0 ===<br />
Add some files to the new dataset (/zpool/docs):<br />
$ wget -O /zpool/docs/Moby_Dick.txt http://www.gutenberg.org/ebooks/2701.txt.utf-8<br />
$ wget -O /zpool/docs/War_and_Peace.txt http://www.gutenberg.org/ebooks/2600.txt.utf-8<br />
$ wget -O /zpool/docs/Beowulf.txt http://www.gutenberg.org/ebooks/16328.txt.utf-8<br />
<br />
{{hc|# zfs list|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool 5.06M 3.91G 40.0K /zpool<br />
zpool/docs 4.92M 3.91G 4.92M /zpool/docs<br />
</nowiki>}}<br />
<br />
This shows that our books use 4.92M of data in /zpool/docs.<br />
<br />
=== Time +1 ===<br />
Now take a snapshot of the dataset:<br />
# zfs snapshot zpool/docs@001<br />
<br />
Again run the list command:<br />
{{hc|# zfs list|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool 5.07M 3.91G 40.0K /zpool<br />
zpool/docs 4.92M 3.91G 4.92M /zpool/docs<br />
</nowiki>}}<br />
<br />
Note that the size in the USED column did not change, showing that the snapshot takes up no space in the zpool, since nothing has changed in these three files.<br />
<br />
We can list the snapshots like so, and again confirm that the snapshot takes up no space, but instead '''refers to''' the original files, which take up 4.92M (their original size):<br />
{{hc|# zfs list -t snapshot|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool/docs@001 0 - 4.92M -<br />
</nowiki>}}<br />
<br />
=== Time +2 ===<br />
Now let's add some additional content and create a new snapshot:<br />
$ wget -O /zpool/docs/Les_Mis.txt http://www.gutenberg.org/ebooks/135.txt.utf-8<br />
# zfs snapshot zpool/docs@002<br />
<br />
Generate the new list to see how the space has changed:<br />
{{hc|# zfs list -t snapshot|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool/docs@001 25.3K - 4.92M -<br />
zpool/docs@002 0 - 8.17M -<br />
</nowiki>}}<br />
<br />
Here we can see that the 001 snapshot takes up 25.3K of metadata and still points to the original 4.92M of data, while the new snapshot takes up no space and refers to a total of 8.17M.<br />
<br />
=== Time +3 ===<br />
Now let's simulate an accidental overwrite of a file and subsequent data loss:<br />
$ echo "this book sucks" > /zpool/docs/War_and_Peace.txt<br />
<br />
Again, take another snapshot:<br />
# zfs snapshot zpool/docs@003<br />
<br />
Now list the snapshots and notice that the amount of data referred to decreased by about 3.1M:<br />
{{hc|# zfs list -t snapshot|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool/docs@001 25.3K - 4.92M -<br />
zpool/docs@002 25.5K - 8.17M -<br />
zpool/docs@003 0 - 5.04M -<br />
</nowiki>}}<br />
<br />
We can easily recover from this situation by looking inside one or both of our older snapshots for a good copy of the file. ZFS stores its snapshots in a hidden directory under the dataset: {{ic|/zpool/docs/.zfs/snapshot}}:<br />
{{hc|$ ls -l /zpool/docs/.zfs/snapshot|<nowiki><br />
total 0<br />
dr-xr-xr-x 1 root root 0 Oct 20 16:09 001<br />
dr-xr-xr-x 1 root root 0 Oct 20 16:09 002<br />
dr-xr-xr-x 1 root root 0 Oct 20 16:09 003<br />
</nowiki>}}<br />
<br />
We can copy a good version of the book back out from any of our snapshots to any location on or off the zpool:<br />
 $ cp /zpool/docs/.zfs/snapshot/002/War_and_Peace.txt /zpool/docs<br />
{{Note|Using <TAB> for autocompletion will not work by default but can be changed by modifying the ''snapdir'' property on the pool or dataset.}}<br />
<br />
# zfs set snapdir=visible zpool/docs<br />
<br />
Now enter a snapshot dir or two:<br />
$ cd /zpool/docs/.zfs/snapshot/001<br />
$ cd /zpool/docs/.zfs/snapshot/002<br />
<br />
Repeat the df command:<br />
$ df -h | grep zpool<br />
zpool 4.0G 0 4.0G 0% /zpool<br />
zpool/docs 4.0G 5.0M 4.0G 1% /zpool/docs<br />
zpool/docs@001 4.0G 4.9M 4.0G 1% /zpool/docs/.zfs/snapshot/001<br />
zpool/docs@002 4.0G 8.2M 4.0G 1% /zpool/docs/.zfs/snapshot/002<br />
<br />
{{Note|The snapshot directories under .zfs that the user has entered appear in the df output; this is reset if the zpool is exported and reimported or if the server is rebooted.}}<br />
<br />
For example:<br />
# zpool export zpool<br />
# zpool import -d /scratch/ zpool<br />
$ df -h | grep zpool<br />
zpool 4.0G 0 4.0G 0% /zpool<br />
zpool/docs 4.0G 5.0M 4.0G 1% /zpool/docs<br />
<br />
=== Time +4 ===<br />
Now that everything is back to normal, we can create another snapshot of this state:<br />
# zfs snapshot zpool/docs@004<br />
<br />
And the list now becomes:<br />
{{hc|# zfs list -t snapshot|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool/docs@001 25.3K - 4.92M -<br />
zpool/docs@002 25.5K - 8.17M -<br />
zpool/docs@003 155K - 5.04M -<br />
zpool/docs@004 0 - 8.17M -<br />
</nowiki>}}<br />
<br />
=== Listing Snapshots ===<br />
<br />
Note, this simple but important command is frequently missing from other articles on the subject, so it is worth mentioning.<br />
<br />
To list any snapshots on your system, run the following command<br />
{{bc|$ zfs list -t snapshot}}<br />
<br />
=== Deleting Snapshots ===<br />
The limit on the number of snapshots users can save is 2^64. Users can delete a snapshot like so:<br />
# zfs destroy zpool/docs@001<br />
<br />
{{hc|# zfs list -t snapshot|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool/docs@002 3.28M - 8.17M -<br />
zpool/docs@003 155K - 5.04M -<br />
zpool/docs@004 0 - 8.17M -<br />
</nowiki>}}<br />
<br />
== Troubleshooting ==<br />
<br />
If your system is not configured to load the zfs pool upon boot, if you want to manually remove and re-add the pool for whatever reason, or if you have lost your pool completely, a convenient way to recover is to use import/export. <br />
<br />
If your pool was named {{ic|pool}}:<br />
# zpool import pool<br />
<br />
If you have any problems accessing your pool at any time, try export and reimport. <br />
<br />
# zpool export pool<br />
# zpool import pool</div>Wolfdogghttps://wiki.archlinux.org/index.php?title=Apache_HTTP_Server&diff=416561Apache HTTP Server2016-01-22T02:30:35Z<p>Wolfdogg: Undo revision 416558 by Wolfdogg (talk)</p>
<hr />
<div>[[Category:Web server]]<br />
[[cs:Apache HTTP Server]]<br />
[[de:LAMP Installation]]<br />
[[el:Apache HTTP Server]]<br />
[[es:Apache HTTP Server]]<br />
[[fr:Lamp]]<br />
[[it:Apache HTTP Server]]<br />
[[ja:LAMP]]<br />
[[pl:Apache HTTP Server]]<br />
[[ru:Apache HTTP Server]]<br />
[[sr:Apache HTTP Server]]<br />
[[tr:LAMP]]<br />
[[zh-cn:Apache HTTP Server]]<br />
{{Related articles start}}<br />
{{Related|PHP}}<br />
{{Related|MySQL}}<br />
{{Related|PhpMyAdmin}}<br />
{{Related|Adminer}}<br />
{{Related|Xampp}}<br />
{{Related|mod_perl}}<br />
{{Related articles end}}<br />
The [[Wikipedia:Apache HTTP Server|Apache HTTP Server]], or Apache for short, is a very popular web server, developed by the Apache Software Foundation.<br />
<br />
Apache is often used together with a scripting language such as PHP and database such as MySQL. This combination is often referred to as a [[Wikipedia:LAMP (software bundle)|LAMP]] stack ('''L'''inux, '''A'''pache, '''M'''ySQL, '''P'''HP). This article describes how to set up Apache and how to optionally integrate it with [[PHP]] and [[MySQL]].<br />
<br />
== Installation ==<br />
[[Install]] the {{Pkg|apache}} package.<br />
<br />
== Configuration ==<br />
Apache configuration files are located in {{ic|/etc/httpd/conf}}. The main configuration file is {{ic|/etc/httpd/conf/httpd.conf}}, which includes various other configuration files.<br />
The default configuration file should be fine for a simple setup. By default, it will serve the directory {{ic|/srv/http}} to anyone who visits your website.<br />
<br />
To start Apache, start {{ic|httpd.service}} [[systemd#Using units|using systemd]].<br />
<br />
Apache should now be running. Test by visiting http://localhost/ in a web browser. It should display a simple index page.<br />
<br />
For optional further configuration, see the following sections.<br />
<br />
=== Advanced options ===<br />
These options in {{ic|/etc/httpd/conf/httpd.conf}} might be interesting for you:<br />
<br />
User http<br />
:For security reasons, as soon as Apache is started by the root user (directly or via startup scripts) it switches to this UID. The default user is ''http'', which is created automatically during installation.<br />
<br />
Listen 80<br />
:This is the port Apache will listen on. For Internet access from behind a router, you have to forward the port.<br />
<br />
:If you want to set up Apache for local development, you may want it to be accessible only from your own computer. In that case, change this line to {{ic|Listen 127.0.0.1:80}}.<br />
<br />
ServerAdmin you@example.com<br />
:This is the admin's email address which can be found on e.g. error pages.<br />
<br />
DocumentRoot "/srv/http"<br />
:This is the directory where you should put your web pages.<br />
<br />
:Change it, if you want to, but do not forget to also change {{ic|<Directory "/srv/http">}} to whatever you changed your {{ic|DocumentRoot}} to, or you will likely get a '''403 Error''' (lack of privileges) when you try to access the new document root. Do not forget to change the {{ic|Require all denied}} line to {{ic|Require all granted}}, otherwise you will get a '''403 Error'''. Remember that the DocumentRoot directory and its parent folders must allow execution permission to others (can be set with {{ic|chmod o+x /path/to/DocumentRoot}}), otherwise you will get a '''403 Error'''.<br />
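Since every directory from the filesystem root down to the new document root needs the execute bit for others, a hypothetical helper can print the required {{ic|chmod}} for each component (the path is an example):<br />

```shell
# Hypothetical helper: print the chmod needed on each path component so
# the http user can traverse down to a custom DocumentRoot.
traverse_fix() {
    local p=$1
    while [ "$p" != "/" ] && [ -n "$p" ]; do
        echo "chmod o+x $p"
        p=$(dirname "$p")
    done
}
traverse_fix /home/user/http
```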
<br />
AllowOverride None<br />
:This directive in {{ic|<Directory>}} sections causes Apache to completely ignore {{ic|.htaccess}} files. Note that this is now the default for Apache 2.4, so you need to explicitly allow overrides if you plan to use {{ic|.htaccess}} files. If you intend to use {{ic|mod_rewrite}} or other settings in {{ic|.htaccess}} files, you can allow which directives declared in that file can override server configuration. For more info refer to the [http://httpd.apache.org/docs/current/mod/core.html#allowoverride Apache documentation].<br />
<br />
{{Tip|If you have issues with your configuration you can have Apache check the configuration with: {{ic|apachectl configtest}}}}<br />
<br />
More settings can be found in {{ic|/etc/httpd/conf/extra/httpd-default.conf}}:<br />
<br />
To turn off your server's signature:<br />
ServerSignature Off<br />
<br />
To hide server information like Apache and PHP versions:<br />
ServerTokens Prod<br />
<br />
=== User directories ===<br />
<br />
User directories are available by default through http://localhost/~yourusername/ and show the contents of {{ic|~/public_html}} (this can be changed in {{ic|/etc/httpd/conf/extra/httpd-userdir.conf}}).<br />
<br />
If you do not want user directories to be available on the web, comment out the following line in {{ic|/etc/httpd/conf/httpd.conf}}:<br />
<br />
Include conf/extra/httpd-userdir.conf<br />
<br />
{{Accuracy|It is not necessary to set {{ic|+x}} for every users, setting it only for the webserver via ACLs suffices (see [[Access Control Lists#Granting execution permissions for private files to a Web Server]]).}}<br />
<br />
You must make sure that your home directory permissions are set properly so that Apache can get there. Your home directory and {{ic|~/public_html}} must be executable for others ("rest of the world"):<br />
<br />
$ chmod o+x ~<br />
$ chmod o+x ~/public_html<br />
$ chmod -R o+r ~/public_html<br />
<br />
Restart {{ic|httpd.service}} to apply any changes. See also [[Umask#Set the mask value]].<br />
<br />
=== TLS/SSL ===<br />
{{pkg|openssl}} provides TLS/SSL support and is installed by default on Arch installations.<br />
<br />
In {{ic|/etc/httpd/conf/httpd.conf}}, uncomment the following three lines:<br />
LoadModule ssl_module modules/mod_ssl.so<br />
LoadModule socache_shmcb_module modules/mod_socache_shmcb.so<br />
Include conf/extra/httpd-ssl.conf<br />
<br />
Create a private key and self-signed certificate. This is adequate for most installations that do not require a [[wikipedia:Certificate_signing_request|CSR]]:<br />
<br />
# cd /etc/httpd/conf<br />
# openssl req -new -x509 -nodes -newkey rsa:4096 -keyout server.key -out server.crt -days 1095<br />
# chmod 400 server.key<br />
# chmod 444 server.crt<br />
<br />
{{Note|The -days switch is optional and RSA keysize can be as low as 2048 (default).}}<br />
<br />
Then make sure the {{ic|SSLCertificateFile}} and {{ic|SSLCertificateKeyFile}} lines in {{ic|/etc/httpd/conf/extra/httpd-ssl.conf}} point to the key and certificate you have just created.<br />
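As a sanity check that the certificate and key referenced there actually belong together, their RSA moduli can be compared; the sketch below generates a throwaway pair in {{ic|/tmp}} purely for demonstration:<br />

```shell
# Sanity check (sketch): a key and certificate match when their RSA
# modulus digests are identical. Generates a disposable pair in /tmp.
cd /tmp
openssl req -new -x509 -nodes -newkey rsa:2048 -subj "/CN=localhost" \
    -keyout demo.key -out demo.crt -days 1 2>/dev/null
k=$(openssl rsa -noout -modulus -in demo.key | sha256sum)
c=$(openssl x509 -noout -modulus -in demo.crt | sha256sum)
[ "$k" = "$c" ] && echo "key and certificate match"
```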
<br />
If you need to create a [[wikipedia:Certificate signing request|CSR]], follow these keygen instructions instead of the above:<br />
<br />
# openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:4096 -out server.key<br />
# chmod 400 server.key<br />
# openssl req -new -sha256 -key server.key -out server.csr<br />
# openssl x509 -req -days 1095 -in server.csr -signkey server.key -out server.crt<br />
<br />
{{Note | For more openssl options, read the [https://www.openssl.org/docs/apps/openssl.html man page] or peruse openssl's [https://www.openssl.org/docs/ extensive documentation].}}<br />
<br />
{{Warning|If you plan on implementing SSL/TLS, know that some variations and implementations are [https://weakdh.org/#affected still] [[wikipedia:Transport_Layer_Security#Attacks_against_TLS.2FSSL|vulnerable to attack]]. For details on these current vulnerabilities within SSL/TLS and how to apply appropriate changes to the web server, visit http://disablessl3.com/ and https://weakdh.org/sysadmin.html}}<br />
<br />
{{Tip|Mozilla has a useful [https://wiki.mozilla.org/Security/Server_Side_TLS SSL/TLS article] which includes [https://wiki.mozilla.org/Security/Server_Side_TLS#Apache Apache specific] configuration guidelines as well as an [https://mozilla.github.io/server-side-tls/ssl-config-generator/ automated tool] to help create a more secure configuration.}}<br />
<br />
Restart {{ic|httpd.service}} to apply any changes.<br />
<br />
=== Virtual hosts ===<br />
<br />
{{Note|You will need to add a separate <VirtualHost domainname:443> section for virtual host SSL support.<br />
See [[#Managing many virtual hosts]] for an example file.}}<br />
<br />
If you want to have more than one host, uncomment the following line in {{ic|/etc/httpd/conf/httpd.conf}}:<br />
Include conf/extra/httpd-vhosts.conf<br />
<br />
In {{ic|/etc/httpd/conf/extra/httpd-vhosts.conf}} set your virtual hosts. The default file contains an elaborate example that should help you get started.<br />
<br />
To test the virtual hosts on your local machine, add the virtual names to your {{ic|/etc/hosts}} file:<br />
127.0.0.1 domainname1.dom <br />
127.0.0.1 domainname2.dom<br />
<br />
Restart {{ic|httpd.service}} to apply any changes.<br />
<br />
==== Managing many virtual hosts ====<br />
<br />
If you have a large number of virtual hosts, you may want to easily disable and enable them. It is recommended to create one configuration file per virtual host and store them all in one folder, e.g. {{ic|/etc/httpd/conf/vhosts}}.<br />
<br />
First create the folder:<br />
# mkdir /etc/httpd/conf/vhosts<br />
<br />
Then place the single configuration files in it:<br />
# nano /etc/httpd/conf/vhosts/domainname1.dom<br />
# nano /etc/httpd/conf/vhosts/domainname2.dom<br />
...<br />
<br />
In the last step, {{ic|Include}} the single configurations in your {{ic|/etc/httpd/conf/httpd.conf}}:<br />
#Enabled Vhosts:<br />
Include conf/vhosts/domainname1.dom<br />
Include conf/vhosts/domainname2.dom<br />
<br />
You can enable and disable single virtual hosts by commenting or uncommenting them.<br />
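The comment toggle can be scripted; the sketch below runs {{ic|sed}} against a throwaway copy of the configuration (the file location and domain names are the examples above, not real paths on your system):<br />

```shell
# Sketch: disable a vhost by commenting out its Include line with sed.
# A disposable copy of the config is used for demonstration.
cat > /tmp/httpd.conf <<'EOF'
Include conf/vhosts/domainname1.dom
Include conf/vhosts/domainname2.dom
EOF
sed -i 's|^Include conf/vhosts/domainname2.dom$|#&|' /tmp/httpd.conf
grep domainname2 /tmp/httpd.conf
```

Re-running the sed command with the {{ic|#}} and the pattern swapped would re-enable the host.<br />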
<br />
A very basic vhost file will look like this:<br />
<br />
{{hc|/etc/httpd/conf/vhosts/domainname1.dom|<nowiki><br />
<VirtualHost domainname1.dom:80><br />
ServerAdmin webmaster@domainname1.dom<br />
DocumentRoot "/home/user/http/domainname1.dom"<br />
ServerName domainname1.dom<br />
ServerAlias domainname1.dom<br />
ErrorLog "/var/log/httpd/domainname1.dom-error_log"<br />
CustomLog "/var/log/httpd/domainname1.dom-access_log" common<br />
<br />
<Directory "/home/user/http/domainname1.dom"><br />
Require all granted<br />
</Directory><br />
</VirtualHost><br />
<br />
<VirtualHost domainname1.dom:443><br />
ServerAdmin webmaster@domainname1.dom<br />
DocumentRoot "/home/user/http/domainname1.dom"<br />
ServerName domainname1.dom:443<br />
ServerAlias domainname1.dom:443<br />
ErrorLog "/var/log/httpd/domainname1.dom-error_log"<br />
CustomLog "/var/log/httpd/domainname1.dom-access_log" common<br />
<br />
<Directory "/home/user/http/domainname1.dom"><br />
Require all granted<br />
</Directory><br />
<br />
SSLEngine on<br />
SSLCertificateFile "/etc/httpd/conf/apache.crt"<br />
SSLCertificateKeyFile "/etc/httpd/conf/apache.key"<br />
</VirtualHost></nowiki>}}<br />
<br />
== Extensions ==<br />
<br />
=== PHP ===<br />
To install [[PHP]], first [[install]] the {{Pkg|php}} and {{Pkg|php-apache}} packages.<br />
<br />
In {{ic|/etc/httpd/conf/httpd.conf}}, comment the line:<br />
LoadModule mpm_event_module modules/mod_mpm_event.so<br />
and uncomment the line:<br />
LoadModule mpm_prefork_module modules/mod_mpm_prefork.so<br />
<br />
{{Note|1=The above is required, because {{ic|libphp7.so}} included with {{pkg|php-apache}} does not work with {{ic|mod_mpm_event}}; it only works with {{ic|mod_mpm_prefork}}. ({{bug|39218}})<br />
<br />
Otherwise you will get the following error:<br />
{{bc|1=Apache is running a threaded MPM, but your PHP Module is not compiled to be threadsafe. You need to recompile PHP.<br />
AH00013: Pre-configuration failed<br />
httpd.service: control process exited, code=exited status=1}}<br />
<br />
As an alternative, you can use {{ic|mod_proxy_fcgi}} (see [[#Using php-fpm and mod_proxy_fcgi]] below).<br />
}}<br />
<br />
To enable PHP, add these lines to {{ic|/etc/httpd/conf/httpd.conf}}:<br />
*Place this in the {{ic|LoadModule}} list anywhere after {{ic|LoadModule dir_module modules/mod_dir.so}}:<br />
LoadModule php7_module modules/libphp7.so<br />
*Place this at the end of the {{ic|Include}} list:<br />
Include conf/extra/php7_module.conf<br />
<br />
Restart {{ic|httpd.service}} [[systemd#Using units|using systemd]]<br />
<br />
To test whether PHP was correctly configured: create a file called {{ic|test.php}} in your Apache {{ic|DocumentRoot}} directory (e.g. {{ic|/srv/http/}} or {{ic|~/public_html}}) with the following contents:<br />
<?php phpinfo(); ?><br />
To see if it works go to: http://localhost/test.php or http://localhost/~myname/test.php<br />
<br />
For advanced configuration and extensions, please read [[PHP]].<br />
<br />
==== Using php-fpm and mod_proxy_fcgi ====<br />
<br />
{{Note|Unlike the widespread setup with ProxyPass, the proxy configuration with SetHandler respects other Apache directives like DirectoryIndex. This ensures a better compatibility with software designed for libphp7, mod_fastcgi and mod_fcgid.<br />
If you still want to try ProxyPass, experiment with a line like this: {{bc|ProxyPassMatch ^/(.*\.php(/.*)?)$ unix:/run/php-fpm/php-fpm.sock&#124;fcgi://localhost/srv/http/$1}}}}<br />
<br />
[[Install]] the {{pkg|php-fpm}} package.<br />
<br />
Create {{ic|/etc/httpd/conf/extra/php-fpm.conf}} with the following content:<br />
{{hc|/etc/httpd/conf/extra/php-fpm.conf|<nowiki><br />
<FilesMatch \.php$><br />
SetHandler "proxy:unix:/run/php-fpm/php-fpm.sock|fcgi://localhost/"<br />
</FilesMatch><br />
<Proxy "fcgi://localhost/" enablereuse=on max=10><br />
</Proxy><br />
<IfModule dir_module><br />
DirectoryIndex index.php index.html<br />
</IfModule><br />
</nowiki>}}<br />
<br />
And include it at the bottom of {{ic|/etc/httpd/conf/httpd.conf}}:<br />
Include conf/extra/php-fpm.conf<br />
<br />
{{Note|The pipe between {{ic|sock}} and {{ic|fcgi}} is not allowed to be surrounded by a space! {{ic|localhost}} can be replaced by any string but it should match in {{ic|SetHandler}} and {{ic|Proxy}} directives. More [https://httpd.apache.org/docs/2.4/mod/mod_proxy_fcgi.html here]. {{ic|SetHandler}} and {{ic|Proxy}} can be used per vhost configs but the name after {{ic|fcgi://}} should differ for each vhost setup.}}<br />
<br />
You can configure PHP-FPM in {{ic|/etc/php/php-fpm.d/www.conf}}, but the default setup should work fine.<br />
<br />
{{Note|<br />
If you have added the following lines to {{ic|httpd.conf}}, remove them, as they are no longer needed:<br />
LoadModule php7_module modules/libphp7.so<br />
Include conf/extra/php7_module.conf<br />
}}<br />
<br />
[[Restart]] {{ic|httpd.service}} and {{ic|php-fpm.service}}.<br />
<br />
==== Using apache2-mpm-worker and mod_fcgid ====<br />
[[Install]] the {{pkg|mod_fcgid}} and {{Pkg|php-cgi}} packages.<br />
<br />
Create the needed directory and symlink it for the PHP wrapper:<br />
# mkdir /srv/http/fcgid-bin<br />
# ln -s /usr/bin/php-cgi /srv/http/fcgid-bin/php-fcgid-wrapper<br />
<br />
Uncomment following in {{ic|/etc/conf.d/apache}}:<br />
HTTPD=/usr/bin/httpd.worker<br />
<br />
Create {{ic|/etc/httpd/conf/extra/php-fcgid.conf}} with the following content:<br />
{{hc|/etc/httpd/conf/extra/php-fcgid.conf|<nowiki><br />
# Required modules: fcgid_module<br />
<br />
<IfModule fcgid_module><br />
AddHandler php-fcgid .php<br />
AddType application/x-httpd-php .php<br />
Action php-fcgid /fcgid-bin/php-fcgid-wrapper<br />
ScriptAlias /fcgid-bin/ /srv/http/fcgid-bin/<br />
SocketPath /var/run/httpd/fcgidsock<br />
SharememPath /var/run/httpd/fcgid_shm<br />
# If you don't allow bigger requests many applications may fail (such as WordPress login)<br />
FcgidMaxRequestLen 536870912<br />
# Path to php.ini – defaults to /etc/phpX/cgi<br />
DefaultInitEnv PHPRC=/etc/php/<br />
# Number of PHP children that will be launched. Leave undefined to let PHP decide.<br />
#DefaultInitEnv PHP_FCGI_CHILDREN 3<br />
# Maximum requests before a process is stopped and a new one is launched<br />
#DefaultInitEnv PHP_FCGI_MAX_REQUESTS 5000<br />
<Location /fcgid-bin/><br />
SetHandler fcgid-script<br />
Options +ExecCGI<br />
</Location><br />
</IfModule><br />
</nowiki>}}<br />
<br />
Edit {{ic|/etc/httpd/conf/httpd.conf}}, and add the following lines:<br />
LoadModule fcgid_module modules/mod_fcgid.so<br />
Include conf/extra/httpd-mpm.conf<br />
Include conf/extra/php-fcgid.conf<br />
<br />
{{Note|<br />
If you have added the following lines to {{ic|httpd.conf}}, remove them, as they are no longer needed:<br />
LoadModule php7_module modules/libphp7.so<br />
Include conf/extra/php7_module.conf<br />
}}<br />
<br />
[[Restart]] {{ic|httpd.service}}.<br />
<br />
==== MySQL/MariaDB ====<br />
<br />
Follow the instructions in [[PHP#MySQL/MariaDB]].<br />
<br />
When configuration is complete, [[restart]] {{ic|httpd.service}} to apply all the changes.<br />
<br />
=== HTTP2 ===<br />
<br />
To enable HTTP/2 support, uncomment the following line in {{ic|httpd.conf}}:<br />
LoadModule http2_module modules/mod_http2.so<br />
<br />
And add the following line:<br />
Protocols h2 http/1.1<br />
<br />
For more information, see the [https://httpd.apache.org/docs/2.4/mod/mod_http2.html mod_http2] documentation.<br />
<br />
== Troubleshooting ==<br />
<br />
=== Apache Status and Logs ===<br />
<br />
See the status of the Apache daemon with [[systemctl]].<br />
<br />
Apache logs can be found in {{ic|/var/log/httpd/}}<br />
<br />
=== Error: PID file /run/httpd/httpd.pid not readable (yet?) after start ===<br />
<br />
Comment out the unique_id_module: {{ic|#LoadModule unique_id_module modules/mod_unique_id.so}}<br />
<br />
=== Upgrading Apache to 2.4 from 2.2 ===<br />
<br />
If you use {{Pkg|php-apache}}, follow the instructions at [[#PHP]] above.<br />
<br />
Access Control has changed. Convert all {{ic|Order}}, {{ic|Allow}}, {{ic|Deny}} and {{ic|Satisfy}} directives to the new {{ic|Require}} syntax. [http://httpd.apache.org/docs/2.4/mod/mod_access_compat.html mod_access_compat] allows you to use the deprecated format during a transition phase.<br />
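As a generic illustration of the conversion (not taken from your configuration), a common 2.2 access rule and its 2.4 equivalent look like this:<br />

```apache
# Apache 2.2 style (deprecated):
#   Order allow,deny
#   Allow from all
# Equivalent Apache 2.4 style:
Require all granted
```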
<br />
More information: [http://httpd.apache.org/docs/2.4/upgrading.html Upgrading to 2.4 from 2.2]<br />
<br />
=== Apache is running a threaded MPM, but your PHP Module is not compiled to be threadsafe. ===<br />
<br />
If when loading {{ic|php7_module}} the {{ic|httpd.service}} fails, and you get an error like this in the journal:<br />
<br />
Apache is running a threaded MPM, but your PHP Module is not compiled to be threadsafe. You need to recompile PHP.<br />
<br />
you need to replace {{ic|mpm_event_module}} with {{ic|mpm_prefork_module}}:<br />
<br />
{{hc|/etc/httpd/conf/httpd.conf|<br />
<s>LoadModule mpm_event_module modules/mod_mpm_event.so</s><br />
LoadModule mpm_prefork_module modules/mod_mpm_prefork.so<br />
}}<br />
<br />
and restart {{ic|httpd.service}}.<br />
<br />
=== AH00534: httpd: Configuration error: No MPM loaded. ===<br />
<br />
You might encounter this error after a recent upgrade. This is only the result of a recent change in {{ic|httpd.conf}} that you might not have reproduced in your local configuration.<br />
To fix it, uncomment the following line.<br />
<br />
{{hc|/etc/httpd/conf/httpd.conf|<br />
LoadModule mpm_prefork_module modules/mod_mpm_prefork.so<br />
}}<br />
<br />
Also check [[#Apache_is_running_a_threaded_MPM.2C_but_your_PHP_Module_is_not_compiled_to_be_threadsafe.|the above]] if more errors occur afterwards.<br />
<br />
=== Changing the max_execution_time in php.ini has no effect ===<br />
<br />
If you changed the {{ic|max_execution_time}} in {{ic|php.ini}} to a value greater than 30 (seconds), you may still get a {{ic|503 Service Unavailable}} response from Apache after 30 seconds. To solve this, add a {{ic|ProxyTimeout}} directive to your http configuration right before the {{ic|<FilesMatch \.php$>}} block:<br />
<br />
{{hc|/etc/httpd/conf/httpd.conf|<br />
ProxyTimeout 300<br />
}}<br />
<br />
and restart {{ic|httpd.service}}.<br />
<br />
== See also ==<br />
<br />
* [http://www.apache.org/ Apache Official Website]<br />
* [http://www.akadia.com/services/ssh_test_certificate.html Tutorial for creating self-signed certificates]<br />
* [http://wiki.apache.org/httpd/CommonMisconfigurations Apache Wiki Troubleshooting]</div>Wolfdogghttps://wiki.archlinux.org/index.php?title=Apache_HTTP_Server&diff=416560Apache HTTP Server2016-01-22T02:30:01Z<p>Wolfdogg: Undo revision 416559 by Wolfdogg (talk)</p>
<hr />
<div>[[Category:Web server]]<br />
[[cs:Apache HTTP Server]]<br />
[[de:LAMP Installation]]<br />
[[el:Apache HTTP Server]]<br />
[[es:Apache HTTP Server]]<br />
[[fr:Lamp]]<br />
[[it:Apache HTTP Server]]<br />
[[ja:LAMP]]<br />
[[pl:Apache HTTP Server]]<br />
[[ru:Apache HTTP Server]]<br />
[[sr:Apache HTTP Server]]<br />
[[tr:LAMP]]<br />
[[zh-cn:Apache HTTP Server]]<br />
{{Related articles start}}<br />
{{Related|PHP}}<br />
{{Related|MySQL}}<br />
{{Related|PhpMyAdmin}}<br />
{{Related|Adminer}}<br />
{{Related|Xampp}}<br />
{{Related|mod_perl}}<br />
{{Related articles end}}<br />
The [[Wikipedia:Apache HTTP Server|Apache HTTP Server]], or Apache for short, is a very popular web server, developed by the Apache Software Foundation.<br />
<br />
Apache is often used together with a scripting language such as PHP and database such as MySQL. This combination is often referred to as a [[Wikipedia:LAMP (software bundle)|LAMP]] stack ('''L'''inux, '''A'''pache, '''M'''ySQL, '''P'''HP). This article describes how to set up Apache and how to optionally integrate it with [[PHP]] and [[MySQL]].<br />
<br />
== Installation ==<br />
[[Install]] the {{Pkg|apache}} package.<br />
<br />
== Configuration ==<br />
Apache configuration files are located in {{ic|/etc/httpd/conf}}. The main configuration file is {{ic|/etc/httpd/conf/httpd.conf}}, which includes various other configuration files.<br />
The default configuration file should be fine for a simple setup. By default, it will serve the directory {{ic|/srv/http}} to anyone who visits your website.<br />
<br />
To start Apache, start {{ic|httpd.service}} [[systemd#Using units|using systemd]].<br />
<br />
Apache should now be running. Test by visiting http://localhost/ in a web browser. It should display a simple index page.<br />
<br />
For optional further configuration, see the following sections.<br />
<br />
=== Advanced options ===<br />
These options in {{ic|/etc/httpd/conf/httpd.conf}} might be interesting for you:<br />
<br />
User http<br />
:For security reasons, as soon as Apache is started by the root user (directly or via startup scripts) it switches to this UID. The default user is ''http'', which is created automatically during installation.<br />
<br />
Listen 80<br />
:This is the port Apache will listen on. For access from the Internet through a router, you have to forward this port.<br />
<br />
:If you want to set up Apache for local development, you may want it to be accessible only from your computer; in that case, change this line to {{ic|Listen 127.0.0.1:80}}.<br />
<br />
ServerAdmin you@example.com<br />
:This is the admin's email address which can be found on e.g. error pages.<br />
<br />
DocumentRoot "/srv/http"<br />
:This is the directory where you should put your web pages.<br />
<br />
:If you change it, also change {{ic|<Directory "/srv/http">}} to the new {{ic|DocumentRoot}}, and change the {{ic|Require all denied}} line in that section to {{ic|Require all granted}}; otherwise you will get a '''403 Error''' (lack of privileges) when you try to access the new document root. The {{ic|DocumentRoot}} directory and its parent directories must also grant execute permission to others (can be set with {{ic|chmod o+x /path/to/DocumentRoot}}), or you will likewise get a '''403 Error'''.<br />
<br />
AllowOverride None<br />
:This directive in {{ic|<Directory>}} sections causes Apache to completely ignore {{ic|.htaccess}} files. Note that this is now the default for Apache 2.4, so you need to explicitly allow overrides if you plan to use {{ic|.htaccess}} files. If you intend to use {{ic|mod_rewrite}} or other settings in {{ic|.htaccess}} files, you can specify which directives declared in those files may override the server configuration. For more info refer to the [http://httpd.apache.org/docs/current/mod/core.html#allowoverride Apache documentation].<br />
<br />
{{Tip|If you have issues with your configuration you can have Apache check the configuration with: {{ic|apachectl configtest}}}}<br />
<br />
More settings can be found in {{ic|/etc/httpd/conf/extra/httpd-default.conf}}:<br />
<br />
To turn off your server's signature:<br />
ServerSignature Off<br />
<br />
To hide server information like Apache and PHP versions:<br />
ServerTokens Prod<br />
<br />
=== User directories ===<br />
<br />
User directories are available by default through http://localhost/~yourusername/ and show the contents of {{ic|~/public_html}} (this can be changed in {{ic|/etc/httpd/conf/extra/httpd-userdir.conf}}).<br />
<br />
If you do not want user directories to be available on the web, comment out the following line in {{ic|/etc/httpd/conf/httpd.conf}}:<br />
<br />
Include conf/extra/httpd-userdir.conf<br />
<br />
{{Accuracy|It is not necessary to set {{ic|+x}} for every user; setting it only for the web server via ACLs suffices (see [[Access Control Lists#Granting execution permissions for private files to a Web Server]]).}}<br />
<br />
You must make sure that your home directory permissions are set properly so that Apache can get there. Your home directory and {{ic|~/public_html}} must be executable for others ("rest of the world"):<br />
<br />
$ chmod o+x ~<br />
$ chmod o+x ~/public_html<br />
$ chmod -R o+r ~/public_html<br />
<br />
Restart {{ic|httpd.service}} to apply any changes. See also [[Umask#Set the mask value]].<br />
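As a quick way to spot which directory is missing the execute bit, the loop below walks from a web root up toward a base directory and reports any component that others cannot traverse. It is only a sketch run against a throwaway tree created with {{ic|mktemp}}; point the same logic at a real {{ic|~/public_html}} path to check a live setup.<br />
<br />
```shell
#!/bin/sh
# Walk from a web root up to a base directory and report any component
# that lacks the others-execute bit Apache needs for traversal.
# Demonstrated on a throwaway tree; substitute a real path on a live system.
root=$(mktemp -d)
mkdir -p "$root/user/public_html"
chmod 711 "$root/user"               # enterable, but not listable, by others
chmod 700 "$root/user/public_html"   # simulate a web root Apache cannot enter
blocked=""
p="$root/user/public_html"
while [ "$p" != "$root" ]; do
    case $(stat -c '%A' "$p") in
        *[xt]) ;;                    # others may traverse (x, or sticky-bit t)
        *)     blocked="$p" ;;       # remember the offending directory
    esac
    p=$(dirname "$p")
done
echo "blocked at: ${blocked:-none}"
rm -rf "$root"
```
<br />
On a real system, the fix for a blocked component is the {{ic|chmod o+x}} shown above.<br />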
<br />
=== TLS/SSL ===<br />
{{pkg|openssl}} provides TLS/SSL support and is installed by default on Arch installations.<br />
<br />
In {{ic|/etc/httpd/conf/httpd.conf}}, uncomment the following three lines:<br />
LoadModule ssl_module modules/mod_ssl.so<br />
LoadModule socache_shmcb_module modules/mod_socache_shmcb.so<br />
Include conf/extra/httpd-ssl.conf<br />
<br />
Create a private key and self-signed certificate. This is adequate for most installations that do not require a [[wikipedia:Certificate_signing_request|CSR]]:<br />
<br />
# cd /etc/httpd/conf<br />
# openssl req -new -x509 -nodes -newkey rsa:4096 -keyout server.key -out server.crt -days 1095<br />
# chmod 400 server.key<br />
# chmod 444 server.crt<br />
<br />
{{Note|The {{ic|-days}} switch is optional, and the RSA key size can be reduced to 2048 bits (the default).}}<br />
<br />
Then make sure the {{ic|SSLCertificateFile}} and {{ic|SSLCertificateKeyFile}} lines in {{ic|/etc/httpd/conf/extra/httpd-ssl.conf}} point to the key and certificate you have just created.<br />
<br />
If you need to create a [[wikipedia:Certificate signing request|CSR]], follow these keygen instructions instead of the above:<br />
<br />
# openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:4096 -out server.key<br />
# chmod 400 server.key<br />
# openssl req -new -sha256 -key server.key -out server.csr<br />
# openssl x509 -req -days 1095 -in server.csr -signkey server.key -out server.crt<br />
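Whichever route you take, you can confirm that the resulting key and certificate actually belong together by comparing the public key each one contains; a mismatched pair is a common reason for httpd failing to start with {{ic|mod_ssl}} enabled. The following sketch (assuming only the {{ic|openssl}} tool already used above) generates a throwaway pair in a temporary directory, with a 2048-bit key to keep it fast, rather than touching {{ic|/etc/httpd/conf}}:<br />
<br />
```shell
#!/bin/sh
# Generate a scratch self-signed pair, then verify key and certificate match
# by comparing the public key each one contains.
dir=$(mktemp -d)                      # stand-in for /etc/httpd/conf
openssl req -new -x509 -nodes -newkey rsa:2048 \
    -keyout "$dir/server.key" -out "$dir/server.crt" \
    -days 1095 -subj "/CN=localhost" 2>/dev/null
key_pub=$(openssl pkey -in "$dir/server.key" -pubout 2>/dev/null)
crt_pub=$(openssl x509 -in "$dir/server.crt" -pubkey -noout)
if [ "$key_pub" = "$crt_pub" ]; then match=yes; else match=no; fi
echo "key and certificate match: $match"
rm -rf "$dir"
```
<br />
Run the same two extraction commands against {{ic|server.key}} and {{ic|server.crt}} in {{ic|/etc/httpd/conf}} to check a real pair.<br />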
<br />
{{Note | For more openssl options, read the [https://www.openssl.org/docs/apps/openssl.html man page] or peruse openssl's [https://www.openssl.org/docs/ extensive documentation].}}<br />
<br />
{{Warning|If you plan on implementing SSL/TLS, know that some variations and implementations are [https://weakdh.org/#affected still] [[wikipedia:Transport_Layer_Security#Attacks_against_TLS.2FSSL|vulnerable to attack]]. For details on these current vulnerabilities within SSL/TLS and how to apply appropriate changes to the web server, visit http://disablessl3.com/ and https://weakdh.org/sysadmin.html}}<br />
<br />
{{Tip|Mozilla has a useful [https://wiki.mozilla.org/Security/Server_Side_TLS SSL/TLS article] which includes [https://wiki.mozilla.org/Security/Server_Side_TLS#Apache Apache specific] configuration guidelines as well as an [https://mozilla.github.io/server-side-tls/ssl-config-generator/ automated tool] to help create a more secure configuration.}}<br />
<br />
Restart {{ic|httpd.service}} to apply any changes.<br />
<br />
=== Virtual hosts ===<br />
<br />
{{Note|You will need to add a separate <VirtualHost domainname:443> section for virtual host SSL support.<br />
See [[#Managing many virtual hosts]] for an example file.}}<br />
<br />
If you want to have more than one host, uncomment the following line in {{ic|/etc/httpd/conf/httpd.conf}}:<br />
Include conf/extra/httpd-vhosts.conf<br />
<br />
In {{ic|/etc/httpd/conf/extra/httpd-vhosts.conf}} set your virtual hosts. The default file contains an elaborate example that should help you get started.<br />
<br />
To test the virtual hosts on your local machine, add the virtual host names to your {{ic|/etc/hosts}} file:<br />
127.0.0.1 domainname1.dom <br />
127.0.0.1 domainname2.dom<br />
<br />
Restart {{ic|httpd.service}} to apply any changes.<br />
<br />
==== Managing many virtual hosts ====<br />
<br />
If you have a large number of virtual hosts, you may want to enable and disable them easily. It is recommended to create one configuration file per virtual host and store them all in one folder, e.g. {{ic|/etc/httpd/conf/vhosts}}.<br />
<br />
First create the folder:<br />
# mkdir /etc/httpd/conf/vhosts<br />
<br />
Then place the individual configuration files in it:<br />
# nano /etc/httpd/conf/vhosts/domainname1.dom<br />
# nano /etc/httpd/conf/vhosts/domainname2.dom<br />
...<br />
<br />
Finally, {{ic|Include}} the individual configuration files in your {{ic|/etc/httpd/conf/httpd.conf}}:<br />
#Enabled Vhosts:<br />
Include conf/vhosts/domainname1.dom<br />
Include conf/vhosts/domainname2.dom<br />
<br />
You can enable and disable individual virtual hosts by commenting or uncommenting the corresponding {{ic|Include}} lines.<br />
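The commenting step can itself be scripted. As a sketch, the following disables one vhost by prefixing its {{ic|Include}} line with {{ic|#}} in a scratch file standing in for {{ic|httpd.conf}} (run the same {{ic|sed}} against the real file, as root, once you are confident):<br />
<br />
```shell
#!/bin/sh
# Disable one vhost by commenting out its Include line with sed.
# Operates on a scratch file standing in for /etc/httpd/conf/httpd.conf.
conf=$(mktemp)
cat > "$conf" <<'EOF'
Include conf/vhosts/domainname1.dom
Include conf/vhosts/domainname2.dom
EOF
# '&' in the replacement re-inserts the matched line after the '#'
sed -i 's|^Include conf/vhosts/domainname2.dom$|#&|' "$conf"
result=$(grep 'domainname2' "$conf")
echo "$result"
rm -f "$conf"
```
<br />
Removing the {{ic|#}} again (or using {{ic|s|^#Include|Include|}}) re-enables the vhost; remember to reload httpd afterwards.<br />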
<br />
A very basic vhost file will look like this:<br />
<br />
{{hc|/etc/httpd/conf/vhosts/domainname1.dom|<nowiki><br />
<VirtualHost domainname1.dom:80><br />
ServerAdmin webmaster@domainname1.dom<br />
DocumentRoot "/home/user/http/domainname1.dom"<br />
ServerName domainname1.dom<br />
ServerAlias domainname1.dom<br />
ErrorLog "/var/log/httpd/domainname1.dom-error_log"<br />
CustomLog "/var/log/httpd/domainname1.dom-access_log" common<br />
<br />
<Directory "/home/user/http/domainname1.dom"><br />
Require all granted<br />
</Directory><br />
</VirtualHost><br />
<br />
<VirtualHost domainname1.dom:443><br />
ServerAdmin webmaster@domainname1.dom<br />
DocumentRoot "/home/user/http/domainname1.dom"<br />
ServerName domainname1.dom:443<br />
ServerAlias domainname1.dom<br />
ErrorLog "/var/log/httpd/domainname1.dom-error_log"<br />
CustomLog "/var/log/httpd/domainname1.dom-access_log" common<br />
<br />
<Directory "/home/user/http/domainname1.dom"><br />
Require all granted<br />
</Directory><br />
<br />
SSLEngine on<br />
SSLCertificateFile "/etc/httpd/conf/apache.crt"<br />
SSLCertificateKeyFile "/etc/httpd/conf/apache.key"<br />
</VirtualHost></nowiki>}}<br />
<br />
== Extensions ==<br />
<br />
=== PHP ===<br />
To install [[PHP]], first [[install]] the {{Pkg|php}} and {{Pkg|php-apache}} packages.<br />
<br />
In {{ic|/etc/httpd/conf/httpd.conf}}, comment the line:<br />
LoadModule mpm_event_module modules/mod_mpm_event.so<br />
and uncomment the line:<br />
LoadModule mpm_prefork_module modules/mod_mpm_prefork.so<br />
<br />
{{Note|1=The above is required because {{ic|libphp7.so}} included with {{pkg|php-apache}} does not work with {{ic|mod_mpm_event}} and will only work with {{ic|mod_mpm_prefork}}. ({{bug|39218}})<br />
<br />
Otherwise you will get the following error:<br />
{{bc|1=Apache is running a threaded MPM, but your PHP Module is not compiled to be threadsafe. You need to recompile PHP.<br />
AH00013: Pre-configuration failed<br />
httpd.service: control process exited, code=exited status=1}}<br />
<br />
As an alternative, you can use {{ic|mod_proxy_fcgi}} (see [[#Using php-fpm and mod_proxy_fcgi]] below).<br />
}}<br />
<br />
To enable PHP, add these lines to {{ic|/etc/httpd/conf/httpd.conf}}:<br />
*Place this in the {{ic|LoadModule}} list anywhere after {{ic|LoadModule dir_module modules/mod_dir.so}}:<br />
LoadModule php7_module modules/libphp7.so<br />
*Place this at the end of the {{ic|Include}} list:<br />
Include conf/extra/php7_module.conf<br />
<br />
Restart {{ic|httpd.service}} [[systemd#Using units|using systemd]].<br />
<br />
To test whether PHP was correctly configured: create a file called {{ic|test.php}} in your Apache {{ic|DocumentRoot}} directory (e.g. {{ic|/srv/http/}} or {{ic|~/public_html}}) with the following contents:<br />
<?php phpinfo(); ?><br />
To see if it works go to: http://localhost/test.php or http://localhost/~myname/test.php<br />
<br />
For advanced configuration and extensions, please read [[PHP]].<br />
<br />
==== Using php-fpm and mod_proxy_fcgi ====<br />
<br />
{{Note|Unlike the widespread setup with ProxyPass, the proxy configuration with SetHandler respects other Apache directives like DirectoryIndex. This ensures better compatibility with software designed for libphp7, mod_fastcgi and mod_fcgid.<br />
If you still want to try ProxyPass, experiment with a line like this: {{bc|ProxyPassMatch ^/(.*\.php(/.*)?)$ unix:/run/php-fpm/php-fpm.sock&#124;fcgi://localhost/srv/http/$1}}}}<br />
<br />
[[Install]] the {{pkg|php-fpm}} package.<br />
<br />
Create {{ic|/etc/httpd/conf/extra/php-fpm.conf}} with the following content:<br />
{{hc|/etc/httpd/conf/extra/php-fpm.conf|<nowiki><br />
<FilesMatch \.php$><br />
SetHandler "proxy:unix:/run/php-fpm/php-fpm.sock|fcgi://localhost/"<br />
</FilesMatch><br />
<Proxy "fcgi://localhost/" enablereuse=on max=10><br />
</Proxy><br />
<IfModule dir_module><br />
DirectoryIndex index.php index.html<br />
</IfModule><br />
</nowiki>}}<br />
<br />
And include it at the bottom of {{ic|/etc/httpd/conf/httpd.conf}}:<br />
Include conf/extra/php-fpm.conf<br />
<br />
{{Note|The pipe between {{ic|sock}} and {{ic|fcgi}} must not be surrounded by spaces! {{ic|localhost}} can be replaced by any string, but it must match between the {{ic|SetHandler}} and {{ic|Proxy}} directives; see [https://httpd.apache.org/docs/2.4/mod/mod_proxy_fcgi.html mod_proxy_fcgi] for details. {{ic|SetHandler}} and {{ic|Proxy}} can also be used in per-vhost configurations, but the name after {{ic|fcgi://}} should then differ for each vhost.}}<br />
<br />
You can configure PHP-FPM in {{ic|/etc/php/php-fpm.d/www.conf}}, but the default setup should work fine.<br />
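One detail worth double-checking is that the socket path in the {{ic|SetHandler}} value matches the {{ic|listen}} value in {{ic|www.conf}}; if they differ, PHP requests will fail. A sketch of that comparison, using the values from the snippets above as sample strings:<br />
<br />
```shell
#!/bin/sh
# Extract the socket path from the SetHandler value and compare it with
# php-fpm's listen directive. Sample strings mirror the snippets above.
fpm_listen="/run/php-fpm/php-fpm.sock"   # 'listen' in /etc/php/php-fpm.d/www.conf
handler='proxy:unix:/run/php-fpm/php-fpm.sock|fcgi://localhost/'
sock=${handler#proxy:unix:}              # drop the proxy prefix
sock=${sock%%|*}                         # drop the pipe and everything after it
if [ "$sock" = "$fpm_listen" ]; then
    echo "socket paths agree: $sock"
else
    echo "MISMATCH: handler uses $sock, php-fpm listens on $fpm_listen"
fi
```
<br />
On a live system, take the real values from {{ic|/etc/httpd/conf/extra/php-fpm.conf}} and {{ic|/etc/php/php-fpm.d/www.conf}} instead of the sample strings.<br />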
<br />
{{Note|<br />
If you have added the following lines to {{ic|httpd.conf}}, remove them, as they are no longer needed:<br />
LoadModule php7_module modules/libphp7.so<br />
Include conf/extra/php7_module.conf<br />
}}<br />
<br />
[[Restart]] {{ic|httpd.service}} and {{ic|php-fpm.service}}.<br />
<br />
==== Using apache2-mpm-worker and mod_fcgid ====<br />
[[Install]] the {{pkg|mod_fcgid}} and {{Pkg|php-cgi}} packages.<br />
<br />
Create the needed directory and symlink the PHP wrapper into it:<br />
# mkdir /srv/http/fcgid-bin<br />
# ln -s /usr/bin/php-cgi /srv/http/fcgid-bin/php-fcgid-wrapper<br />
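The wrapper layout created above can be sanity-checked with {{ic|readlink}}. This sketch reproduces it in a scratch directory standing in for {{ic|/srv/http}} (the symlink can be created even before {{pkg|php-cgi}} is installed, since {{ic|ln -s}} does not require the target to exist):<br />
<br />
```shell
#!/bin/sh
# Reproduce the fcgid-bin wrapper layout in a scratch directory and
# confirm the symlink points at the php-cgi binary.
srv=$(mktemp -d)                 # stand-in for /srv/http
mkdir "$srv/fcgid-bin"
ln -s /usr/bin/php-cgi "$srv/fcgid-bin/php-fcgid-wrapper"
target=$(readlink "$srv/fcgid-bin/php-fcgid-wrapper")
echo "php-fcgid-wrapper -> $target"
rm -rf "$srv"
```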
<br />
Uncomment the following in {{ic|/etc/conf.d/apache}}:<br />
HTTPD=/usr/bin/httpd.worker<br />
<br />
Create {{ic|/etc/httpd/conf/extra/php-fcgid.conf}} with the following content:<br />
{{hc|/etc/httpd/conf/extra/php-fcgid.conf|<nowiki><br />
# Required modules: fcgid_module<br />
<br />
<IfModule fcgid_module><br />
AddHandler php-fcgid .php<br />
AddType application/x-httpd-php .php<br />
Action php-fcgid /fcgid-bin/php-fcgid-wrapper<br />
ScriptAlias /fcgid-bin/ /srv/http/fcgid-bin/<br />
SocketPath /var/run/httpd/fcgidsock<br />
SharememPath /var/run/httpd/fcgid_shm<br />
# If you don't allow bigger requests many applications may fail (such as WordPress login)<br />
FcgidMaxRequestLen 536870912<br />
# Path to php.ini – defaults to /etc/phpX/cgi<br />
DefaultInitEnv PHPRC=/etc/php/<br />
# Number of PHP children that will be launched. Leave undefined to let PHP decide.<br />
#DefaultInitEnv PHP_FCGI_CHILDREN 3<br />
# Maximum requests before a process is stopped and a new one is launched<br />
#DefaultInitEnv PHP_FCGI_MAX_REQUESTS 5000<br />
<Location /fcgid-bin/><br />
SetHandler fcgid-script<br />
Options +ExecCGI<br />
</Location><br />
</IfModule><br />
</nowiki>}}<br />
<br />
Edit {{ic|/etc/httpd/conf/httpd.conf}}, and add the following lines:<br />
LoadModule fcgid_module modules/mod_fcgid.so<br />
Include conf/extra/httpd-mpm.conf<br />
Include conf/extra/php-fcgid.conf<br />
<br />
{{Note|<br />
If you have added the following lines to {{ic|httpd.conf}}, remove them, as they are no longer needed:<br />
LoadModule php7_module modules/libphp7.so<br />
Include conf/extra/php7_module.conf<br />
}}<br />
<br />
[[Restart]] {{ic|httpd.service}}.<br />
<br />
==== MySQL/MariaDB ====<br />
<br />
Follow the instructions in [[PHP#MySQL/MariaDB]].<br />
<br />
When configuration is complete, [[restart]] {{ic|httpd.service}} to apply all the changes.<br />
<br />
=== HTTP/2 ===<br />
<br />
To enable HTTP/2 support, uncomment the following line in {{ic|httpd.conf}}:<br />
LoadModule http2_module modules/mod_http2.so<br />
<br />
And add the following line:<br />
Protocols h2 http/1.1<br />
<br />
For more information, see the [https://httpd.apache.org/docs/2.4/mod/mod_http2.html mod_http2] documentation.<br />
<br />
== Troubleshooting ==<br />
<br />
=== Apache Status and Logs ===<br />
<br />
See the status of the Apache daemon with [[systemctl]].<br />
<br />
Apache logs can be found in {{ic|/var/log/httpd/}}.<br />
<br />
=== Error: PID file /run/httpd/httpd.pid not readable (yet?) after start ===<br />
<br />
Comment out the {{ic|unique_id_module}} line: {{ic|#LoadModule unique_id_module modules/mod_unique_id.so}}<br />
<br />
=== Upgrading Apache to 2.4 from 2.2 ===<br />
<br />
If you use {{Pkg|php-apache}}, follow the instructions at [[#PHP]] above.<br />
<br />
Access Control has changed. Convert all {{ic|Order}}, {{ic|Allow}}, {{ic|Deny}} and {{ic|Satisfy}} directives to the new {{ic|Require}} syntax. [http://httpd.apache.org/docs/2.4/mod/mod_access_compat.html mod_access_compat] allows you to use the deprecated format during a transition phase.<br />
<br />
More information: [http://httpd.apache.org/docs/2.4/upgrading.html Upgrading to 2.4 from 2.2]<br />
<br />
=== Apache is running a threaded MPM, but your PHP Module is not compiled to be threadsafe. ===<br />
<br />
If {{ic|httpd.service}} fails when loading {{ic|php7_module}} and you get an error like this in the journal:<br />
<br />
Apache is running a threaded MPM, but your PHP Module is not compiled to be threadsafe. You need to recompile PHP.<br />
<br />
you need to replace {{ic|mpm_event_module}} with {{ic|mpm_prefork_module}}:<br />
<br />
{{hc|/etc/httpd/conf/httpd.conf|<br />
<s>LoadModule mpm_event_module modules/mod_mpm_event.so</s><br />
LoadModule mpm_prefork_module modules/mod_mpm_prefork.so<br />
}}<br />
<br />
and restart {{ic|httpd.service}}.<br />
<br />
=== AH00534: httpd: Configuration error: No MPM loaded. ===<br />
<br />
You might encounter this error after an upgrade. It is simply the result of a change in the default {{ic|httpd.conf}} that you might not have reproduced in your local configuration.<br />
To fix it, uncomment the following line:<br />
<br />
{{hc|/etc/httpd/conf/httpd.conf|<br />
LoadModule mpm_prefork_module modules/mod_mpm_prefork.so<br />
}}<br />
<br />
Also check [[#Apache_is_running_a_threaded_MPM.2C_but_your_PHP_Module_is_not_compiled_to_be_threadsafe.|the above]] if more errors occur afterwards.<br />
<br />
=== Changing the max_execution_time in php.ini has no effect ===<br />
<br />
If you changed {{ic|max_execution_time}} in {{ic|php.ini}} to a value greater than 30 (seconds), you may still get a {{ic|503 Service Unavailable}} response from Apache after 30 seconds. To solve this, add a {{ic|ProxyTimeout}} directive to your httpd configuration right before the {{ic|<FilesMatch \.php$>}} block:<br />
<br />
{{hc|/etc/httpd/conf/httpd.conf|<br />
ProxyTimeout 300<br />
}}<br />
<br />
and restart {{ic|httpd.service}}.<br />
<br />
== See also ==<br />
<br />
* [http://www.apache.org/ Apache Official Website]<br />
* [http://www.akadia.com/services/ssh_test_certificate.html Tutorial for creating self-signed certificates]<br />
* [http://wiki.apache.org/httpd/CommonMisconfigurations Apache Wiki Troubleshooting]</div>Wolfdogghttps://wiki.archlinux.org/index.php?title=Apache_HTTP_Server&diff=416559Apache HTTP Server2016-01-22T02:25:11Z<p>Wolfdogg: /* Using php-fpm and mod_proxy_fcgi */</p>
<hr />
<div>[[Category:Web server]]<br />
[[cs:Apache HTTP Server]]<br />
[[de:LAMP Installation]]<br />
[[el:Apache HTTP Server]]<br />
[[es:Apache HTTP Server]]<br />
[[fr:Lamp]]<br />
[[it:Apache HTTP Server]]<br />
[[ja:LAMP]]<br />
[[pl:Apache HTTP Server]]<br />
[[ru:Apache HTTP Server]]<br />
[[sr:Apache HTTP Server]]<br />
[[tr:LAMP]]<br />
[[zh-cn:Apache HTTP Server]]<br />
{{Related articles start}}<br />
{{Related|PHP}}<br />
{{Related|MySQL}}<br />
{{Related|PhpMyAdmin}}<br />
{{Related|Adminer}}<br />
{{Related|Xampp}}<br />
{{Related|mod_perl}}<br />
{{Related articles end}}<br />
The [[Wikipedia:Apache HTTP Server|Apache HTTP Server]], or Apache for short, is a very popular web server, developed by the Apache Software Foundation.<br />
<br />
Apache is often used together with a scripting language such as PHP and database such as MySQL. This combination is often referred to as a [[Wikipedia:LAMP (software bundle)|LAMP]] stack ('''L'''inux, '''A'''pache, '''M'''ySQL, '''P'''HP). This article describes how to set up Apache and how to optionally integrate it with [[PHP]] and [[MySQL]].<br />
<br />
== Installation ==<br />
[[Install]] the {{Pkg|apache}} package.<br />
<br />
== Configuration ==<br />
Apache configuration files are located in {{ic|/etc/httpd/conf}}. The main configuration file is {{ic|/etc/httpd/conf/httpd.conf}}, which includes various other configuration files.<br />
The default configuration file should be fine for a simple setup. By default, it will serve the directory {{ic|/srv/http}} to anyone who visits your website.<br />
<br />
To start Apache, start {{ic|httpd.service}} [[systemd#Using units|using systemd]].<br />
<br />
Apache should now be running. Test by visiting http://localhost/ in a web browser. It should display a simple index page.<br />
<br />
For optional further configuration, see the following sections.<br />
<br />
=== Advanced options ===<br />
These options in {{ic|/etc/httpd/conf/httpd.conf}} might be interesting for you:<br />
<br />
User http<br />
:For security reasons, as soon as Apache is started by the root user (directly or via startup scripts) it switches to this UID. The default user is ''http'', which is created automatically during installation.<br />
<br />
Listen 80<br />
:This is the port Apache will listen to. For Internet-access with router, you have to forward the port.<br />
<br />
:If you want to setup Apache for local development you may want it to be only accessible from your computer. Then change this line to {{ic|Listen 127.0.0.1:80}}.<br />
<br />
ServerAdmin you@example.com<br />
:This is the admin's email address which can be found on e.g. error pages.<br />
<br />
DocumentRoot "/srv/http"<br />
:This is the directory where you should put your web pages.<br />
<br />
:Change it, if you want to, but do not forget to also change {{ic|<Directory "/srv/http">}} to whatever you changed your {{ic|DocumentRoot}} to, or you will likely get a '''403 Error''' (lack of privileges) when you try to access the new document root. Do not forget to change the {{ic|Require all denied}} line to {{ic|Require all granted}}, otherwise you will get a '''403 Error'''. Remember that the DocumentRoot directory and its parent folders must allow execution permission to others (can be set with {{ic|chmod o+x /path/to/DocumentRoot}}), otherwise you will get a '''403 Error'''.<br />
<br />
AllowOverride None<br />
:This directive in {{ic|<Directory>}} sections causes Apache to completely ignore {{ic|.htaccess}} files. Note that this is now the default for Apache 2.4, so you need to explicitly allow overrides if you plan to use {{ic|.htaccess}} files. If you intend to use {{ic|mod_rewrite}} or other settings in {{ic|.htaccess}} files, you can allow which directives declared in that file can override server configuration. For more info refer to the [http://httpd.apache.org/docs/current/mod/core.html#allowoverride Apache documentation].<br />
<br />
{{Tip|If you have issues with your configuration you can have Apache check the configuration with: {{ic|apachectl configtest}}}}<br />
<br />
More settings can be found in {{ic|/etc/httpd/conf/extra/httpd-default.conf}}:<br />
<br />
To turn off your server's signature:<br />
ServerSignature Off<br />
<br />
To hide server information like Apache and PHP versions:<br />
ServerTokens Prod<br />
<br />
=== User directories ===<br />
<br />
User directories are available by default through http://localhost/~yourusername/ and show the contents of {{ic|~/public_html}} (this can be changed in {{ic|/etc/httpd/conf/extra/httpd-userdir.conf}}).<br />
<br />
If you do not want user directories to be available on the web, comment out the following line in {{ic|/etc/httpd/conf/httpd.conf}}:<br />
<br />
Include conf/extra/httpd-userdir.conf<br />
<br />
{{Accuracy|It is not necessary to set {{ic|+x}} for every users, setting it only for the webserver via ACLs suffices (see [[Access Control Lists#Granting execution permissions for private files to a Web Server]]).}}<br />
<br />
You must make sure that your home directory permissions are set properly so that Apache can get there. Your home directory and {{ic|~/public_html}} must be executable for others ("rest of the world"):<br />
<br />
$ chmod o+x ~<br />
$ chmod o+x ~/public_html<br />
$ chmod -R o+r ~/public_html<br />
<br />
Restart {{ic|httpd.service}} to apply any changes. See also [[Umask#Set the mask value]].<br />
<br />
=== TLS/SSL ===<br />
{{pkg|openssl}} provides TLS/SSL support and is installed by default on Arch installations.<br />
<br />
In {{ic|/etc/httpd/conf/httpd.conf}}, uncomment the following three lines:<br />
LoadModule ssl_module modules/mod_ssl.so<br />
LoadModule socache_shmcb_module modules/mod_socache_shmcb.so<br />
Include conf/extra/httpd-ssl.conf<br />
<br />
Create a private key and self-signed certificate. This is adequate for most installations that do not require a [[wikipedia:Certificate_signing_request|CSR]]:<br />
<br />
# cd /etc/httpd/conf<br />
# openssl req -new -x509 -nodes -newkey rsa:4096 -keyout server.key -out server.crt -days 1095<br />
# chmod 400 server.key<br />
# chmod 444 server.crt<br />
<br />
{{Note|The -days switch is optional and RSA keysize can be as low as 2048 (default).}}<br />
<br />
Then make sure the {{ic|SSLCertificateFile}} and {{ic|SSLCertificateKeyFile}} lines in {{ic|/etc/httpd/conf/extra/httpd-ssl.conf}} point to the key and certificate you have just created.<br />
<br />
If you need to create a [[wikipedia:Certificate signing request|CSR]], follow these keygen instructions instead of the above:<br />
<br />
# openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:4096 -out server.key<br />
# chmod 400 server.key<br />
# openssl req -new -sha256 -key server.key -out server.csr<br />
# openssl x509 -req -days 1095 -in server.csr -signkey server.key -out server.crt<br />
<br />
{{Note | For more openssl options, read the [https://www.openssl.org/docs/apps/openssl.html man page] or peruse openssl's [https://www.openssl.org/docs/ extensive documentation].}}<br />
<br />
{{Warning|If you plan on implementing SSL/TLS, know that some variations and implementations are [https://weakdh.org/#affected still] [[wikipedia:Transport_Layer_Security#Attacks_against_TLS.2FSSL|vulnerable to attack]]. For details on these current vulnerabilities within SSL/TLS and how to apply appropriate changes to the web server, visit http://disablessl3.com/ and https://weakdh.org/sysadmin.html}}<br />
<br />
{{Tip|Mozilla has a useful [https://wiki.mozilla.org/Security/Server_Side_TLS SSL/TLS article] which includes [https://wiki.mozilla.org/Security/Server_Side_TLS#Apache Apache specific] configuration guidelines as well as an [https://mozilla.github.io/server-side-tls/ssl-config-generator/ automated tool] to help create a more secure configuration.}}<br />
<br />
Restart {{ic|httpd.service}} to apply any changes.<br />
<br />
=== Virtual hosts ===<br />
<br />
{{Note|You will need to add a separate <VirtualHost dommainame:443> section for virtual host SSL support.<br />
See [[#Managing many virtual hosts]] for an example file.}}<br />
<br />
If you want to have more than one host, uncomment the following line in {{ic|/etc/httpd/conf/httpd.conf}}:<br />
Include conf/extra/httpd-vhosts.conf<br />
<br />
In {{ic|/etc/httpd/conf/extra/httpd-vhosts.conf}} set your virtual hosts. The default file contains an elaborate example that should help you get started.<br />
<br />
To test the virtual hosts on you local machine, add the virtual names to your {{ic|/etc/hosts}} file:<br />
127.0.0.1 domainname1.dom <br />
127.0.0.1 domainname2.dom<br />
<br />
Restart {{ic|httpd.service}} to apply any changes.<br />
<br />
==== Managing many virtual hosts ====<br />
<br />
If you have a huge amount of virtual hosts, you may want to easily disable and enable them. It is recommended to create one configuration file per virtual host and store them all in one folder, eg: {{ic|/etc/httpd/conf/vhosts}}.<br />
<br />
First create the folder:<br />
# mkdir /etc/httpd/conf/vhosts<br />
<br />
Then place the single configuration files in it:<br />
# nano /etc/httpd/conf/vhosts/domainname1.dom<br />
# nano /etc/httpd/conf/vhosts/domainname2.dom<br />
...<br />
<br />
In the last step, {{ic|Include}} the single configurations in your {{ic|/etc/httpd/conf/httpd.conf}}:<br />
#Enabled Vhosts:<br />
Include conf/vhosts/domainname1.dom<br />
Include conf/vhosts/domainname2.dom<br />
<br />
You can enable and disable single virtual hosts by commenting or uncommenting them.<br />
<br />
A very basic vhost file will look like this:<br />
<br />
{{hc|/etc/httpd/conf/vhosts/domainname1.dom|<nowiki><br />
<VirtualHost domainname1.dom:80><br />
ServerAdmin webmaster@domainname1.dom<br />
DocumentRoot "/home/user/http/domainname1.dom"<br />
ServerName domainname1.dom<br />
ServerAlias domainname1.dom<br />
ErrorLog "/var/log/httpd/domainname1.dom-error_log"<br />
CustomLog "/var/log/httpd/domainname1.dom-access_log" common<br />
<br />
<Directory "/home/user/http/domainname1.dom"><br />
Require all granted<br />
</Directory><br />
</VirtualHost><br />
<br />
<VirtualHost domainname1.dom:443><br />
ServerAdmin webmaster@domainname1.dom<br />
DocumentRoot "/home/user/http/domainname1.dom"<br />
ServerName domainname1.dom:443<br />
ServerAlias domainname1.dom:443<br />
ErrorLog "/var/log/httpd/domainname1.dom-error_log"<br />
CustomLog "/var/log/httpd/domainname1.dom-access_log" common<br />
<br />
<Directory "/home/user/http/domainname1.dom"><br />
Require all granted<br />
</Directory><br />
<br />
SSLEngine on<br />
SSLCertificateFile "/etc/httpd/conf/apache.crt"<br />
SSLCertificateKeyFile "/etc/httpd/conf/apache.key"<br />
</VirtualHost></nowiki>}}<br />
<br />
== Extensions ==<br />
<br />
=== PHP ===<br />
To install [[PHP]], first [[install]] the {{Pkg|php}} and {{Pkg|php-apache}} packages.<br />
<br />
In {{ic|/etc/httpd/conf/httpd.conf}}, comment the line:<br />
LoadModule mpm_event_module modules/mod_mpm_event.so<br />
and uncomment the line:<br />
LoadModule mpm_prefork_module modules/mod_mpm_prefork.so<br />
<br />
{{Note|1=The above is required because {{ic|libphp7.so}}, which is included with {{pkg|php-apache}}, does not work with {{ic|mod_mpm_event}} and will only work with {{ic|mod_mpm_prefork}}. ({{bug|39218}})<br />
<br />
Otherwise you will get the following error:<br />
{{bc|1=Apache is running a threaded MPM, but your PHP Module is not compiled to be threadsafe. You need to recompile PHP.<br />
AH00013: Pre-configuration failed<br />
httpd.service: control process exited, code=exited status=1}}<br />
<br />
As an alternative, you can use {{ic|mod_proxy_fcgi}} (see [[#Using php-fpm and mod_proxy_fcgi]] below).<br />
}}<br />
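The comment/uncomment swap can also be done non-interactively. A sketch with GNU ''sed'', run here against a scratch copy (for real use, point {{ic|conf}} at {{ic|/etc/httpd/conf/httpd.conf}} and verify with {{ic|apachectl configtest}} afterwards):<br />

```shell
# Sketch: swap mod_mpm_event for mod_mpm_prefork in an httpd.conf-style file.
conf=mpm.conf.test
printf '%s\n' 'LoadModule mpm_event_module modules/mod_mpm_event.so' \
              '#LoadModule mpm_prefork_module modules/mod_mpm_prefork.so' > "$conf"

# Comment out the event MPM, uncomment the prefork MPM:
sed -i -e 's|^LoadModule mpm_event_module|#&|' \
       -e 's|^#\(LoadModule mpm_prefork_module\)|\1|' "$conf"

cat "$conf"
```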
<br />
To enable PHP, add these lines to {{ic|/etc/httpd/conf/httpd.conf}}:<br />
*Place this in the {{ic|LoadModule}} list anywhere after {{ic|LoadModule dir_module modules/mod_dir.so}}:<br />
LoadModule php7_module modules/libphp7.so<br />
*Place this at the end of the {{ic|Include}} list:<br />
Include conf/extra/php7_module.conf<br />
<br />
Restart {{ic|httpd.service}} [[systemd#Using units|using systemd]].<br />
<br />
To test whether PHP was correctly configured: create a file called {{ic|test.php}} in your Apache {{ic|DocumentRoot}} directory (e.g. {{ic|/srv/http/}} or {{ic|~/public_html}}) with the following contents:<br />
<?php phpinfo(); ?><br />
To see if it works, visit http://localhost/test.php or http://localhost/~myname/test.php in a browser.<br />
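The test file can be dropped in place from the shell; the document root below is a scratch directory so the sketch is self-contained — substitute your actual {{ic|DocumentRoot}}, e.g. {{ic|/srv/http}}:<br />

```shell
# Drop a minimal phpinfo() page into a document root.
docroot=./docroot-demo          # illustrative; normally /srv/http or ~/public_html
mkdir -p "$docroot"
printf '<?php phpinfo(); ?>\n' > "$docroot/test.php"
cat "$docroot/test.php"
```

Remember to delete {{ic|test.php}} again afterwards, since it exposes details about your installation.<br />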
<br />
For advanced configuration and extensions, please read [[PHP]].<br />
<br />
==== Using php-fpm and mod_proxy_fcgi ====<br />
<br />
{{Note|Unlike the widespread setup with ProxyPass, the proxy configuration with SetHandler respects other Apache directives like DirectoryIndex. This ensures better compatibility with software designed for libphp7, mod_fastcgi and mod_fcgid.<br />
If you still want to try ProxyPass, experiment with a line like this: {{bc|ProxyPassMatch ^/(.*\.php(/.*)?)$ unix:/run/php-fpm/php-fpm.sock&#124;fcgi://localhost/srv/http/$1}}}}<br />
<br />
[[Install]] the {{pkg|php-fpm}} package.<br />
<br />
Create {{ic|/etc/httpd/conf/extra/php-fpm.conf}} with the following content:<br />
{{hc|/etc/httpd/conf/extra/php-fpm.conf|<nowiki><br />
<FilesMatch \.php$><br />
SetHandler "proxy:unix:/run/php-fpm/php-fpm.sock|fcgi://localhost/"<br />
</FilesMatch><br />
<Proxy "fcgi://localhost/" enablereuse=on max=10><br />
</Proxy><br />
<IfModule dir_module><br />
DirectoryIndex index.php index.html<br />
</IfModule><br />
</nowiki>}}<br />
<br />
And include it at the bottom of {{ic|/etc/httpd/conf/httpd.conf}}:<br />
Include conf/extra/php-fpm.conf<br />
<br />
{{Note|The pipe between {{ic|sock}} and {{ic|fcgi}} is not allowed to be surrounded by a space! {{ic|localhost}} can be replaced by any string but it should match in {{ic|SetHandler}} and {{ic|Proxy}} directives. More [https://httpd.apache.org/docs/2.4/mod/mod_proxy_fcgi.html here]. {{ic|SetHandler}} and {{ic|Proxy}} can be used per vhost configs but the name after {{ic|fcgi://}} should differ for each vhost setup.}}<br />
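For example, per-vhost handlers could look like the following sketch; the backend label {{ic|domainname1}} and the paths are illustrative, the only requirement being that each vhost uses its own label in both directives:<br />

```apache
<VirtualHost *:80>
    ServerName domainname1.dom
    DocumentRoot "/srv/http/domainname1.dom"

    # Backend label unique to this vhost; must match in SetHandler and Proxy.
    <FilesMatch \.php$>
        SetHandler "proxy:unix:/run/php-fpm/php-fpm.sock|fcgi://domainname1/"
    </FilesMatch>
    <Proxy "fcgi://domainname1/" enablereuse=on max=10>
    </Proxy>
</VirtualHost>
```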
<br />
You can configure PHP-FPM in {{ic|/etc/php/php-fpm.d/www.conf}}, but the default setup should work fine.<br />
<br />
{{Note|<br />
If you have added the following lines to {{ic|httpd.conf}}, remove them, as they are no longer needed:<br />
LoadModule php7_module modules/libphp7.so<br />
Include conf/extra/php7_module.conf<br />
}}<br />
<br />
[[Restart]] {{ic|httpd.service}} and {{ic|php-fpm.service}}.<br />
<br />
==== Using apache2-mpm-worker and mod_fcgid ====<br />
[[Install]] the {{pkg|mod_fcgid}} and {{Pkg|php-cgi}} packages.<br />
<br />
Create the needed directory and symlink it for the PHP wrapper:<br />
# mkdir /srv/http/fcgid-bin<br />
# ln -s /usr/bin/php-cgi /srv/http/fcgid-bin/php-fcgid-wrapper<br />
<br />
Uncomment the following in {{ic|/etc/conf.d/apache}}:<br />
HTTPD=/usr/bin/httpd.worker<br />
<br />
Create {{ic|/etc/httpd/conf/extra/php-fcgid.conf}} with the following content:<br />
{{hc|/etc/httpd/conf/extra/php-fcgid.conf|<nowiki><br />
# Required modules: fcgid_module<br />
<br />
<IfModule fcgid_module><br />
AddHandler php-fcgid .php<br />
AddType application/x-httpd-php .php<br />
Action php-fcgid /fcgid-bin/php-fcgid-wrapper<br />
ScriptAlias /fcgid-bin/ /srv/http/fcgid-bin/<br />
SocketPath /var/run/httpd/fcgidsock<br />
SharememPath /var/run/httpd/fcgid_shm<br />
# If you don't allow bigger requests many applications may fail (such as WordPress login)<br />
FcgidMaxRequestLen 536870912<br />
# Path to php.ini – defaults to /etc/phpX/cgi<br />
DefaultInitEnv PHPRC=/etc/php/<br />
# Number of PHP children that will be launched. Leave undefined to let PHP decide.<br />
#DefaultInitEnv PHP_FCGI_CHILDREN 3<br />
# Maximum requests before a process is stopped and a new one is launched<br />
#DefaultInitEnv PHP_FCGI_MAX_REQUESTS 5000<br />
<Location /fcgid-bin/><br />
SetHandler fcgid-script<br />
Options +ExecCGI<br />
</Location><br />
</IfModule><br />
</nowiki>}}<br />
<br />
Edit {{ic|/etc/httpd/conf/httpd.conf}}, and add the following lines:<br />
LoadModule fcgid_module modules/mod_fcgid.so<br />
Include conf/extra/httpd-mpm.conf<br />
Include conf/extra/php-fcgid.conf<br />
<br />
{{Note|<br />
If you have added the following lines to {{ic|httpd.conf}}, remove them, as they are no longer needed:<br />
LoadModule php7_module modules/libphp7.so<br />
Include conf/extra/php7_module.conf<br />
}}<br />
<br />
[[Restart]] {{ic|httpd.service}}.<br />
<br />
==== MySQL/MariaDB ====<br />
<br />
Follow the instructions in [[PHP#MySQL/MariaDB]].<br />
<br />
When configuration is complete, [[restart]] {{ic|httpd.service}} to apply all the changes.<br />
<br />
=== HTTP2 ===<br />
<br />
To enable HTTP/2 support, uncomment the following line in {{ic|httpd.conf}}:<br />
LoadModule http2_module modules/mod_http2.so<br />
<br />
And add the following line:<br />
Protocols h2 http/1.1<br />
<br />
For more information, see the [https://httpd.apache.org/docs/2.4/mod/mod_http2.html mod_http2] documentation.<br />
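Browsers only negotiate HTTP/2 over TLS, so the directive is usually placed globally or inside the SSL vhost; note also that recent Apache releases disable HTTP/2 under the prefork MPM used with {{pkg|php-apache}}, so it is best combined with the [[#Using php-fpm and mod_proxy_fcgi]] setup. A minimal sketch (server name and certificate paths are illustrative):<br />

```apache
<VirtualHost *:443>
    ServerName domainname1.dom
    # Offer HTTP/2 with a fallback to HTTP/1.1:
    Protocols h2 http/1.1

    SSLEngine on
    SSLCertificateFile "/etc/httpd/conf/server.crt"
    SSLCertificateKeyFile "/etc/httpd/conf/server.key"
</VirtualHost>
```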
<br />
== Troubleshooting ==<br />
<br />
=== Apache Status and Logs ===<br />
<br />
See the status of the Apache daemon with [[systemctl]].<br />
<br />
Apache logs can be found in {{ic|/var/log/httpd/}}<br />
<br />
=== Error: PID file /run/httpd/httpd.pid not readable (yet?) after start ===<br />
<br />
Comment out the unique_id_module: {{ic|#LoadModule unique_id_module modules/mod_unique_id.so}}<br />
<br />
=== Upgrading Apache to 2.4 from 2.2 ===<br />
<br />
If you use {{Pkg|php-apache}}, follow the instructions at [[#PHP]] above.<br />
<br />
Access Control has changed. Convert all {{ic|Order}}, {{ic|Allow}}, {{ic|Deny}} and {{ic|Satisfy}} directives to the new {{ic|Require}} syntax. [http://httpd.apache.org/docs/2.4/mod/mod_access_compat.html mod_access_compat] allows you to use the deprecated format during a transition phase.<br />
<br />
More information: [http://httpd.apache.org/docs/2.4/upgrading.html Upgrading to 2.4 from 2.2]<br />
<br />
=== Apache is running a threaded MPM, but your PHP Module is not compiled to be threadsafe. ===<br />
<br />
If {{ic|httpd.service}} fails when loading the {{ic|php7_module}} and you get an error like this in the journal:<br />
<br />
Apache is running a threaded MPM, but your PHP Module is not compiled to be threadsafe. You need to recompile PHP.<br />
<br />
you need to replace {{ic|mpm_event_module}} with {{ic|mpm_prefork_module}}:<br />
<br />
{{hc|/etc/httpd/conf/httpd.conf|<br />
<s>LoadModule mpm_event_module modules/mod_mpm_event.so</s><br />
LoadModule mpm_prefork_module modules/mod_mpm_prefork.so<br />
}}<br />
<br />
and restart {{ic|httpd.service}}.<br />
<br />
=== AH00534: httpd: Configuration error: No MPM loaded. ===<br />
<br />
You might encounter this error after an upgrade. It is the result of a change in the default {{ic|httpd.conf}} that you might not have reproduced in your local configuration.<br />
To fix it, uncomment the following line:<br />
<br />
{{hc|/etc/httpd/conf/httpd.conf|<br />
LoadModule mpm_prefork_module modules/mod_mpm_prefork.so<br />
}}<br />
<br />
Also check [[#Apache_is_running_a_threaded_MPM.2C_but_your_PHP_Module_is_not_compiled_to_be_threadsafe.|the above]] if more errors occur afterwards.<br />
<br />
=== Changing the max_execution_time in php.ini has no effect ===<br />
<br />
If you changed the {{ic|max_execution_time}} in {{ic|php.ini}} to a value greater than 30 (seconds), you may still get a {{ic|503 Service Unavailable}} response from Apache after 30 seconds. To solve this, add a {{ic|ProxyTimeout}} directive to your http configuration right before the {{ic|<FilesMatch \.php$>}} block:<br />
<br />
{{hc|/etc/httpd/conf/httpd.conf|<br />
ProxyTimeout 300<br />
}}<br />
<br />
and restart {{ic|httpd.service}}.<br />
<br />
== See also ==<br />
<br />
* [http://www.apache.org/ Apache Official Website]<br />
* [http://www.akadia.com/services/ssh_test_certificate.html Tutorial for creating self-signed certificates]<br />
* [http://wiki.apache.org/httpd/CommonMisconfigurations Apache Wiki Troubleshooting]</div>Wolfdogghttps://wiki.archlinux.org/index.php?title=Apache_HTTP_Server&diff=416558Apache HTTP Server2016-01-22T02:24:17Z<p>Wolfdogg: wrong location</p>
<hr />
<div>[[Category:Web server]]<br />
[[cs:Apache HTTP Server]]<br />
[[de:LAMP Installation]]<br />
[[el:Apache HTTP Server]]<br />
[[es:Apache HTTP Server]]<br />
[[fr:Lamp]]<br />
[[it:Apache HTTP Server]]<br />
[[ja:LAMP]]<br />
[[pl:Apache HTTP Server]]<br />
[[ru:Apache HTTP Server]]<br />
[[sr:Apache HTTP Server]]<br />
[[tr:LAMP]]<br />
[[zh-cn:Apache HTTP Server]]<br />
{{Related articles start}}<br />
{{Related|PHP}}<br />
{{Related|MySQL}}<br />
{{Related|PhpMyAdmin}}<br />
{{Related|Adminer}}<br />
{{Related|Xampp}}<br />
{{Related|mod_perl}}<br />
{{Related articles end}}<br />
The [[Wikipedia:Apache HTTP Server|Apache HTTP Server]], or Apache for short, is a very popular web server, developed by the Apache Software Foundation.<br />
<br />
Apache is often used together with a scripting language such as PHP and database such as MySQL. This combination is often referred to as a [[Wikipedia:LAMP (software bundle)|LAMP]] stack ('''L'''inux, '''A'''pache, '''M'''ySQL, '''P'''HP). This article describes how to set up Apache and how to optionally integrate it with [[PHP]] and [[MySQL]].<br />
<br />
== Installation ==<br />
[[Install]] the {{Pkg|apache}} package.<br />
<br />
== Configuration ==<br />
Apache configuration files are located in {{ic|/etc/httpd/conf}}. The main configuration file is {{ic|/etc/httpd/conf/httpd.conf}}, which includes various other configuration files.<br />
The default configuration file should be fine for a simple setup. By default, it will serve the directory {{ic|/srv/http}} to anyone who visits your website.<br />
<br />
To start Apache, start {{ic|httpd.service}} [[systemd#Using units|using systemd]].<br />
<br />
Apache should now be running. Test by visiting http://localhost/ in a web browser. It should display a simple index page.<br />
<br />
For optional further configuration, see the following sections.<br />
<br />
=== Advanced options ===<br />
These options in {{ic|/etc/httpd/conf/httpd.conf}} might be interesting for you:<br />
<br />
User http<br />
:For security reasons, as soon as Apache is started by the root user (directly or via startup scripts) it switches to this UID. The default user is ''http'', which is created automatically during installation.<br />
<br />
Listen 80<br />
:This is the port Apache will listen to. For Internet access through a router, you have to forward the port.<br />
<br />
:If you want to set up Apache for local development, you may want it to be accessible only from your own computer. In that case, change this line to {{ic|Listen 127.0.0.1:80}}.<br />
<br />
ServerAdmin you@example.com<br />
:This is the admin's email address which can be found on e.g. error pages.<br />
<br />
DocumentRoot "/srv/http"<br />
:This is the directory where you should put your web pages.<br />
<br />
:Change it if you like, but do not forget to also change {{ic|<Directory "/srv/http">}} to the new {{ic|DocumentRoot}}, and to change the {{ic|Require all denied}} line to {{ic|Require all granted}}; otherwise you will get a '''403 Error''' (lack of privileges) when you try to access the new document root. Remember that the {{ic|DocumentRoot}} directory and its parent folders must allow execution permission to others (which can be set with {{ic|chmod o+x /path/to/DocumentRoot}}), or you will likewise get a '''403 Error'''.<br />
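A sketch of granting the search bit on every directory leading down to a document root; the tree below is a scratch copy so the commands are safe to try (substitute the real path, e.g. {{ic|/home/user/http}}):<br />

```shell
# Add o+x to the document root and each of its parent directories.
docroot=./demo/home/user/http   # illustrative scratch tree
mkdir -p "$docroot"

dir=$docroot
while [ "$dir" != "." ] && [ "$dir" != "/" ]; do
    chmod o+x "$dir"
    dir=$(dirname "$dir")
done

# Count directories that now carry the execute bit for others (octal 0001):
find ./demo -type d -perm -001 | wc -l
```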
<br />
AllowOverride None<br />
:This directive in {{ic|<Directory>}} sections causes Apache to completely ignore {{ic|.htaccess}} files. Note that this is now the default for Apache 2.4, so you need to explicitly allow overrides if you plan to use {{ic|.htaccess}} files. If you intend to use {{ic|mod_rewrite}} or other settings in {{ic|.htaccess}} files, you can allow which directives declared in that file can override server configuration. For more info refer to the [http://httpd.apache.org/docs/current/mod/core.html#allowoverride Apache documentation].<br />
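For instance, to let {{ic|.htaccess}} files use ''mod_rewrite'' rules while keeping everything else under server control, you could allow only the FileInfo directive class (a sketch — adjust the directory to your {{ic|DocumentRoot}}):<br />

```apache
<Directory "/srv/http">
    # Permit only FileInfo-class directives (e.g. RewriteRule) in .htaccess files.
    AllowOverride FileInfo
    Require all granted
</Directory>
```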
<br />
{{Tip|If you have issues with your configuration you can have Apache check the configuration with: {{ic|apachectl configtest}}}}<br />
<br />
More settings can be found in {{ic|/etc/httpd/conf/extra/httpd-default.conf}}:<br />
<br />
To turn off your server's signature:<br />
ServerSignature Off<br />
<br />
To hide server information like Apache and PHP versions:<br />
ServerTokens Prod<br />
<br />
=== User directories ===<br />
<br />
User directories are available by default through http://localhost/~yourusername/ and show the contents of {{ic|~/public_html}} (this can be changed in {{ic|/etc/httpd/conf/extra/httpd-userdir.conf}}).<br />
<br />
If you do not want user directories to be available on the web, comment out the following line in {{ic|/etc/httpd/conf/httpd.conf}}:<br />
<br />
Include conf/extra/httpd-userdir.conf<br />
<br />
{{Accuracy|It is not necessary to set {{ic|+x}} for every users, setting it only for the webserver via ACLs suffices (see [[Access Control Lists#Granting execution permissions for private files to a Web Server]]).}}<br />
<br />
You must make sure that your home directory permissions are set properly so that Apache can get there. Your home directory and {{ic|~/public_html}} must be executable for others ("rest of the world"):<br />
<br />
$ chmod o+x ~<br />
$ chmod o+x ~/public_html<br />
$ chmod -R o+r ~/public_html<br />
<br />
Restart {{ic|httpd.service}} to apply any changes. See also [[Umask#Set the mask value]].<br />
<br />
=== TLS/SSL ===<br />
{{pkg|openssl}} provides TLS/SSL support and is installed by default on Arch installations.<br />
<br />
In {{ic|/etc/httpd/conf/httpd.conf}}, uncomment the following three lines:<br />
LoadModule ssl_module modules/mod_ssl.so<br />
LoadModule socache_shmcb_module modules/mod_socache_shmcb.so<br />
Include conf/extra/httpd-ssl.conf<br />
<br />
Create a private key and self-signed certificate. This is adequate for most installations that do not require a [[wikipedia:Certificate_signing_request|CSR]]:<br />
<br />
# cd /etc/httpd/conf<br />
# openssl req -new -x509 -nodes -newkey rsa:4096 -keyout server.key -out server.crt -days 1095<br />
# chmod 400 server.key<br />
# chmod 444 server.crt<br />
<br />
{{Note|The {{ic|-days}} switch is optional and the RSA key size can be as low as 2048 bits (the default).}}<br />
<br />
Then make sure the {{ic|SSLCertificateFile}} and {{ic|SSLCertificateKeyFile}} lines in {{ic|/etc/httpd/conf/extra/httpd-ssl.conf}} point to the key and certificate you have just created.<br />
<br />
If you need to create a [[wikipedia:Certificate signing request|CSR]], follow these keygen instructions instead of the above:<br />
<br />
# openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:4096 -out server.key<br />
# chmod 400 server.key<br />
# openssl req -new -sha256 -key server.key -out server.csr<br />
# openssl x509 -req -days 1095 -in server.csr -signkey server.key -out server.crt<br />
<br />
{{Note | For more openssl options, read the [https://www.openssl.org/docs/apps/openssl.html man page] or peruse openssl's [https://www.openssl.org/docs/ extensive documentation].}}<br />
<br />
{{Warning|If you plan on implementing SSL/TLS, know that some variations and implementations are [https://weakdh.org/#affected still] [[wikipedia:Transport_Layer_Security#Attacks_against_TLS.2FSSL|vulnerable to attack]]. For details on these current vulnerabilities within SSL/TLS and how to apply appropriate changes to the web server, visit http://disablessl3.com/ and https://weakdh.org/sysadmin.html}}<br />
<br />
{{Tip|Mozilla has a useful [https://wiki.mozilla.org/Security/Server_Side_TLS SSL/TLS article] which includes [https://wiki.mozilla.org/Security/Server_Side_TLS#Apache Apache specific] configuration guidelines as well as an [https://mozilla.github.io/server-side-tls/ssl-config-generator/ automated tool] to help create a more secure configuration.}}<br />
<br />
Restart {{ic|httpd.service}} to apply any changes.<br />
<br />
=== Virtual hosts ===<br />
<br />
{{Note|You will need to add a separate <VirtualHost domainname:443> section for virtual host SSL support.<br />
See [[#Managing many virtual hosts]] for an example file.}}<br />
<br />
If you want to have more than one host, uncomment the following line in {{ic|/etc/httpd/conf/httpd.conf}}:<br />
Include conf/extra/httpd-vhosts.conf<br />
<br />
In {{ic|/etc/httpd/conf/extra/httpd-vhosts.conf}} set your virtual hosts. The default file contains an elaborate example that should help you get started.<br />
<br />
To test the virtual hosts on your local machine, add the virtual names to your {{ic|/etc/hosts}} file:<br />
127.0.0.1 domainname1.dom <br />
127.0.0.1 domainname2.dom<br />
<br />
Restart {{ic|httpd.service}} to apply any changes.<br />
</div>Wolfdogghttps://wiki.archlinux.org/index.php?title=ZFS&diff=410476ZFS2015-11-28T21:26:51Z<p>Wolfdogg: </p>
<hr />
<div>[[Category:File systems]]<br />
[[ja:ZFS]]<br />
{{Related articles start}}<br />
{{Related|File systems}}<br />
{{Related|Experimenting with ZFS}}<br />
{{Related|Installing Arch Linux on ZFS}}<br />
{{Related|ZFS on FUSE}}<br />
{{Related articles end}}<br />
<br />
[[Wikipedia:ZFS|ZFS]] is an advanced filesystem created by [[Wikipedia:Sun Microsystems|Sun Microsystems]] (now owned by Oracle) and released for OpenSolaris in November 2005. <br />
<br />
Features of ZFS include: pooled storage (integrated volume management – zpool), [[Wikipedia:Copy-on-write|copy-on-write]], [[Wikipedia:Snapshot (computer storage)|snapshots]], data integrity verification and automatic repair (scrubbing), [[Wikipedia:RAID-Z|RAID-Z]], a maximum [[Wikipedia:Exabyte|16 exabyte]] file size, and a maximum 256 quadrillion [[Wikipedia:Zettabyte|zettabytes]] of storage, with no limit on the number of filesystems (datasets) or files[http://docs.oracle.com/cd/E19253-01/819-5461/zfsover-2/index.html]. ZFS is licensed under the [[Wikipedia:CDDL|Common Development and Distribution License]] (CDDL).<br />
<br />
Described as [http://web.archive.org/web/20060428092023/http://www.sun.com/2004-0914/feature/ "The last word in filesystems"], ZFS is stable, fast, secure, and future-proof. Because it is licensed under the CDDL, which is incompatible with the GPL, it is not possible for ZFS to be distributed along with the Linux kernel. This requirement, however, does not prevent a native Linux kernel module from being developed and distributed by a third party, as is the case with [http://zfsonlinux.org/ zfsonlinux.org] (ZOL).<br />
<br />
ZOL is a project funded by the [https://www.llnl.gov/ Lawrence Livermore National Laboratory] to develop a native Linux kernel module for its massive storage requirements and super computers.<br />
<br />
==Installation==<br />
=== General ===<br />
Install {{AUR|zfs-git}} from the [[Arch User Repository]] or the [[Unofficial user repositories#demz-repo-core|demz-repo-core]] repository. This package has {{AUR|zfs-utils-git}} and {{AUR|spl-git}} as dependencies, the latter of which in turn has {{AUR|spl-utils-git}} as a dependency. SPL (Solaris Porting Layer) is a Linux kernel module implementing Solaris APIs for ZFS compatibility.<br />
<br />
{{Note|1=The zfs-git package replaces the original zfs package from [[AUR]]. ZFSonLinux.org is slow to make stable releases and kernel API changes broke stable builds of ZFSonLinux for Arch. Changes submitted to the master branch of the ZFSonLinux repository are regression tested and therefore considered stable.}}<br />
<br />
For users who prefer ZFS builds from stable releases, {{AUR|zfs-lts}} is available from the [[Arch User Repository]] or the [[Unofficial user repositories#demz-repo-core|demz-repo-core]] repository. A script to build ZFS and its dependencies automatically can be found [[#Automated build script|here]].<br />
<br />
{{warning|The ZFS and SPL kernel modules are tied to a specific kernel version. It is not possible to apply kernel updates until updated packages are uploaded to the AUR or the [[Unofficial user repositories#demz-repo-core|demz-repo-core]] repository.}}<br />
<br />
Test the installation by issuing {{ic|zpool status}} on the command line. If an "insmod" error is produced, try {{ic|depmod -a}}.<br />
<br />
=== Root on ZFS ===<br />
<br />
When performing an Arch install on ZFS, {{AUR|zfs-git}} and its dependencies can be installed in the archiso environment as outlined in the previous section.<br />
<br />
It may be useful to prepare a [[#Embed the archzfs packages into an archiso|customized archiso]] with ZFS support builtin. For a much more detailed guide on installing Arch with ZFS as its root file system, see [[Installing Arch Linux on ZFS]].<br />
<br />
=== DKMS ===<br />
Users can use [[Dynamic Kernel Module Support|DKMS]] to rebuild the ZFS modules automatically with every kernel upgrade. <br />
<br />
Read the [[Mkinitcpio]] wiki entry for a general understanding of the initial ramdisk environment, and see [[Mkinitcpio#HOOKS]] for adding the dkms hook.<br />
<br />
Install {{AUR|zfs-dkms}} or {{AUR|zfs-dkms-git}} and apply the post-install instructions given by these packages.<br />
<br />
==Experimenting with ZFS ==<br />
<br />
Users wishing to experiment with ZFS on ''virtual block devices'' (known in ZFS terms as VDEVs) which can be simple files like {{ic|~/zfs0.img}} {{ic|~/zfs1.img}} {{ic|~/zfs2.img}} etc. with no possibility of real data loss are encouraged to see the [[Experimenting with ZFS]] article. Common tasks like building a RAIDZ array, purposefully corrupting data and recovering it, snapshotting datasets, etc. are covered.<br />
<br />
==Configuration==<br />
<br />
ZFS is considered a "zero administration" filesystem by its creators; therefore, configuring ZFS is straightforward. Configuration is done primarily with two commands: {{ic|zfs}} and {{ic|zpool}}.<br />
<br />
===Automatic Start===<br />
<br />
For ZFS to live by its "zero administration" namesake, the zfs daemon must be loaded at startup. A benefit to this is that it is not necessary to mount the zpool in {{ic|/etc/fstab}}; the zfs daemon can import and mount zfs pools automatically. The daemon mounts the zfs pools by reading the file {{ic|/etc/zfs/zpool.cache}}.<br />
<br />
For each pool you want automatically mounted by the zfs daemon, execute:<br />
# zpool set cachefile=/etc/zfs/zpool.cache <pool><br />
<br />
Enable the service so it is automatically started at boot time:<br />
<br />
# systemctl enable zfs.target<br />
<br />
To manually start the daemon:<br />
<br />
# systemctl start zfs.target<br />
<br />
==Create a storage pool==<br />
<br />
Use {{ic| # parted --list}} to see a list of all available drives. It is neither necessary nor recommended to partition the drives before creating the zfs filesystem.<br />
<br />
{{Note|If some or all of the devices have been used in a software RAID set, it is paramount to erase any old RAID configuration information first ([[Mdadm#Prepare_the_Devices]]).}}<br />
<br />
{{Warning|For Advanced Format disks with a 4KiB sector size, an ashift of 12 is recommended for best performance. Advanced Format disks emulate a sector size of 512 bytes for compatibility with legacy systems; this can cause ZFS to autodetect an ashift that is not ideal. Once the pool has been created, the only way to change the ashift option is to recreate the pool. Using an ashift of 12 would also decrease available capacity. See [http://zfsonlinux.org/faq.html#PerformanceConsideration 1.10 What’s going on with performance?], [http://zfsonlinux.org/faq.html#HowDoesZFSonLinuxHandleAdvacedFormatDrives 1.15 How does ZFS on Linux handle Advanced Format disks?], and [http://wiki.illumos.org/display/illumos/ZFS+and+Advanced+Format+disks ZFS and Advanced Format disks].}}<br />
<br />
Having identified the list of drives, it is now time to get the ids of the drives to add to the zpool. The [http://zfsonlinux.org/faq.html#WhatDevNamesShouldIUseWhenCreatingMyPool zfs on Linux developers recommend] using device ids when creating ZFS storage pools of less than 10 devices. To find the ids, simply:<br />
<br />
# ls -lh /dev/disk/by-id/<br />
<br />
The ids should look similar to the following:<br />
<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0JKRR -> ../../sdc<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0JTM1 -> ../../sde<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0KBP8 -> ../../sdd<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0KDGY -> ../../sdb<br />
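The partition suffixes make it easy to separate whole-disk ids from per-partition entries. As a sketch (the {{ic|list_whole_disk_ids}} helper is hypothetical, not part of any package), a listing like the one above can be filtered with plain awk:

```shell
# Hypothetical helper: read "ls -l /dev/disk/by-id"-style lines on stdin
# and print only the ids of whole disks, skipping the -partN symlinks
# that belong to individual partitions.
list_whole_disk_ids() {
    awk '/->/ && $(NF-2) !~ /-part[0-9]+$/ { print $(NF-2) }'
}

# Typical use on a live system:
#   ls -l /dev/disk/by-id/ | list_whole_disk_ids
```

Because the filter is plain text processing, it can be previewed against any saved listing before touching a real pool.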
<br />
{{Style|Missing references to [[Persistent block device naming]], it is useless to explain the differences (or even what they are) here.}}<br />
<br />
Disk labels and UUIDs can also be used for ZFS mounts by using [[GUID Partition Table|GPT]] partitions. ZFS drives have labels, but Linux is unable to read them at boot. Unlike [[Master Boot Record|MBR]] partitions, GPT partitions directly support both UUIDs and labels independent of the format inside the partition. Partitioning, rather than using the whole disk for ZFS, offers two additional advantages. The OS does not generate bogus partition numbers from whatever unpredictable data ZFS has written to the partition sector, and, if desired, you can easily over-provision SSDs and slightly over-provision spindle drives to ensure that different models with slightly different sector counts can zpool replace into your mirrors. This provides a great deal of organization and control over ZFS using readily available tools and techniques at almost zero cost.<br />
<br />
Use [[GUID Partition Table|gdisk]] to partition all or part of the drive as a single partition. gdisk does not automatically name partitions, so if partition labels are desired, use the gdisk command "c" to label the partitions. Some reasons to prefer labels over UUIDs are: labels are easy to control, labels can be titled to make the purpose of each disk in your arrangement readily apparent, and labels are shorter and easier to type. These are all advantages when the server is down and the heat is on. GPT partition labels have plenty of space and can store most international characters ([[wikipedia:GUID_Partition_Table#Partition_entries]]), allowing large data pools to be labeled in an organized fashion.<br />
<br />
Drives partitioned with GPT have labels and UUIDs that look like this:<br />
<br />
# ls -l /dev/disk/by-partlabel<br />
lrwxrwxrwx 1 root root 10 Apr 30 01:44 zfsdata1 -> ../../sdd1<br />
lrwxrwxrwx 1 root root 10 Apr 30 01:44 zfsdata2 -> ../../sdc1<br />
lrwxrwxrwx 1 root root 10 Apr 30 01:59 zfsl2arc -> ../../sda1<br />
<br />
# ls -l /dev/disk/by-partuuid<br />
lrwxrwxrwx 1 root root 10 Apr 30 01:44 148c462c-7819-431a-9aba-5bf42bb5a34e -> ../../sdd1<br />
lrwxrwxrwx 1 root root 10 Apr 30 01:59 4f95da30-b2fb-412b-9090-fc349993df56 -> ../../sda1<br />
lrwxrwxrwx 1 root root 10 Apr 30 01:44 e5ccef58-5adf-4094-81a7-3bac846a885f -> ../../sdc1<br />
<br />
Now, finally, create the ZFS pool:<br />
<br />
# zpool create -f -m <mount> <pool> raidz <ids><br />
<br />
* '''create''': subcommand to create the pool.<br />
<br />
* '''-f''': Force creating the pool. This is to overcome the "EFI label error". See [[#Does not contain an EFI label]].<br />
<br />
* '''-m''': The mount point of the pool. If this is not specified, then the pool will be mounted at {{ic|/<pool>}}.<br />
<br />
* '''pool''': This is the name of the pool.<br />
<br />
* '''raidz''': This is the type of virtual device that will be created from the pool of devices. Raidz is a special implementation of raid5. See [https://blogs.oracle.com/bonwick/entry/raid_z Jeff Bonwick's Blog -- RAID-Z] for more information about raidz.<br />
<br />
* '''ids''': The names of the drives or partitions to include in the pool. Get them from {{ic|/dev/disk/by-id}}.<br />
<br />
Here is an example for the full command:<br />
<br />
# zpool create -f -m /mnt/data bigdata \<br />
raidz \<br />
ata-ST3000DM001-9YN166_S1F0KDGY \<br />
ata-ST3000DM001-9YN166_S1F0JKRR \<br />
ata-ST3000DM001-9YN166_S1F0KBP8 \<br />
ata-ST3000DM001-9YN166_S1F0JTM1<br />
<br />
=== Advanced format disks ===<br />
<br />
In case Advanced Format disks are used, which have a native sector size of 4096 bytes instead of 512 bytes, the automated sector size detection algorithm of ZFS might detect 512 bytes because of the backward compatibility with legacy systems. This would result in degraded performance. To make sure the correct sector size is used, the {{ic|<nowiki>ashift=12</nowiki>}} option should be used (see the [http://zfsonlinux.org/faq.html#HowDoesZFSonLinuxHandleAdvacedFormatDrives ZFS on Linux FAQ]). The full command would in this case be:<br />
<br />
# zpool create -f -o ashift=12 -m /mnt/data bigdata \<br />
raidz \<br />
ata-ST3000DM001-9YN166_S1F0KDGY \<br />
ata-ST3000DM001-9YN166_S1F0JKRR \<br />
ata-ST3000DM001-9YN166_S1F0KBP8 \<br />
ata-ST3000DM001-9YN166_S1F0JTM1<br />
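The relationship between sector size and ashift is simply a base-2 logarithm (512 B → 9, 4096 B → 12). A small hypothetical helper makes the mapping explicit; on a live system the physical sector size can be checked with, e.g., {{ic|blockdev --getpbsz /dev/sdX}}:

```shell
# Hypothetical helper: ashift is log2 of the sector size in bytes,
# so 512 -> 9 and 4096 -> 12.
ashift_for() {
    size=$1
    shift_val=0
    while [ "$size" -gt 1 ]; do
        size=$((size / 2))
        shift_val=$((shift_val + 1))
    done
    echo "$shift_val"
}

ashift_for 4096   # -> 12
```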
<br />
=== Verifying pool creation ===<br />
<br />
If the command is successful, there will be no output. Using the {{ic|$ mount}} command will show that the pool is mounted. Using {{ic|# zpool status}} will show that the pool has been created.<br />
<br />
{{hc|# zpool status|<br />
pool: bigdata<br />
state: ONLINE<br />
scan: none requested<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
bigdata ONLINE 0 0 0<br />
-0 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KDGY-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0JKRR-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KBP8-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0JTM1-part1 ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
}}<br />
<br />
At this point it would be good to reboot the machine to ensure that the ZFS pool is mounted at boot. It is best to deal with all errors before transferring data.<br />
<br />
=== GRUB-compatible pool creation ===<br />
<br />
By default, ''zpool'' will enable all features on a pool. If {{ic|/boot}} resides on ZFS and you are using [[GRUB]], you must enable only those features supported by GRUB ({{ic|lz4_compress}} as of version 2.02.beta2). Otherwise GRUB will not be able to read the pool.<br />
<br />
# zpool create -d \<br />
-o feature@async_destroy=enabled \<br />
-o feature@empty_bpobj=enabled \<br />
-o feature@lz4_compress=enabled \<br />
-o feature@spacemap_histogram=enabled \<br />
-o feature@enabled_txg=enabled \<br />
<pool_name> <vdevs><br />
<br />
{{Tip|As of September 2015, GRUB's development tree supports {{ic|extensible_dataset}}, {{ic|hole_birth}}, {{ic|embedded_data}}, and {{ic|large_blocks}}, making it viable to use a pool with all features enabled, either at create time or by using {{ic|zpool upgrade <pool_name>}}, if {{AUR|grub-git}} is installed.}}<br />
<br />
=== Importing a pool created by id ===<br />
<br />
Eventually a pool may fail to mount automatically, and you will need to import it to bring it back. Take care to avoid the most obvious solution.<br />
<br />
# zpool import zfsdata      # Do not do this! Always use -d<br />
<br />
This will import your pools using {{ic|/dev/sd?}}, which will lead to problems the next time you rearrange your drives. This may be as simple as rebooting with a USB drive left in the machine, which harkens back to a time when PCs would not boot when a floppy disk was left in the machine. Adapt one of the following commands to import your pool so that pool imports retain the persistence they were created with.<br />
<br />
# zpool import -d /dev/disk/by-id zfsdata<br />
# zpool import -d /dev/disk/by-partlabel zfsdata<br />
# zpool import -d /dev/disk/by-partuuid zfsdata<br />
<br />
== Tuning ==<br />
<br />
=== General ===<br />
Many parameters are available for zfs file systems; you can view a full list with {{ic|zfs get all <pool>}}. Two common ones to adjust are '''atime''' and '''compression'''.<br />
<br />
Atime is enabled by default, but for most users it represents superfluous writes to the zpool; it can be disabled using the zfs command:<br />
# zfs set atime=off <pool><br />
<br />
As an alternative to turning off atime completely, '''relatime''' is available. This brings the default ext4/xfs atime semantics to ZFS, where access time is only updated if the modified time or changed time changes, or if the existing access time has not been updated within the past 24 hours. It is a compromise between atime=off and atime=on. This property ''only'' takes effect if '''atime''' is '''on''':<br />
# zfs set relatime=on <pool><br />
<br />
Compression is just that, transparent compression of data. ZFS supports a few different algorithms; presently lz4 is the default. '''gzip''' is also available for seldom-written yet highly compressible data; consult the man page for more details. Enable compression using the zfs command:<br />
# zfs set compression=on <pool><br />
<br />
Other options for zfs can be displayed again, using the zfs command:<br />
# zfs get all <pool><br />
<br />
=== Database ===<br />
ZFS, unlike most other file systems, has a variable record size, or what is commonly referred to as a block size. By default, the recordsize on ZFS is 128KiB, which means it will dynamically allocate blocks of any size from 512B to 128KiB depending on the size of the file being written. This can often help reduce fragmentation and speed file access, at the cost that ZFS must allocate new 128KiB blocks each time only a few bytes are changed.<br />
<br />
Most RDBMSes work in 8KiB-sized blocks by default. Although the block size is tunable for [[MySQL|MySQL/MariaDB]], [[PostgreSQL]], and [[Oracle]], all three of them use an 8KiB block size ''by default''. For both performance concerns and keeping snapshot differences to a minimum (for backup purposes, this is helpful), it is usually desirable to tune ZFS instead to accommodate the databases, using a command such as:<br />
# zfs set recordsize=8K <pool>/postgres<br />
<br />
These RDBMSes also tend to implement their own caching algorithm, often similar to ZFS's own ARC. In the interest of saving memory, it is best to simply disable ZFS's caching of the database's file data and let the database do its own job:<br />
# zfs set primarycache=metadata <pool>/postgres<br />
<br />
If your pool has no configured log devices, ZFS reserves space on the pool's data disks for its intent log (the ZIL). ZFS uses this for crash recovery, but databases are often syncing their data files to the file system on their own transaction commits anyway. The end result of this is that ZFS will be committing data '''twice''' to the data disks, and it can severely impact performance. You can tell ZFS to prefer to not use the ZIL, and in which case, data is only committed to the file system once. Setting this for non-database file systems, or for pools with configured log devices, can actually ''negatively'' impact the performance, so beware:<br />
# zfs set logbias=throughput <pool>/postgres<br />
<br />
These can also be done at file system creation time, for example:<br />
# zfs create -o recordsize=8K \<br />
-o primarycache=metadata \<br />
-o mountpoint=/var/lib/postgres \<br />
-o logbias=throughput \<br />
<pool>/postgres<br />
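As a back-of-the-envelope sketch of why the default recordsize hurts here: rewriting a single 8 KiB database page inside an untuned 128 KiB record means the whole record is copied on write. The figures below are plain arithmetic, purely illustrative:

```shell
# Rough write-amplification figure for one 8 KiB page update
# inside a default 128 KiB ZFS record.
page=8192
record=131072
echo "$((record / page))x the page size rewritten per update"
```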
<br />
Please note: these kinds of tuning parameters are ideal for specialized applications like RDBMSes. You can easily ''hurt'' ZFS's performance by setting these on a general-purpose file system such as your /home directory.<br />
<br />
=== /tmp ===<br />
If you would like to use ZFS to store your /tmp directory, which may be useful for storing arbitrarily-large sets of files or simply keeping your RAM free of idle data, you can generally improve performance of certain applications writing to /tmp by disabling file system sync. This causes ZFS to ignore an application's sync requests (e.g., with {{ic|fsync}} or {{ic|O_SYNC}}) and return immediately. While this has severe application-side data consistency consequences (never disable sync for a database!), files in /tmp are less likely to be important and affected. Please note this does ''not'' affect the integrity of ZFS itself, only the possibility that data an application expects on-disk may not have actually been written out following a crash.<br />
# zfs set sync=disabled <pool>/tmp<br />
<br />
Additionally, for security purposes, you may want to disable '''setuid''' and '''devices''' on the /tmp file system, which prevents some kinds of privilege-escalation attacks or the use of device nodes:<br />
# zfs set setuid=off <pool>/tmp<br />
# zfs set devices=off <pool>/tmp<br />
<br />
Combining all of these for a create command would be as follows:<br />
# zfs create -o setuid=off -o devices=off -o sync=disabled -o mountpoint=/tmp <pool>/tmp<br />
<br />
Please note, also, that if you want /tmp on ZFS, you will need to mask (disable) [[systemd]]'s automatic tmpfs-backed /tmp, else ZFS will be unable to mount your dataset at boot-time or import-time:<br />
# systemctl mask tmp.mount<br />
<br />
=== ZVOLs ===<br />
<br />
ZFS volumes (ZVOLs) can suffer from the same block size-related issues as RDBMSes, but it is worth noting that the default recordsize for ZVOLs is 8KiB already. If possible, it is best to align any partitions contained in a ZVOL to your recordsize (current versions of fdisk and gdisk by default automatically align at 1MiB segments, which works), and file system block sizes to the same size. Other than this, you might tweak the '''recordsize''' to accommodate the data inside the ZVOL as necessary (though 8KiB tends to be a good value for most file systems, even when using 4KiB blocks on that level).<br />
<br />
==== RAIDZ and Advanced Format physical disks ====<br />
<br />
Each block of a ZVOL gets its own parity, and if you have physical media with logical block sizes of 4096B, 8192B, or so on, the parity needs to be stored in whole physical blocks, and this can drastically increase the space requirements of a ZVOL, requiring 2× or more physical storage capacity than the ZVOL's logical capacity. Setting the '''recordsize''' to 16k or 32k can help reduce this footprint drastically.<br />
<br />
See [https://github.com/zfsonlinux/zfs/issues/1807 ZFS on Linux issue #1807] for details.<br />
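A rough way to see the effect is to estimate the sectors allocated per block: RAIDZ stores {{ic|p}} parity sectors per run of {{ic|(ndisks - p)}} data sectors and pads the total to a multiple of {{ic|(p + 1)}}. The helper below is a hypothetical approximation of that allocation logic, not an exact reimplementation of the allocator, and the exact numbers depend on the ZFS version:

```shell
# Approximate RAIDZ allocation (in sectors) for one block:
# data sectors + parity sectors, padded up to a multiple of (parity + 1).
raidz_alloc_sectors() {
    ndisks=$1; parity=$2; data=$3
    # One parity run per (ndisks - parity) data sectors, rounded up.
    stripes=$(( (data + ndisks - parity - 1) / (ndisks - parity) ))
    total=$(( data + parity * stripes ))
    pad=$(( (parity + 1) - total % (parity + 1) ))
    if [ "$pad" -eq $((parity + 1)) ]; then pad=0; fi
    echo $((total + pad))
}

# 8 KiB volblock (2 x 4 KiB sectors) on a 4-disk RAIDZ1:
raidz_alloc_sectors 4 1 2   # -> 4 sectors for 2 sectors of data (2x)
# 32 KiB volblock (8 sectors) on the same pool:
raidz_alloc_sectors 4 1 8   # -> 12 sectors for 8 sectors of data (1.5x)
```

The two figures match the text above: a small 8 KiB block costs roughly double its logical size, while a 32 KiB block shrinks the overhead to about 1.5×.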
<br />
== Usage ==<br />
<br />
Users can optionally create a dataset under the zpool as opposed to manually creating directories under the zpool. Datasets allow for an increased level of control (quotas for example) in addition to snapshots. To be able to create and mount a dataset, a directory of the same name must not pre-exist in the zpool. To create a dataset, use:<br />
<br />
# zfs create <nameofzpool>/<nameofdataset><br />
<br />
It is then possible to apply ZFS specific attributes to the dataset. For example, one could assign a quota limit to a specific directory within a dataset:<br />
<br />
# zfs set quota=20G <nameofzpool>/<nameofdataset>/<directory><br />
<br />
To see all the commands available in ZFS, use:<br />
<br />
$ man zfs<br />
<br />
or:<br />
<br />
$ man zpool<br />
<br />
=== Scrub ===<br />
<br />
ZFS pools should be scrubbed at least once a week. To scrub the pool:<br />
<br />
# zpool scrub <pool><br />
<br />
To do automatic scrubbing once a week, set the following line in the root crontab:<br />
<br />
{{hc|# crontab -e|<br />
...<br />
30 19 * * 5 zpool scrub <pool><br />
...<br />
}}<br />
<br />
Replace {{ic|<pool>}} with the name of the ZFS pool.<br />
<br />
=== Check zfs pool status ===<br />
<br />
To print a nice table with statistics about the ZFS pool, including read/write errors, use<br />
<br />
# zpool status -v<br />
<br />
=== Destroy a storage pool ===<br />
<br />
ZFS makes it easy to destroy a mounted storage pool, removing all metadata about the ZFS device. This command destroys any data contained in the pool:<br />
<br />
# zpool destroy <pool><br />
<br />
And now when checking the status:<br />
<br />
{{hc|# zpool status|no pools available}}<br />
<br />
To find the name of the pool, see [[#Check zfs pool status]].<br />
<br />
=== Export a storage pool ===<br />
<br />
If a storage pool is to be used on another system, it will first need to be exported. It is also necessary to export a pool if it has been imported from the archiso, as the hostid differs between the archiso and the booted system. The zpool command will refuse to import any storage pools that have not been exported. It is possible to force the import with the {{ic|-f}} argument, but this is considered bad form.<br />
<br />
Any attempt to import an un-exported storage pool will result in an error stating that the storage pool is in use by another system. This error can be produced at boot time, abruptly abandoning the system in the busybox console and requiring an archiso for an emergency repair, either by exporting the pool or by adding {{ic|zfs_force&#61;1}} to the kernel boot parameters (which is not ideal). See [[#On boot the zfs pool does not mount stating: "pool may be in use from other system"]]<br />
<br />
To export a pool:<br />
<br />
# zpool export bigdata<br />
<br />
=== Rename a Zpool ===<br />
Renaming a zpool that is already created is accomplished in 2 steps:<br />
<br />
# zpool export oldname<br />
# zpool import oldname newname<br />
<br />
=== Setting a Different Mount Point ===<br />
The mount point for a given zpool can be moved at will with one command:<br />
# zfs set mountpoint=/foo/bar poolname<br />
<br />
=== Swap volume ===<br />
<br />
ZFS does not support swap files, but users can use a ZFS volume (ZVOL) as swap. It is important to set the ZVOL block size to match the system page size, which can be obtained with the {{ic|getconf PAGESIZE}} command (the default on x86_64 is 4KiB). Another option useful for keeping the system running well in low-memory situations is not caching the ZVOL data.<br />
<br />
Create an 8GiB zfs volume:<br />
<br />
# zfs create -V 8G -b $(getconf PAGESIZE) \<br />
-o primarycache=metadata \<br />
-o com.sun:auto-snapshot=false <pool>/swap<br />
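The {{ic|-b $(getconf PAGESIZE)}} part above resolves to the kernel page size in bytes. As a purely illustrative sanity check (not required for the setup), the value can be printed and verified to be a power of two, as any block size must be:

```shell
# Query the kernel page size (4096 on typical x86_64 systems).
pagesize=$(getconf PAGESIZE)
echo "page size: $pagesize bytes"

# A power of two has no bits in common with (itself - 1).
if [ $((pagesize & (pagesize - 1))) -eq 0 ]; then
    echo "ok: power of two"
fi
```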
<br />
Prepare it as swap partition:<br />
<br />
# mkswap -f /dev/zvol/<pool>/swap<br />
# swapon /dev/zvol/<pool>/swap<br />
<br />
To make it permanent, edit {{ic|/etc/fstab}}. ZVOLs support discard, which can potentially help ZFS's block allocator and reduce fragmentation for all other datasets when/if swap is not full.<br />
<br />
Add a line to {{ic|/etc/fstab}}:<br />
<br />
/dev/zvol/<pool>/swap none swap discard 0 0<br />
<br />
{{Out of date|The hibernate hook is deprecated. Does the limitation still apply?}}<br />
Keep in mind that the hibernate hook must be loaded before filesystems, so using a ZVOL as swap will prevent the use of the hibernate function. If you need hibernate, keep a partition for it.<br />
<br />
Make sure to unmount all ZFS filesystems before rebooting the machine, otherwise any ZFS pools will refuse to be imported:<br />
# zfs umount -a<br />
<br />
=== Automatic snapshots ===<br />
<br />
==== ZFS Automatic Snapshot Service for Linux ====<br />
<br />
The {{AUR|zfs-auto-snapshot-git}} package from [[Arch User Repository|AUR]] provides a shell script to automate the management of snapshots, with each named by date and label (hourly, daily, etc), giving quick and convenient snapshotting of all ZFS datasets. The package also installs cron tasks for quarter-hourly, hourly, daily, weekly, and monthly snapshots. Optionally adjust the --keep parameter from the defaults depending on how far back the snapshots are to go (the monthly script by default keeps data for up to a year).<br />
<br />
To prevent a dataset from being snapshotted at all, set {{ic|1=com.sun:auto-snapshot=false}} on it. Likewise, control can be set per label: for example, if no monthlies are to be kept on a dataset, set {{ic|1=com.sun:auto-snapshot:monthly=false}}.<br />
<br />
==== ZFS Snapshot Manager ====<br />
<br />
The {{AUR|zfs-snap-manager}} package from [[Arch User Repository|AUR]] provides a python service that takes daily snapshots from a configurable set of ZFS datasets and cleans them out in a [[wikipedia:Backup_rotation_scheme#Grandfather-father-son|"Grandfather-father-son"]] scheme. It can be configured to e.g. keep 7 daily, 5 weekly, 3 monthly and 2 yearly snapshots. <br />
<br />
The package also supports configurable replication to other machines running ZFS by means of {{ic|zfs send}} and {{ic|zfs receive}}. If the destination machine runs this package as well, it could be configured to keep these replicated snapshots for a longer time. This allows a setup where a source machine has only a few daily snapshots locally stored, while on a remote storage server a much longer retention is available.<br />
<br />
==Troubleshooting==<br />
=== ZPool creation fails ===<br />
If pool creation fails with the following error, it can be fixed as described below.<br />
<br />
# the kernel failed to rescan the partition table: 16<br />
# cannot label 'sdc': try using parted(8) and then provide a specific slice: -1<br />
<br />
One reason this can occur is because [https://github.com/zfsonlinux/zfs/issues/2582 ZFS expects pool creation to take less than 1 second]. This is a reasonable assumption under ordinary conditions, but in many situations it may take longer. Each drive will need to be cleared again before another attempt can be made.<br />
<br />
# parted /dev/sda rm 1<br />
# dd if=/dev/zero of=/dev/sda bs=512 count=1<br />
# zpool labelclear /dev/sda<br />
<br />
A brute-force creation can be attempted over and over again, and with some luck the ZPool creation will take less than 1 second.<br />
One cause of creation slowdown can be slow burst reads and writes on a drive. By reading from the disk in parallel with ZPool creation, it may be possible to increase burst speeds.<br />
<br />
# dd if=/dev/sda of=/dev/null<br />
<br />
This can be done with multiple drives by saving the above command for each drive to a file on separate lines and running <br />
<br />
# cat $FILE | parallel<br />
<br />
Then run ZPool creation at the same time.<br />
<br />
=== ZFS is using too much RAM ===<br />
<br />
By default, ZFS caches file operations ([[wikipedia:Adaptive replacement cache|ARC]]) using up to two-thirds of available system memory on the host. Remember, ZFS is designed for enterprise class storage systems! To adjust the ARC size, add the following to the [[Kernel parameters]] list:<br />
<br />
zfs.zfs_arc_max=536870912 # (for 512MB)<br />
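The value is specified in bytes; a small hypothetical helper shows the conversion behind the 512MB figure above:

```shell
# Convert MiB to the byte value expected by zfs.zfs_arc_max:
# 512 MiB -> 536870912 bytes.
mib_to_bytes() {
    echo $(( $1 * 1024 * 1024 ))
}

mib_to_bytes 512   # -> 536870912
```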
<br />
For a more detailed description, as well as other configuration options, see [http://wiki.gentoo.org/wiki/ZFS#ARC gentoo-wiki:zfs#arc].<br />
<br />
=== Does not contain an EFI label ===<br />
<br />
The following error may occur when attempting to create a zfs filesystem:<br />
<br />
/dev/disk/by-id/<id> does not contain an EFI label but it may contain partition<br />
<br />
The way to overcome this is to use {{ic|-f}} with the zpool create command.<br />
<br />
=== No hostid found ===<br />
<br />
An error that occurs at boot with the following lines appearing before initscript output:<br />
<br />
ZFS: No hostid found on kernel command line or /etc/hostid.<br />
<br />
This warning occurs because the ZFS module does not have access to the spl hostid. There are two solutions for this. One is to place the spl hostid in the [[kernel parameters]] in the boot loader, for example by adding {{ic|1=spl.spl_hostid=0x00bab10c}}.<br />
<br />
The other solution is to make sure that there is a hostid in {{ic|/etc/hostid}} and then regenerate the initramfs image, which will copy the hostid into it.<br />
<br />
# mkinitcpio -p linux<br />
<br />
=== On boot the zfs pool does not mount stating: "pool may be in use from other system" ===<br />
<br />
==== Unexported pool ====<br />
<br />
If the new installation does not boot because the zpool cannot be imported, chroot into the installation and properly export the zpool. See [[#Emergency chroot repair with archzfs]].<br />
<br />
Once inside the chroot environment, load the ZFS module and force import the zpool,<br />
<br />
# zpool import -a -f<br />
<br />
now export the pool:<br />
<br />
# zpool export <pool><br />
<br />
To see the available pools, use:<br />
<br />
# zpool status<br />
<br />
It is necessary to export a pool because of the way ZFS uses the hostid to track the system the zpool was created on. The hostid is generated partly based on the network setup. During the installation in the archiso, the network configuration could be different, generating a different hostid than the one contained in the new installation. Once the zfs filesystem is exported and then re-imported in the new installation, the hostid is reset. See [http://osdir.com/ml/zfs-discuss/2011-06/msg00227.html Re: Howto zpool import/export automatically? - msg#00227].<br />
<br />
If ZFS complains about "pool may be in use" after every reboot, properly export pool as described above, and then rebuild ramdisk in normally booted system:<br />
<br />
# mkinitcpio -p linux<br />
<br />
==== Incorrect hostid ====<br />
<br />
Double check that the pool is properly exported. Exporting the zpool clears the hostid marking the ownership. So during the first boot the zpool should mount correctly. If it does not there is some other problem.<br />
<br />
Reboot again. If the zfs pool refuses to mount, it means the hostid is not yet correctly set in the early boot phase and it confuses zfs. Manually tell zfs the correct number; once the hostid is consistent across reboots, the zpool will mount correctly.<br />
<br />
Boot using zfs_force and write down the hostid. This one is just an example.<br />
% hostid<br />
0a0af0f8<br />
<br />
This number has to be added to the [[kernel parameters]] as {{ic|spl.spl_hostid&#61;0a0af0f8}}. Another solution is writing the hostid inside the initram image; see the [[Installing Arch Linux on ZFS#After the first boot|installation guide]] explanation about this.<br />
<br />
Users can always ignore the check adding {{ic|zfs_force&#61;1}} in the [[kernel parameters]], but it is not advisable as a permanent solution.<br />
<br />
=== Devices have different sector alignment ===<br />
<br />
Once a drive has become faulted, it should be replaced as soon as possible with an identical drive.<br />
<br />
# zpool replace bigdata ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-1CH166_W1F478BD -f<br />
<br />
but in this instance, the following error is produced:<br />
<br />
cannot replace ata-ST3000DM001-9YN166_S1F0KDGY with ata-ST3000DM001-1CH166_W1F478BD: devices have different sector alignment<br />
<br />
ZFS uses the ashift option to adjust for physical block size. When replacing the faulted disk, ZFS is attempting to use {{ic|<nowiki>ashift=12</nowiki>}}, but the faulted disk is using a different ashift (probably {{ic|<nowiki>ashift=9</nowiki>}}) and this causes the resulting error. <br />
<br />
For Advanced Format Disks with 4KB blocksize, an ashift of 12 is recommended for best performance. See [http://zfsonlinux.org/faq.html#PerformanceConsideration 1.10 What’s going on with performance?] and [http://wiki.illumos.org/display/illumos/ZFS+and+Advanced+Format+disks ZFS and Advanced Format disks].<br />
<br />
Use zdb to find the ashift of the zpool: {{ic|zdb | grep ashift}}, then use the {{ic|-o}} argument to set the ashift of the replacement drive:<br />
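Since {{ic|zdb}} may print one {{ic|ashift:}} line per top-level vdev, the grep can be tightened to show just the distinct values. The helper name below is made up for illustration and assumes zdb's usual {{ic|ashift: N}} output format:

```shell
# Hypothetical filter: extract distinct ashift values from zdb output.
zdb_ashift() {
    awk '$1 == "ashift:" { print $2 }' | sort -u
}

# Typical use on a live system:
#   zdb | zdb_ashift
```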
<br />
# zpool replace bigdata ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-1CH166_W1F478BD -o ashift=9 -f<br />
<br />
Check the zpool status for confirmation:<br />
<br />
{{hc|# zpool status -v|<br />
pool: bigdata<br />
state: DEGRADED<br />
status: One or more devices is currently being resilvered. The pool will<br />
continue to function, possibly in a degraded state.<br />
action: Wait for the resilver to complete.<br />
scan: resilver in progress since Mon Jun 16 11:16:28 2014<br />
10.3G scanned out of 5.90T at 81.7M/s, 20h59m to go<br />
2.57G resilvered, 0.17% done<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
bigdata DEGRADED 0 0 0<br />
raidz1-0 DEGRADED 0 0 0<br />
replacing-0 OFFLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KDGY OFFLINE 0 0 0<br />
ata-ST3000DM001-1CH166_W1F478BD ONLINE 0 0 0 (resilvering)<br />
ata-ST3000DM001-9YN166_S1F0JKRR ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KBP8 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0JTM1 ONLINE 0 0 0<br />
<br />
errors: No known data errors}}<br />
<br />
== Tips and tricks ==<br />
<br />
===Embed the archzfs packages into an archiso===<br />
<br />
Follow the [[Archiso]] steps for creating a fully functional Arch Linux live CD/DVD/USB image.<br />
<br />
Enable the [[Unofficial user repositories#demz-repo-core|demz-repo-core]] repository:<br />
<br />
{{hc|~/archlive/pacman.conf|<nowiki><br />
...<br />
[demz-repo-core]<br />
Server = http://demizerone.com/$repo/$arch<br />
</nowiki>}}<br />
<br />
Add the {{ic|archzfs-git}} group to the list of packages to be installed:<br />
<br />
{{hc|~/archlive/packages.both|<br />
...<br />
archzfs-git<br />
}}<br />
<br />
Complete the [[Archiso#Build_the_ISO|Build the ISO]] steps to build the ISO.<br />
<br />
=== Encryption in ZFS on Linux ===<br />
<br />
ZFS on Linux does not support encryption directly, but zpools can be created on dm-crypt block devices. Since the zpool is created on the plain-text abstraction, it is possible to have the data encrypted while retaining all the advantages of ZFS, like deduplication, compression, and data robustness.<br />
<br />
dm-crypt, possibly via LUKS, creates devices in {{ic|/dev/mapper}} with fixed names, so the {{ic|zpool create}} commands only need to point to those names. The idea is to configure the system to create the {{ic|/dev/mapper}} block devices first and then import the zpools from there.<br />
Since zpools can be created on multiple devices (raid, mirroring, striping, ...), it is important that all the devices are encrypted, otherwise the protection<br />
might be partially lost.<br />
<br />
For example, an encrypted zpool can be created using plain dm-crypt (without LUKS) with:<br />
<br />
# cryptsetup --hash=sha512 --cipher=twofish-xts-plain64 --offset=0 \<br />
--key-file=/dev/sdZ --key-size=512 open --type=plain /dev/sdX enc<br />
# zpool create zroot /dev/mapper/enc<br />
<br />
In the case of a root filesystem pool, the {{ic|mkinitcpio.conf}} HOOKS line must enable the keyboard for the password, create the devices, and load the pools. It will contain something like:<br />
<br />
HOOKS="... keyboard encrypt zfs ..."<br />
<br />
Since the {{ic|/dev/mapper/enc}} name is fixed, no import errors will occur.<br />
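For non-root pools, the mapping can instead be created at boot via [[Dm-crypt/System configuration#crypttab|crypttab]]. A sketch matching the plain-mode example above; the device paths {{ic|/dev/sdX}} and {{ic|/dev/sdZ}} are placeholders, and the option names assume systemd's crypttab plain-mode support:

```
# /etc/crypttab -- open the plain dm-crypt device before the pool is imported
# name  underlying-device  key-file  options
enc     /dev/sdX           /dev/sdZ  plain,hash=sha512,cipher=twofish-xts-plain64,size=512
```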
<br />
Creating encrypted zpools works fine. But if you need encrypted directories, for example to protect your users' homes, ZFS loses some functionality.<br />
<br />
ZFS will see the encrypted data, not the plain-text abstraction, so compression and deduplication will not work. The reason is that encrypted data always has high entropy, making compression ineffective, and even the same input produces different output (thanks to salting), making deduplication impossible.<br />
To reduce the unnecessary overhead, it is possible to create a sub-filesystem for each encrypted directory and use [[eCryptfs]] on it.<br />
<br />
For example, to have an encrypted home (the two passwords, for encryption and for login, must be the same):<br />
# zfs create -o compression=off \<br />
-o dedup=off \<br />
-o mountpoint=/home/<username> \<br />
<zpool>/<username><br />
# useradd -m <username><br />
# passwd <username><br />
# ecryptfs-migrate-home -u <username><br />
<log in user and complete the procedure with ecryptfs-unwrap-passphrase><br />
<br />
=== Emergency chroot repair with archzfs ===<br />
<br />
To get into the ZFS filesystem from a live system for maintenance, there are two options:<br />
<br />
# Build custom archiso with ZFS as described in [[#Embed the archzfs packages into an archiso]].<br />
# Boot the latest official archiso and bring up the network. Then enable [[Unofficial_user_repositories#demz-repo-archiso|demz-repo-archiso]] repository inside the live system as usual, sync the pacman package database and install the ''archzfs-git'' package.<br />
<br />
To start the recovery, load the ZFS kernel modules:<br />
<br />
# modprobe zfs<br />
<br />
Import the pool:<br />
<br />
# zpool import -a -R /mnt<br />
<br />
Mount the boot partitions (if any):<br />
<br />
# mount /dev/sda2 /mnt/boot<br />
# mount /dev/sda1 /mnt/boot/efi<br />
<br />
Chroot into the ZFS filesystem:<br />
<br />
# arch-chroot /mnt /bin/bash<br />
<br />
Check the kernel version:<br />
<br />
# pacman -Qi linux<br />
# uname -r<br />
<br />
uname will show the kernel version of the archiso. If the two versions differ, run depmod (in the chroot) with the correct kernel version of the chroot installation:<br />
<br />
# depmod -a 3.6.9-1-ARCH (version gathered from pacman -Qi linux)<br />
<br />
This will generate the correct module dependency information for the kernel version installed in the chroot, so its modules can be loaded.<br />
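The version string for depmod can be extracted from the pacman output mechanically. A sketch using a hard-coded sample of {{ic|pacman -Qi linux}} output so it is self-contained (on a real system, pipe the live output into the same awk); the {{ic|-ARCH}} suffix assumes the stock Arch kernel:

```shell
# Sample of `pacman -Qi linux` output; the Version field plus the -ARCH
# suffix gives the directory name under /lib/modules that depmod expects.
pacman_output='Name            : linux
Version         : 3.6.9-1'
ver=$(printf '%s\n' "$pacman_output" | awk -F' *: *' '/^Version/ { print $2 }')
echo "depmod -a ${ver}-ARCH"
```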
<br />
Regenerate the ramdisk:<br />
<br />
# mkinitcpio -p linux<br />
<br />
There should be no errors.<br />
<br />
=== Automated build script ===<br />
<br />
{{Deletion|The wiki isn't the place to maintain massive script dumps}}<br />
<br />
The following script may be used to build ZFS and its dependencies automatically.<br />
<br />
The build order of these packages is important due to nested dependencies. One can automate the entire process, including downloading the packages, with the following shell script. The only requirements for it to work are:<br />
*{{pkg|sudo}} - Note that your user needs sudo rights to {{ic|/usr/bin/clean-chroot-manager}} for the script below to work.<br />
*{{pkg|rsync}} - Needed for moving over the build files.<br />
*{{AUR|cower}} - Needed to grab sources from the AUR.<br />
*{{AUR|clean-chroot-manager}} - Needed to build in a clean chroot and add packages to a local repo.<br />
<br />
Be sure to add the local repo to {{ic|/etc/pacman.conf}} like so:<br />
{{hc|$ tail /etc/pacman.conf|<nowiki><br />
[chroot_local]<br />
SigLevel = Optional TrustAll<br />
Server = file:///path/to/localrepo/defined/below<br />
</nowiki>}}<br />
<br />
{{hc|~/bin/build_zfs|<nowiki><br />
#!/bin/bash<br />
#<br />
# ZFS Builder by graysky<br />
#<br />
<br />
# define the temp space for building here<br />
WORK='/scratch'<br />
<br />
# create this dir and chown it to your user<br />
# this is the local repo which will store your zfs packages<br />
REPO='/var/repo'<br />
<br />
# Add the following entry to /etc/pacman.conf for the local repo<br />
#[chroot_local]<br />
#SigLevel = Optional TrustAll<br />
#Server = file:///path/to/localrepo/defined/above<br />
<br />
for i in rsync cower clean-chroot-manager; do<br />
command -v $i >/dev/null 2>&1 || {<br />
echo "I require $i but it's not installed. Aborting." >&2<br />
exit 1; }<br />
done<br />
<br />
[[ -f ~/.config/clean-chroot-manager.conf ]] &&<br />
. ~/.config/clean-chroot-manager.conf || exit 1<br />
<br />
[[ ! -d "$REPO" ]] &&<br />
echo "Make the dir for your local repo and chown it: $REPO" && exit 1<br />
<br />
[[ ! -d "$WORK" ]] &&<br />
echo "Make a work directory: $WORK" && exit 1<br />
<br />
cd "$WORK"<br />
for i in spl-utils-git spl-git zfs-utils-git zfs-git; do<br />
[[ -d $i ]] && rm -rf $i<br />
cower -d $i<br />
done<br />
<br />
for i in spl-utils-git spl-git zfs-utils-git zfs-git; do<br />
cd "$WORK/$i"<br />
sudo ccm s<br />
done<br />
<br />
rsync -auvxP "$CHROOTPATH/root/repo/" "$REPO"<br />
</nowiki>}}<br />
<br />
When ZFS is used as a data drive and boot support is not needed, these two shell scripts will build and remove all ZFS packages. The only requirements are {{pkg|sudo}}, {{pkg|git}}, and answering a couple of prompts. On each kernel upgrade, remove ZFS with {{ic|zfsun.sh}}, update, and reinstall ZFS with {{ic|zfsbuild.sh}}.<br />
{{hc|~/build/zfspkg/zfsbuild.sh|<nowiki><br />
#!/usr/bin/bash<br />
#<br />
# 2015-07-17 zfsbuild.sh by severach for AUR 4<br />
# 2015-08-08 AUR4 -> AUR, added git pull, safer AUR 3.5 update folder<br />
# Adapted from ZFS Builder by graysky<br />
# place this in a user home folder.<br />
# I recommend ~/build/zfspkg/. Do not name the folder 'zfs'.<br />
<br />
# 1 to add conflicts=(linux>,linux<) which offers automatic removal on upgrade.<br />
# Manual removal with zfsun.sh is preferred.<br />
_opt_AutoRemove=0<br />
_opt_ZFSPool='zfsdata'<br />
#_opt_ZFSbyid='/dev/disk/by-partlabel'<br />
_opt_ZFSbyid='/dev/disk/by-id'<br />
# '' for manual answer to prompts. --noconfirm to go ahead and do it all.<br />
_opt_AutoInstall='' #--noconfirm'<br />
<br />
# Multiprocessor compile enabled!<br />
# Huuuuuuge performance improvement. Watch in htop.<br />
# An E3-1245 can peg all 8 processors.<br />
#1 [|||||||||||||||||||||||||96.2%]<br />
#2 [|||||||||||||||||||||||||97.6%]<br />
#3 [|||||||||||||||||||||||||95.7%]<br />
#4 [|||||||||||||||||||||||||96.7%]<br />
#5 [|||||||||||||||||||||||||95.7%]<br />
#6 [|||||||||||||||||||||||||97.1%]<br />
#7 [|||||||||||||||||||||||||98.6%]<br />
#8 [|||||||||||||||||||||||||96.2%]<br />
#Mem[||| 596/31974MB]<br />
#Swp[ 0/0MB]<br />
<br />
set -u<br />
set -e<br />
<br />
if [ "${EUID}" -eq 0 ]; then<br />
echo "This script must NOT be run as root"<br />
sleep 1<br />
exit 1<br />
fi<br />
<br />
for i in 'sudo' 'git'; do<br />
command -v "${i}" >/dev/null 2>&1 || {<br />
echo "I require ${i} but it's not installed. Aborting." 1>&2<br />
exit 1; }<br />
done<br />
<br />
cd "$(dirname "$0")"<br />
OPWD="$(pwd)"<br />
for cwpackage in 'spl-utils-git' 'spl-git' 'zfs-utils-git' 'zfs-git'; do<br />
#cower -dc -f "${cwpackage}"<br />
if [ -d "${cwpackage}" -a ! -d "${cwpackage}/.git" ]; then<br />
echo "${cwpackage}: Convert AUR3.5 to AUR4"<br />
cd "${cwpackage}"<br />
git clone "https://aur.archlinux.org/${cwpackage}.git/" "${cwpackage}.temp"<br />
cd "${cwpackage}.temp"<br />
mv '.git' ..<br />
cd ..<br />
rm -rf "${cwpackage}.temp"<br />
cd ..<br />
fi<br />
if [ -d "${cwpackage}" ]; then<br />
echo "${cwpackage}: Update local copy"<br />
cd "${cwpackage}"<br />
git fetch<br />
git reset --hard 'origin/master'<br />
git pull # this line was missed in previous versions<br />
else<br />
echo "${cwpackage}: Clone to new folder"<br />
git clone "https://aur.archlinux.org/${cwpackage}.git/" <br />
cd "${cwpackage}"<br />
fi<br />
sed -i -e 's:^\s\+make$:'"& -s -j $(nproc):g" 'PKGBUILD'<br />
if [ "${_opt_AutoRemove}" -ne 0 ]; then<br />
sed -i -e 's:^conflicts=(.*$: &\n_kernelversionsmall="`uname -r | cut -d - -f 1`"\nconflicts+=("linux>${_kernelversionsmall}" "linux<${_kernelversionsmall}")\n:g' 'PKGBUILD'<br />
fi<br />
if ! makepkg -sCcfi ${_opt_AutoInstall}; then<br />
cd "${OPWD}"<br />
break<br />
fi<br />
#rm -rf 'zfs' 'spl'<br />
cd "${OPWD}"<br />
done<br />
if command -v 'fsck.zfs' >/dev/null 2>&1; then # a plain `which` would abort here under set -e<br />
  sudo mkinitcpio -p 'linux' # Stores fsck.zfs into the initrd image. I don't know why it would be needed.<br />
fi<br />
#sudo zpool import "${_opt_ZFSPool}" # Don't do this or zpool will mount via /dev/sd?, which you won't like!<br />
sudo zpool import -d "${_opt_ZFSbyid}" "${_opt_ZFSPool}"<br />
sudo zpool status<br />
sudo -k<br />
</nowiki>}}<br />
<br />
{{hc|~/build/zfspkg/zfsun.sh|<nowiki><br />
#!/usr/bin/bash<br />
<br />
# 2015-07-17 zfs uninstaller by severach for AUR4<br />
# Removing ZFS forgets to unmount the pools, which might be desirable if you're<br />
# running ZFS on the root file system.<br />
<br />
_opt_ZFSFolder='/home/zfsdata/foo'<br />
_opt_ZFSPool='zfsdata'<br />
<br />
if [ "${EUID}" -ne 0 ]; then<br />
echo 'Must be root, try sudo !!'<br />
sleep 1<br />
exit 1<br />
fi<br />
<br />
systemctl stop 'smbd.service' # Active shares can lock the mount. You might want to stop nfs too.<br />
zpool export "${_opt_ZFSPool}" # zpool import no longer works with drives that were zfs umount<br />
if [ ! -d "${_opt_ZFSFolder}" ]; then<br />
echo "${_opt_ZFSPool} exported"<br />
pacman -Rc 'spl-utils-git' # This works even if some are already removed.<br />
#pacman -R 'zfs-utils-git' 'spl-git' 'spl-utils-git' 'zfs-git'<br />
else<br />
echo "ZFS didn't unmount"<br />
fi<br />
systemctl start 'smbd.service'<br />
</nowiki>}}<br />
<br />
=== Bindmount ===<br />
In this example, a bind mount from /mnt/zfspool to /srv/nfs4/music is created. The configuration ensures that the zfs pool is ready before the bind mount is created.<br />
<br />
==== fstab ====<br />
See [http://www.freedesktop.org/software/systemd/man/systemd.mount.html systemd.mount] for more information on how systemd converts fstab into mount unit files with [http://www.freedesktop.org/software/systemd/man/systemd-fstab-generator.html systemd-fstab-generator].<br />
<br />
{{hc|/etc/fstab|<nowiki><br />
/mnt/zfspool /srv/nfs4/music none bind,defaults,x-systemd.requires=zfs-mount.service 0 0<br />
</nowiki>}}<br />
<br />
==== systemd mount unit ====<br />
<br />
If it is not possible to bind mount a directory residing on ZFS onto another directory using fstab, because fstab is read before the ZFS pool is ready, a systemd mount unit can be used for the bind mount instead. The name of the mount unit must match the directory given after "Where", with slashes replaced by dashes. See [http://utcc.utoronto.ca/~cks/space/blog/linux/SystemdAndBindMounts] and [http://utcc.utoronto.ca/~cks/space/blog/linux/SystemdBindMountUnits] for more details.<br />
{{hc|srv-nfs4-music.mount|<nowiki><br />
[Mount]<br />
What=/mnt/zfspool<br />
Where=/srv/nfs4/music<br />
Type=none<br />
Options=bind<br />
<br />
[Unit]<br />
DefaultDependencies=no<br />
Conflicts=umount.target<br />
Before=local-fs.target umount.target<br />
After=zfs-mount.service<br />
Requires=zfs-mount.service<br />
ConditionPathIsDirectory=/mnt/zfspool<br />
<br />
[Install]<br />
WantedBy=local-fs.target<br />
</nowiki>}}<br />
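The unit file name can be derived from the mount point mechanically: strip the leading slash and turn the remaining slashes into dashes (for paths without special characters; on a live system, {{ic|systemd-escape -p --suffix&#61;mount /srv/nfs4/music}} performs the full conversion). A plain-shell sketch:

```shell
# Convert a mount point path into the corresponding mount unit name:
# drop the leading '/', replace every remaining '/' with '-', add '.mount'
path=/srv/nfs4/music
unit="$(printf '%s' "$path" | sed 's:^/::; s:/:-:g').mount"
echo "$unit"   # srv-nfs4-music.mount
```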
<br />
== See also ==<br />
<br />
* [[Installing Arch Linux on ZFS]]<br />
* [http://zfsonlinux.org/ ZFS on Linux]<br />
* [http://zfsonlinux.org/faq.html ZFS on Linux FAQ]<br />
* [http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/filesystems-zfs.html FreeBSD Handbook -- The Z File System]<br />
* [http://docs.oracle.com/cd/E19253-01/819-5461/index.html Oracle Solaris ZFS Administration Guide]<br />
* [http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide Solaris Internals -- ZFS Troubleshooting Guide]<br />
* [http://royal.pingdom.com/2013/06/04/zfs-backup/ Pingdom details how it backs up 5TB of MySQL data every day with ZFS]<br />
<br />
; Aaron Toponce has authored a 17-part blog on ZFS which is an excellent read.<br />
# [https://pthree.org/2012/12/04/zfs-administration-part-i-vdevs/ VDEVs]<br />
# [https://pthree.org/2012/12/05/zfs-administration-part-ii-raidz/ RAIDZ Levels]<br />
# [https://pthree.org/2012/12/06/zfs-administration-part-iii-the-zfs-intent-log/ The ZFS Intent Log]<br />
# [https://pthree.org/2012/12/07/zfs-administration-part-iv-the-adjustable-replacement-cache/ The ARC]<br />
# [https://pthree.org/2012/12/10/zfs-administration-part-v-exporting-and-importing-zpools/ Import/export zpools]<br />
# [https://pthree.org/2012/12/11/zfs-administration-part-vi-scrub-and-resilver/ Scrub and Resilver]<br />
# [https://pthree.org/2012/12/12/zfs-administration-part-vii-zpool-properties/ Zpool Properties]<br />
# [https://pthree.org/2012/12/13/zfs-administration-part-viii-zpool-best-practices-and-caveats/ Zpool Best Practices]<br />
# [https://pthree.org/2012/12/14/zfs-administration-part-ix-copy-on-write/ Copy on Write]<br />
# [https://pthree.org/2012/12/17/zfs-administration-part-x-creating-filesystems/ Creating Filesystems]<br />
# [https://pthree.org/2012/12/18/zfs-administration-part-xi-compression-and-deduplication/ Compression and Deduplication]<br />
# [https://pthree.org/2012/12/19/zfs-administration-part-xii-snapshots-and-clones/ Snapshots and Clones]<br />
# [https://pthree.org/2012/12/20/zfs-administration-part-xiii-sending-and-receiving-filesystems/ Send/receive Filesystems]<br />
# [https://pthree.org/2012/12/21/zfs-administration-part-xiv-zvols/ ZVOLs]<br />
# [https://pthree.org/2012/12/31/zfs-administration-part-xv-iscsi-nfs-and-samba/ iSCSI, NFS, and Samba]<br />
# [https://pthree.org/2013/01/02/zfs-administration-part-xvi-getting-and-setting-properties/ Get/Set Properties]<br />
# [https://pthree.org/2013/01/03/zfs-administration-part-xvii-best-practices-and-caveats/ ZFS Best Practices]</div>
<hr />
<div>[[Category:File systems]]<br />
[[ja:ZFS]]<br />
{{Related articles start}}<br />
{{Related|File systems}}<br />
{{Related|Experimenting with ZFS}}<br />
{{Related|Installing Arch Linux on ZFS}}<br />
{{Related|ZFS on FUSE}}<br />
{{Related articles end}}<br />
<br />
[[Wikipedia:ZFS|ZFS]] is an advanced filesystem created by [[Wikipedia:Sun Microsystems|Sun Microsystems]] (now owned by Oracle) and released for OpenSolaris in November 2005. <br />
<br />
Features of ZFS include: pooled storage (integrated volume management – zpool), [[Wikipedia:Copy-on-write|copy-on-write]], [[Wikipedia:Snapshot (computer storage)|snapshots]], data integrity verification and automatic repair (scrubbing), [[Wikipedia:RAID-Z|RAID-Z]], a maximum [[Wikipedia:Exabyte|16 exabyte]] file size, and a maximum 256 quadrillion [[Wikipedia:Zettabyte|zettabytes]] of storage with no limit on the number of filesystems (datasets) or files[http://docs.oracle.com/cd/E19253-01/819-5461/zfsover-2/index.html]. ZFS is licensed under the [[Wikipedia:CDDL|Common Development and Distribution License]] (CDDL).<br />
<br />
Described as [http://web.archive.org/web/20060428092023/http://www.sun.com/2004-0914/feature/ "The last word in filesystems"], ZFS is stable, fast, secure, and future-proof. Being licensed under the CDDL, and thus incompatible with the GPL, ZFS cannot be distributed along with the Linux kernel. This requirement, however, does not prevent a native Linux kernel module from being developed and distributed by a third party, as is the case with [http://zfsonlinux.org/ zfsonlinux.org] (ZOL).<br />
<br />
ZOL is a project funded by the [https://www.llnl.gov/ Lawrence Livermore National Laboratory] to develop a native Linux kernel module for its massive storage requirements and super computers.<br />
<br />
==Installation==<br />
=== General ===<br />
Install {{AUR|zfs-git}} from the [[Arch User Repository]] or the [[Unofficial user repositories#demz-repo-core|demz-repo-core]] repository. This package has {{AUR|zfs-utils-git}} and {{AUR|spl-git}} as a dependency, which in turn has {{AUR|spl-utils-git}} as dependency. SPL (Solaris Porting Layer) is a Linux Kernel module implementing Solaris APIs for ZFS compatibility.<br />
<br />
{{Note|1=The zfs-git package replaces the original zfs package from [[AUR]]. ZFSonLinux.org is slow to make stable releases and kernel API changes broke stable builds of ZFSonLinux for Arch. Changes submitted to the master branch of the ZFSonLinux repository are regression tested and therefore considered stable.}}<br />
<br />
For users that desire ZFS builds from stable releases, {{AUR|zfs-lts}} is available from the [[Arch User Repository]] or the [[Unofficial user repositories#demz-repo-core|demz-repo-core]] repository. A script to build ZFS and its dependencies automatically can be found [[#Automated build script|here]].<br />
<br />
{{warning|The ZFS and SPL kernel modules are tied to a specific kernel version. It would not be possible to apply any kernel updates until updated packages are uploaded to AUR or the [[Unofficial user repositories#demz-repo-core|demz-repo-core]] repository.}}<br />
<br />
Test the installation by issuing {{ic|zpool status}} on the command line. If an "insmod" error is produced, try {{ic|depmod -a}}.<br />
<br />
=== Root on ZFS ===<br />
<br />
When performing an Arch install on ZFS, {{AUR|zfs-git}} and its dependencies can be installed in the archiso environment as outlined in the previous section.<br />
<br />
It may be useful to prepare a [[#Embed the archzfs packages into an archiso|customized archiso]] with ZFS support builtin. For a much more detailed guide on installing Arch with ZFS as its root file system, see [[Installing Arch Linux on ZFS]].<br />
<br />
=== DKMS ===<br />
Users can make use of [[Dynamic Kernel Module Support]] (DKMS) to rebuild the ZFS modules automatically with every kernel upgrade. <br />
<br />
Read the [[Mkinitcpio]] wiki entry for a general understanding of the initial ramdisk environment, and see [[Mkinitcpio#HOOKS]] for adding the dkms hook.<br />
<br />
Install {{AUR|zfs-dkms}} or {{AUR|zfs-dkms-git}} and apply the post-install instructions given by these packages.<br />
<br />
==Experimenting with ZFS ==<br />
<br />
Users wishing to experiment with ZFS on ''virtual block devices'' (known in ZFS terms as VDEVs) which can be simple files like {{ic|~/zfs0.img}} {{ic|~/zfs1.img}} {{ic|~/zfs2.img}} etc. with no possibility of real data loss are encouraged to see the [[Experimenting with ZFS]] article. Common tasks like building a RAIDZ array, purposefully corrupting data and recovering it, snapshotting datasets, etc. are covered.<br />
<br />
==Configuration==<br />
<br />
ZFS is considered a "zero administration" filesystem by its creators; therefore, configuring ZFS is very straightforward. Configuration is done primarily with two commands: {{ic|zfs}} and {{ic|zpool}}.<br />
<br />
===Automatic Start===<br />
<br />
For ZFS to live by its "zero administration" namesake, the zfs daemon must be loaded at startup. A benefit to this is that it is not necessary to mount the zpool in {{ic|/etc/fstab}}; the zfs daemon can import and mount zfs pools automatically. The daemon mounts the zfs pools by reading the file {{ic|/etc/zfs/zpool.cache}}.<br />
<br />
For each pool you want automatically mounted by the zfs daemon execute:<br />
# zpool set cachefile=/etc/zfs/zpool.cache <pool><br />
<br />
Enable the service so it is automatically started at boot time:<br />
<br />
# systemctl enable zfs.target<br />
<br />
To manually start the daemon:<br />
<br />
# systemctl start zfs.target<br />
<br />
==Create a storage pool==<br />
<br />
Use {{ic|# parted --list}} to see a list of all available drives. It is neither necessary nor recommended to partition the drives before creating the zfs filesystem.<br />
<br />
{{Note|If some or all devices have been used in a software RAID set, it is paramount to erase any old RAID configuration information. ([[Mdadm#Prepare_the_Devices]]) }}<br />
<br />
{{Warning|For Advanced Format Disks with 4KB sector size, an ashift of 12 is recommended for best performance. Advanced Format disks emulate a sector size of 512 bytes for compatibility with legacy systems, which causes ZFS to sometimes use a non-ideal ashift value. Once the pool has been created, the only way to change the ashift option is to recreate the pool. Using an ashift of 12 would also decrease available capacity. See [http://zfsonlinux.org/faq.html#PerformanceConsideration 1.10 What’s going on with performance?], [http://zfsonlinux.org/faq.html#HowDoesZFSonLinuxHandleAdvacedFormatDrives 1.15 How does ZFS on Linux handle Advanced Format disks?], and [http://wiki.illumos.org/display/illumos/ZFS+and+Advanced+Format+disks ZFS and Advanced Format disks].}}<br />
<br />
Having identified the list of drives, it is now time to get the IDs of the drives to add to the zpool. The [http://zfsonlinux.org/faq.html#WhatDevNamesShouldIUseWhenCreatingMyPool ZFS on Linux developers recommend] using device IDs when creating ZFS storage pools of fewer than 10 devices. To find the IDs, simply:<br />
<br />
# ls -lh /dev/disk/by-id/<br />
<br />
The ids should look similar to the following:<br />
<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0JKRR -> ../../sdc<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0JTM1 -> ../../sde<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0KBP8 -> ../../sdd<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0KDGY -> ../../sdb<br />
<br />
{{Style|Missing references to [[Persistent block device naming]], it is useless to explain the differences (or even what they are) here.}}<br />
<br />
Disk labels and UUIDs can also be used for ZFS mounts by using [[GUID Partition Table|GPT]] partitions. ZFS drives have labels, but Linux is unable to read them at boot. Unlike [[Master Boot Record|MBR]] partitions, GPT partitions directly support both UUIDs and labels independent of the format inside the partition. Partitioning rather than using the whole disk for ZFS offers two additional advantages: the OS does not generate bogus partition numbers from whatever unpredictable data ZFS has written to the partition sector, and, if desired, you can easily over-provision SSDs (and slightly over-provision spindle drives) to ensure that different models with slightly different sector counts can {{ic|zpool replace}} into your mirrors. This provides a lot of organization and control over ZFS using readily available tools and techniques at almost zero cost.<br />
<br />
Use [[GUID Partition Table|gdisk]] to partition all or part of the drive as a single partition. gdisk does not automatically name partitions, so if partition labels are desired, use gdisk command "c" to label the partitions. Some reasons to prefer labels over UUIDs: labels are easy to control, labels can be titled to make the purpose of each disk in your arrangement readily apparent, and labels are shorter and easier to type. These are all advantages when the server is down and the heat is on. GPT partition labels have plenty of space and can store most international characters ([[wikipedia:GUID_Partition_Table#Partition_entries]]), allowing large data pools to be labeled in an organized fashion.<br />
<br />
Drives partitioned with GPT have labels and UUIDs that look like this: <br />
<br />
# ls -l /dev/disk/by-partlabel<br />
# lrwxrwxrwx 1 root root 10 Apr 30 01:44 zfsdata1 -> ../../sdd1<br />
# lrwxrwxrwx 1 root root 10 Apr 30 01:44 zfsdata2 -> ../../sdc1<br />
# lrwxrwxrwx 1 root root 10 Apr 30 01:59 zfsl2arc -> ../../sda1<br />
<br />
# ls -l /dev/disk/by-partuuid<br />
# lrwxrwxrwx 1 root root 10 Apr 30 01:44 148c462c-7819-431a-9aba-5bf42bb5a34e -> ../../sdd1<br />
# lrwxrwxrwx 1 root root 10 Apr 30 01:59 4f95da30-b2fb-412b-9090-fc349993df56 -> ../../sda1<br />
# lrwxrwxrwx 1 root root 10 Apr 30 01:44 e5ccef58-5adf-4094-81a7-3bac846a885f -> ../../sdc1<br />
<br />
Now, finally, create the ZFS pool:<br />
<br />
# zpool create -f -m <mount> <pool> raidz <ids><br />
<br />
* '''create''': subcommand to create the pool.<br />
<br />
* '''-f''': Force creating the pool. This is to overcome the "EFI label error". See [[#Does not contain an EFI label]].<br />
<br />
* '''-m''': The mount point of the pool. If this is not specified, then the pool will be mounted to {{ic|/<pool>}}.<br />
<br />
* '''pool''': This is the name of the pool.<br />
<br />
* '''raidz''': This is the type of virtual device that will be created from the pool of devices. Raidz is a special implementation of raid5. See [https://blogs.oracle.com/bonwick/entry/raid_z Jeff Bonwick's Blog -- RAID-Z] for more information about raidz.<br />
<br />
* '''ids''': The names of the drives or partitions to include in the pool. Get them from {{ic|/dev/disk/by-id}}.<br />
<br />
Here is an example for the full command:<br />
<br />
# zpool create -f -m /mnt/data bigdata \<br />
raidz \<br />
ata-ST3000DM001-9YN166_S1F0KDGY \<br />
ata-ST3000DM001-9YN166_S1F0JKRR \<br />
ata-ST3000DM001-9YN166_S1F0KBP8 \<br />
ata-ST3000DM001-9YN166_S1F0JTM1<br />
<br />
=== Advanced format disks ===<br />
<br />
In case Advanced Format disks are used, which have a native sector size of 4096 bytes instead of 512 bytes, the automated sector size detection algorithm of ZFS might detect 512 bytes because of the backwards compatibility with legacy systems. This would result in degraded performance. To make sure the correct sector size is used, the {{ic|<nowiki>ashift=12</nowiki>}} option should be used (see the [http://zfsonlinux.org/faq.html#HowDoesZFSonLinuxHandleAdvacedFormatDrives ZFS on Linux FAQ]). The full command would in this case be:<br />
<br />
# zpool create -f -o ashift=12 -m /mnt/data bigdata \<br />
raidz \<br />
ata-ST3000DM001-9YN166_S1F0KDGY \<br />
ata-ST3000DM001-9YN166_S1F0JKRR \<br />
ata-ST3000DM001-9YN166_S1F0KBP8 \<br />
ata-ST3000DM001-9YN166_S1F0JTM1<br />
<br />
=== Verifying pool creation ===<br />
<br />
If the command is successful, there will be no output. Using the {{ic|$ mount}} command will show that the pool is mounted. Using {{ic|# zpool status}} will show that the pool has been created.<br />
<br />
{{hc|# zpool status|<br />
pool: bigdata<br />
state: ONLINE<br />
scan: none requested<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
bigdata ONLINE 0 0 0<br />
-0 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KDGY-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0JKRR-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KBP8-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0JTM1-part1 ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
}}<br />
<br />
At this point it would be good to reboot the machine to ensure that the ZFS pool is mounted at boot. It is best to deal with all errors before transferring data.<br />
<br />
=== GRUB-compatible pool creation ===<br />
<br />
By default, ''zpool'' will enable all features on a pool. If {{ic|/boot}} resides on ZFS and [[GRUB]] is used, you must enable only features that GRUB supports, whether read-only or fully ({{ic|lz4_compress}} as of version 2.02.beta2); otherwise GRUB will not be able to read the pool.<br />
<br />
# zpool create -d \<br />
-o feature@async_destroy=enabled \<br />
-o feature@empty_bpobj=enabled \<br />
-o feature@lz4_compress=enabled \<br />
-o feature@spacemap_histogram=enabled \<br />
-o feature@enabled_txg=enabled \<br />
<pool_name> <vdevs><br />
<br />
{{Tip|As of September 2015, GRUB's development tree supports {{ic|extensible_dataset}}, {{ic|hole_birth}}, {{ic|embedded_data}}, and {{ic|large_blocks}}, making it viable to use a pool with all features enabled, either at create time or by using {{ic|zpool upgrade <pool_name>}}, if {{AUR|grub-git}} is installed.}}<br />
<br />
=== Importing a pool created by id ===<br />
<br />
Eventually a pool may fail to auto-mount and you will need to import it to bring your pool back. Take care to avoid the most obvious solution.<br />
<br />
# ###zpool import zfsdata # Do not do this! Always use -d<br />
<br />
This will import your pools using {{ic|/dev/sd?}} device names, which will lead to problems the next time you rearrange your drives. This may be as simple as rebooting with a USB drive left in the machine, harking back to a time when PCs would not boot with a floppy disk left in the drive. Adapt one of the following commands to import your pool so that pool imports retain the persistence they were created with.<br />
<br />
# zpool import -d /dev/disk/by-id zfsdata<br />
# zpool import -d /dev/disk/by-partlabel zfsdata<br />
# zpool import -d /dev/disk/by-partuuid zfsdata<br />
<br />
== Tuning ==<br />
<br />
=== General ===<br />
Many parameters are available for ZFS file systems; you can view a full list with {{ic|zfs get all <pool>}}. Two common ones to adjust are '''atime''' and '''compression'''.<br />
<br />
Atime is enabled by default, but for most users it represents superfluous writes to the zpool. It can be disabled using the zfs command:<br />
# zfs set atime=off <pool><br />
<br />
As an alternative to turning off atime completely, '''relatime''' is available. This brings the default ext4/xfs atime semantics to ZFS, where access time is only updated if the modified time or changed time changes, or if the existing access time has not been updated within the past 24 hours. It is a compromise between atime=off and atime=on. This property ''only'' takes effect if '''atime''' is '''on''':<br />
# zfs set relatime=on <pool><br />
<br />
Compression is just that: transparent compression of data. ZFS supports several algorithms; presently lz4 is the default. '''gzip''' is also available for seldom-written yet highly compressible data; consult the man page for more details. Enable compression using the zfs command:<br />
# zfs set compression=on <pool><br />
<br />
Other options for zfs can be displayed again, using the zfs command:<br />
# zfs get all <pool><br />
<br />
=== Database ===<br />
ZFS, unlike most other file systems, has a variable record size, or what is commonly referred to as a block size. By default, the recordsize on ZFS is 128KiB, which means it will dynamically allocate blocks of any size from 512B to 128KiB depending on the size of the file being written. This can often help fragmentation and file access, at the cost that ZFS has to allocate a new 128KiB block each time only a few bytes are written.<br />
<br />
Most RDBMSes work in 8KiB-sized blocks by default. Although the block size is tunable for [[MySQL|MySQL/MariaDB]], [[PostgreSQL]], and [[Oracle]], all three of them use an 8KiB block size ''by default''. For both performance concerns and keeping snapshot differences to a minimum (for backup purposes, this is helpful), it is usually desirable to tune ZFS instead to accommodate the databases, using a command such as:<br />
# zfs set recordsize=8K <pool>/postgres<br />
<br />
These RDBMSes also tend to implement their own caching algorithm, often similar to ZFS's own ARC. In the interest of saving memory, it is best to simply disable ZFS's caching of the database's file data and let the database do its own job:<br />
# zfs set primarycache=metadata <pool>/postgres<br />
<br />
If your pool has no configured log devices, ZFS reserves space on the pool's data disks for its intent log (the ZIL). ZFS uses this for crash recovery, but databases often sync their data files to the file system on their own transaction commits anyway. The end result is that ZFS commits data '''twice''' to the data disks, which can severely impact performance. You can tell ZFS to prefer not to use the ZIL, in which case data is only committed to the file system once. Setting this for non-database file systems, or for pools with configured log devices, can actually ''negatively'' impact performance, so beware:<br />
# zfs set logbias=throughput <pool>/postgres<br />
<br />
These can also be done at file system creation time, for example:<br />
# zfs create -o recordsize=8K \<br />
-o primarycache=metadata \<br />
-o mountpoint=/var/lib/postgres \<br />
-o logbias=throughput \<br />
<pool>/postgres<br />
<br />
Please note: these kinds of tuning parameters are ideal for specialized applications like RDBMSes. You can easily ''hurt'' ZFS's performance by setting these on a general-purpose file system such as your /home directory.<br />
<br />
=== /tmp ===<br />
If you would like to use ZFS to store your /tmp directory, which may be useful for storing arbitrarily-large sets of files or simply keeping your RAM free of idle data, you can generally improve performance of certain applications writing to /tmp by disabling file system sync. This causes ZFS to ignore an application's sync requests (e.g. with {{ic|fsync}} or {{ic|O_SYNC}}) and return immediately. While this has severe application-side data consistency consequences (never disable sync for a database!), files in /tmp are less likely to be important and affected. Please note this does ''not'' affect the integrity of ZFS itself, only the possibility that data an application expects on-disk may not have actually been written out following a crash.<br />
# zfs set sync=disabled <pool>/tmp<br />
<br />
Additionally, for security purposes, you may want to disable '''setuid''' and '''devices''' on the /tmp file system, which prevents some kinds of privilege-escalation attacks or the use of device nodes:<br />
# zfs set setuid=off <pool>/tmp<br />
# zfs set devices=off <pool>/tmp<br />
<br />
Combining all of these for a create command would be as follows:<br />
# zfs create -o setuid=off -o devices=off -o sync=disabled -o mountpoint=/tmp <pool>/tmp<br />
<br />
Please note, also, that if you want /tmp on ZFS, you will need to mask (disable) [[systemd]]'s automatic tmpfs-backed /tmp, else ZFS will be unable to mount your dataset at boot-time or import-time:<br />
# systemctl mask tmp.mount<br />
<br />
=== ZVOLs ===<br />
<br />
ZFS volumes (ZVOLs) can suffer from the same block size-related issues as RDBMSes, but it is worth noting that the default block size ({{ic|volblocksize}}) for ZVOLs is 8KiB already. If possible, it is best to align any partitions contained in a ZVOL to that block size (current versions of fdisk and gdisk by default automatically align at 1MiB segments, which works), and file system block sizes to the same size. Other than this, you might tweak the '''volblocksize''' to accommodate the data inside the ZVOL as necessary (though 8KiB tends to be a good value for most file systems, even when using 4KiB blocks on that level).<br />
<br />
==== RAIDZ and Advanced Format physical disks ====<br />
<br />
Each block of a ZVOL gets its own parity, and if you have physical media with logical block sizes of 4096B, 8192B, or so on, the parity needs to be stored in whole physical blocks, and this can drastically increase the space requirements of a ZVOL, requiring 2× or more physical storage capacity than the ZVOL's logical capacity. Setting the '''volblocksize''' to 16k or 32k can help reduce this footprint drastically.<br />
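As a rough illustration, the per-block parity overhead can be estimated with plain shell arithmetic. This is only a sketch: it assumes RAIDZ1 on 4KiB-sector disks and ignores allocation padding, so real figures will differ somewhat.<br />

```shell
# Rough per-block parity overhead for a RAIDZ1 ZVOL on 4KiB-sector
# disks (ashift=12), ignoring allocation padding. Every logical block
# gets its own parity sector(s), so small blocks are expensive.
sector=4096                              # physical sector size (ashift=12)
volblock=8192                            # ZVOL block size
data_sectors=$((volblock / sector))      # 2 data sectors per block
parity_sectors=1                         # RAIDZ1: one parity sector per block
overhead=$((100 * parity_sectors / data_sectors))
echo "${overhead}% parity overhead"      # 50% with 8KiB blocks
# With a 32KiB block size there are 8 data sectors per block,
# so the same calculation gives ~12% overhead.
```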
<br />
See [https://github.com/zfsonlinux/zfs/issues/1807 ZFS on Linux issue #1807 for details]<br />
<br />
== Usage ==<br />
<br />
Users can optionally create a dataset under the zpool as opposed to manually creating directories under the zpool. Datasets allow for an increased level of control (quotas for example) in addition to snapshots. To be able to create and mount a dataset, a directory of the same name must not pre-exist in the zpool. To create a dataset, use:<br />
<br />
# zfs create <nameofzpool>/<nameofdataset><br />
<br />
It is then possible to apply ZFS specific attributes to the dataset. For example, one could assign a quota limit to a specific directory within a dataset:<br />
<br />
# zfs set quota=20G <nameofzpool>/<nameofdataset>/<directory><br />
<br />
To see all the commands available in ZFS, use:<br />
<br />
$ man zfs<br />
<br />
or:<br />
<br />
$ man zpool<br />
<br />
=== Scrub ===<br />
<br />
ZFS pools should be scrubbed at least once a week. To scrub the pool:<br />
<br />
# zpool scrub <pool><br />
<br />
To do automatic scrubbing once a week, set the following line in the root crontab:<br />
<br />
{{hc|# crontab -e|<br />
...<br />
30 19 * * 5 zpool scrub <pool><br />
...<br />
}}<br />
<br />
Replace {{ic|<pool>}} with the name of the ZFS pool.<br />
<br />
=== Check zfs pool status ===<br />
<br />
To print a nice table with statistics about the ZFS pool, including read/write errors, use<br />
<br />
# zpool status -v<br />
<br />
=== Destroy a storage pool ===<br />
<br />
ZFS makes it easy to destroy a mounted storage pool, removing all metadata about the ZFS device. This command destroys any data contained in the pool:<br />
<br />
# zpool destroy <pool><br />
<br />
And now when checking the status:<br />
<br />
{{hc|# zpool status|no pools available}}<br />
<br />
To find the name of the pool, see [[#Check zfs pool status]].<br />
<br />
=== Export a storage pool ===<br />
<br />
If a storage pool is to be used on another system, it will first need to be exported. It is also necessary to export a pool if it has been imported from the archiso, as the hostid is different in the archiso than in the booted system. The zpool command will refuse to import any storage pools that have not been exported. It is possible to force the import with the {{ic|-f}} argument, but this is considered bad form.<br />
<br />
Any attempt to import an un-exported storage pool will result in an error stating the storage pool is in use by another system. This error can be produced at boot time, abruptly abandoning the system in the busybox console and requiring an archiso for an emergency repair, either by exporting the pool or by adding {{ic|zfs_force&#61;1}} to the kernel boot parameters (which is not ideal). See [[#On boot the zfs pool does not mount stating: "pool may be in use from other system"]]<br />
<br />
To export a pool,<br />
<br />
# zpool export bigdata<br />
<br />
=== Rename a Zpool ===<br />
Renaming a zpool that is already created is accomplished in two steps:<br />
<br />
# zpool export oldname<br />
# zpool import oldname newname<br />
<br />
=== Setting a Different Mount Point ===<br />
The mount point for a given zpool can be moved at will with one command:<br />
# zfs set mountpoint=/foo/bar poolname<br />
<br />
=== Swap volume ===<br />
<br />
ZFS does not support swap files, but users can use a ZFS volume (ZVOL) as swap. It is important to set the ZVOL block size to match the system page size, which can be obtained with the {{ic|getconf PAGESIZE}} command (the default on x86_64 is 4KiB). Another option useful for keeping the system running well in low-memory situations is not caching the ZVOL data.<br />
<br />
Create an 8GiB ZFS volume:<br />
<br />
# zfs create -V 8G -b $(getconf PAGESIZE) \<br />
-o primarycache=metadata \<br />
-o com.sun:auto-snapshot=false <pool>/swap<br />
<br />
Prepare it as a swap partition:<br />
<br />
# mkswap -f /dev/zvol/<pool>/swap<br />
# swapon /dev/zvol/<pool>/swap<br />
<br />
To make it permanent, edit {{ic|/etc/fstab}}. ZVOLs support discard, which can potentially help ZFS's block allocator and reduce fragmentation for all other datasets when/if swap is not full.<br />
<br />
Add a line to {{ic|/etc/fstab}}:<br />
<br />
/dev/zvol/<pool>/swap none swap discard 0 0<br />
<br />
{{Out of date|The hibernate hook is deprecated. Does the limitation still apply?}}<br />
Keep in mind that the hibernate hook must be loaded before file systems, so using a ZVOL as swap will not allow the use of the hibernate function. If you need hibernate, keep a partition for it.<br />
<br />
Make sure to unmount all ZFS filesystems before rebooting the machine, otherwise any ZFS pools will refuse to be imported:<br />
# zfs umount -a<br />
<br />
=== Automatic snapshots ===<br />
<br />
==== ZFS Automatic Snapshot Service for Linux ====<br />
<br />
The {{AUR|zfs-auto-snapshot-git}} package from [[Arch User Repository|AUR]] provides a shell script to automate the management of snapshots, with each named by date and label (hourly, daily, etc), giving quick and convenient snapshotting of all ZFS datasets. The package also installs cron tasks for quarter-hourly, hourly, daily, weekly, and monthly snapshots. Optionally adjust the --keep parameter from the defaults depending on how far back the snapshots are to go (the monthly script by default keeps data for up to a year).<br />
<br />
To prevent a dataset from being snapshotted at all, set {{ic|1=com.sun:auto-snapshot=false}} on it. Likewise, more fine-grained control is possible per label: for example, if no monthly snapshots are to be kept for a dataset, set {{ic|1=com.sun:auto-snapshot:monthly=false}}.<br />
<br />
==== ZFS Snapshot Manager ====<br />
<br />
The {{AUR|zfs-snap-manager}} package from [[Arch User Repository|AUR]] provides a python service that takes daily snapshots from a configurable set of ZFS datasets and cleans them out in a [[wikipedia:Backup_rotation_scheme#Grandfather-father-son|"Grandfather-father-son"]] scheme. It can be configured to e.g. keep 7 daily, 5 weekly, 3 monthly and 2 yearly snapshots. <br />
<br />
The package also supports configurable replication to other machines running ZFS by means of {{ic|zfs send}} and {{ic|zfs receive}}. If the destination machine runs this package as well, it could be configured to keep these replicated snapshots for a longer time. This allows a setup where a source machine has only a few daily snapshots locally stored, while on a remote storage server a much longer retention is available.<br />
<br />
==Troubleshooting==<br />
=== ZPool creation fails ===<br />
ZPool creation may fail with the following errors:<br />
<br />
# the kernel failed to rescan the partition table: 16<br />
# cannot label 'sdc': try using parted(8) and then provide a specific slice: -1<br />
<br />
One reason this can occur is because [https://github.com/zfsonlinux/zfs/issues/2582 ZFS expects pool creation to take less than 1 second]. This is a reasonable assumption under ordinary conditions, but in many situations it may take longer. Each drive will need to be cleared again before another attempt can be made.<br />
<br />
# parted /dev/sda rm 1<br />
# parted /dev/sda rm 2<br />
# dd if=/dev/zero of=/dev/sda bs=512 count=1<br />
# zpool labelclear /dev/sda<br />
<br />
A brute force creation can be attempted over and over again, and with some luck the ZPool creation will take less than 1 second.<br />
One cause of creation slowdown can be slow burst reads and writes on a drive. By reading from the disk in parallel with ZPool creation, it may be possible to increase burst speeds.<br />
<br />
# dd if=/dev/sda of=/dev/null<br />
<br />
This can be done with multiple drives by saving the above command for each drive to a file on separate lines and running <br />
<br />
# cat $FILE | parallel<br />
<br />
Then run ZPool creation at the same time.<br />
<br />
=== ZFS is using too much RAM ===<br />
<br />
By default, ZFS caches file operations ([[wikipedia:Adaptive replacement cache|ARC]]) using up to two-thirds of the available system memory on the host. Remember, ZFS is designed for enterprise-class storage systems! To adjust the ARC size, add the following to the [[Kernel parameters]] list:<br />
<br />
zfs.zfs_arc_max=536870912 # (for 512MB)<br />
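The parameter takes a size in bytes; a figure in MiB can be converted with plain shell arithmetic, as in this sketch:<br />

```shell
# Convert a desired ARC cap in MiB to the byte value expected by
# zfs.zfs_arc_max (512 MiB in this example).
arc_mib=512
arc_bytes=$((arc_mib * 1024 * 1024))
echo "zfs.zfs_arc_max=${arc_bytes}"      # zfs.zfs_arc_max=536870912
```

On a running system, the same byte value can usually also be written to {{ic|/sys/module/zfs/parameters/zfs_arc_max}} (if present in your module version) to change the cap without rebooting.<br />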
<br />
For a more detailed description, as well as other configuration options, see [http://wiki.gentoo.org/wiki/ZFS#ARC gentoo-wiki:zfs#arc].<br />
<br />
=== Does not contain an EFI label ===<br />
<br />
The following error can occur when attempting to create a zfs filesystem:<br />
<br />
/dev/disk/by-id/<id> does not contain an EFI label but it may contain partition<br />
<br />
The way to overcome this is to use {{ic|-f}} with the zpool create command.<br />
<br />
=== No hostid found ===<br />
<br />
This error occurs at boot, with the following lines appearing before the initscript output:<br />
<br />
ZFS: No hostid found on kernel command line or /etc/hostid.<br />
<br />
This warning occurs because the ZFS module does not have access to the spl hostid. There are two solutions for this. Either place the spl hostid in the [[kernel parameters]] in the boot loader, for example by adding {{ic|1=spl.spl_hostid=0x00bab10c}}.<br />
<br />
The other solution is to make sure that there is a hostid in {{ic|/etc/hostid}}, and then regenerate the initramfs image, which will copy the hostid into it:<br />
<br />
# mkinitcpio -p linux<br />
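Note that {{ic|/etc/hostid}} is read as four raw bytes (little-endian on x86_64), not as text. A minimal sketch of producing such a file from the current {{ic|hostid}} output follows; it writes to a temporary path for illustration, whereas on a real system the target would be {{ic|/etc/hostid}}:<br />

```shell
# Persist the output of hostid(1) as 4 little-endian binary bytes,
# the format glibc reads from /etc/hostid. Writes to a temp file here
# for illustration; a real system would write to /etc/hostid as root.
hid=$(printf '%08x' "0x$(hostid)")       # zero-padded, e.g. 0a0af0f8
target=$(mktemp)
b0=$(echo "$hid" | cut -c1-2)            # most-significant byte
b1=$(echo "$hid" | cut -c3-4)
b2=$(echo "$hid" | cut -c5-6)
b3=$(echo "$hid" | cut -c7-8)            # least-significant byte
# emit the four bytes in reverse (little-endian) order
printf "\x$b3\x$b2\x$b1\x$b0" > "$target"
```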
<br />
=== On boot the zfs pool does not mount stating: "pool may be in use from other system" ===<br />
<br />
==== Unexported pool ====<br />
<br />
If the new installation does not boot because the zpool cannot be imported, chroot into the installation and properly export the zpool. See [[#Emergency chroot repair with archzfs]].<br />
<br />
Once inside the chroot environment, load the ZFS module and force import the zpool,<br />
<br />
# zpool import -a -f<br />
<br />
now export the pool:<br />
<br />
# zpool export <pool><br />
<br />
To see the available pools, use,<br />
<br />
# zpool status<br />
<br />
It is necessary to export a pool because of the way ZFS uses the hostid to track the system the zpool was created on. The hostid is generated partly based on the network setup. During the installation in the archiso the network configuration could be different generating a different hostid than the one contained in the new installation. Once the zfs filesystem is exported and then re-imported in the new installation, the hostid is reset. See [http://osdir.com/ml/zfs-discuss/2011-06/msg00227.html Re: Howto zpool import/export automatically? - msg#00227].<br />
<br />
If ZFS complains about "pool may be in use" after every reboot, properly export pool as described above, and then rebuild ramdisk in normally booted system:<br />
<br />
# mkinitcpio -p linux<br />
<br />
==== Incorrect hostid ====<br />
<br />
Double check that the pool is properly exported. Exporting the zpool clears the hostid marking the ownership, so during the first boot the zpool should mount correctly. If it does not, there is some other problem.<br />
<br />
Reboot again. If the zfs pool refuses to mount, it means the hostid is not yet correctly set in the early boot phase and it confuses zfs. Manually tell zfs the correct number; once the hostid is coherent across reboots, the zpool will mount correctly.<br />
<br />
Boot using zfs_force and write down the hostid. This one is just an example.<br />
% hostid<br />
0a0af0f8<br />
<br />
This number has to be added to the [[kernel parameters]] as {{ic|spl.spl_hostid&#61;0a0af0f8}}. Another solution is writing the hostid inside the initram image; see the [[Installing Arch Linux on ZFS#After the first boot|installation guide]] explanation about this.<br />
<br />
Users can always ignore the check by adding {{ic|zfs_force&#61;1}} to the [[kernel parameters]], but it is not advisable as a permanent solution.<br />
<br />
=== Devices have different sector alignment ===<br />
<br />
Once a drive has become faulted it should be replaced as soon as possible with an identical drive.<br />
<br />
# zpool replace bigdata ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-1CH166_W1F478BD -f<br />
<br />
but in this instance, the following error is produced:<br />
<br />
cannot replace ata-ST3000DM001-9YN166_S1F0KDGY with ata-ST3000DM001-1CH166_W1F478BD: devices have different sector alignment<br />
<br />
ZFS uses the ashift option to adjust for physical block size. When replacing the faulted disk, ZFS is attempting to use {{ic|<nowiki>ashift=12</nowiki>}}, but the faulted disk is using a different ashift (probably {{ic|<nowiki>ashift=9</nowiki>}}) and this causes the resulting error. <br />
<br />
For Advanced Format Disks with 4KB blocksize, an ashift of 12 is recommended for best performance. See [http://zfsonlinux.org/faq.html#PerformanceConsideration 1.10 What’s going on with performance?] and [http://wiki.illumos.org/display/illumos/ZFS+and+Advanced+Format+disks ZFS and Advanced Format disks].<br />
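For reference, ashift is the base-2 logarithm of the sector size in bytes, so the two values above correspond to:<br />

```shell
# ashift is log2 of the sector size in bytes
echo $((1 << 9))     # ashift=9  -> 512-byte sectors
echo $((1 << 12))    # ashift=12 -> 4096-byte (4KiB) sectors
```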
<br />
Use zdb to find the ashift of the zpool: {{ic|zdb {{!}} grep ashift}}, then use the {{ic|-o}} argument to set the ashift of the replacement drive:<br />
<br />
# zpool replace bigdata ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-1CH166_W1F478BD -o ashift=9 -f<br />
<br />
Check the zpool status for confirmation:<br />
<br />
{{hc|# zpool status -v|<br />
pool: bigdata<br />
state: DEGRADED<br />
status: One or more devices is currently being resilvered. The pool will<br />
continue to function, possibly in a degraded state.<br />
action: Wait for the resilver to complete.<br />
scan: resilver in progress since Mon Jun 16 11:16:28 2014<br />
10.3G scanned out of 5.90T at 81.7M/s, 20h59m to go<br />
2.57G resilvered, 0.17% done<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
bigdata DEGRADED 0 0 0<br />
raidz1-0 DEGRADED 0 0 0<br />
replacing-0 OFFLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KDGY OFFLINE 0 0 0<br />
ata-ST3000DM001-1CH166_W1F478BD ONLINE 0 0 0 (resilvering)<br />
ata-ST3000DM001-9YN166_S1F0JKRR ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KBP8 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0JTM1 ONLINE 0 0 0<br />
<br />
errors: No known data errors}}<br />
<br />
== Tips and tricks ==<br />
<br />
===Embed the archzfs packages into an archiso===<br />
<br />
Follow the [[Archiso]] steps for creating a fully functional Arch Linux live CD/DVD/USB image.<br />
<br />
Enable the [[Unofficial user repositories#demz-repo-core|demz-repo-core]] repository:<br />
<br />
{{hc|~/archlive/pacman.conf|<nowiki><br />
...<br />
[demz-repo-core]<br />
Server = http://demizerone.com/$repo/$arch<br />
</nowiki>}}<br />
<br />
Add the {{ic|archzfs-git}} group to the list of packages to be installed:<br />
<br />
{{hc|~/archlive/packages.both|<br />
...<br />
archzfs-git<br />
}}<br />
<br />
Finally, follow [[Archiso#Build the ISO|Build the ISO]] to build the ISO.<br />
<br />
=== Encryption in ZFS on linux ===<br />
<br />
ZFS on Linux does not support encryption directly, but zpools can be created on dm-crypt block devices. Since the zpool is created on the plain-text abstraction, it is possible to have the data encrypted while having all the advantages of ZFS like deduplication, compression, and data robustness.<br />
<br />
dm-crypt, possibly via LUKS, creates devices in {{ic|/dev/mapper}} with fixed names, so you just need to change the {{ic|zpool create}} commands to point to those names. The idea is to configure the system to create the {{ic|/dev/mapper}} block devices and import the zpools from there.<br />
Since zpools can be created on multiple devices (raid, mirroring, striping, ...), it is important that all the devices are encrypted, otherwise the protection might be partially lost.<br />
<br />
For example, an encrypted zpool can be created using plain dm-crypt (without LUKS) with:<br />
<br />
# cryptsetup --hash=sha512 --cipher=twofish-xts-plain64 --offset=0 \<br />
--key-file=/dev/sdZ --key-size=512 open --type=plain /dev/sdX enc<br />
# zpool create zroot /dev/mapper/enc<br />
<br />
In the case of a root filesystem pool, the {{ic|mkinitcpio.conf}} HOOKS line will enable the keyboard for the password, create the devices, and load the pools. It will contain something like:<br />
<br />
HOOKS="... keyboard encrypt zfs ..."<br />
<br />
Since the {{ic|/dev/mapper/enc}} name is fixed no import errors will occur.<br />
<br />
Creating encrypted zpools works fine, but if you need encrypted directories, for example to protect your users' homes, ZFS loses some functionality.<br />
<br />
ZFS will see the encrypted data, not the plain-text abstraction, so compression and deduplication will not work. The reason is that encrypted data always has high entropy, making compression ineffective, and even the same input produces different output (thanks to salting), making deduplication impossible.<br />
To reduce the unnecessary overhead it is possible to create a sub-filesystem for each encrypted directory and use [[eCryptfs]] on it.<br />
<br />
For example, to have an encrypted home (the two passwords, encryption and login, must be the same):<br />
# zfs create -o compression=off \<br />
-o dedup=off \<br />
-o mountpoint=/home/<username> \<br />
<zpool>/<username><br />
# useradd -m <username><br />
# passwd <username><br />
# ecryptfs-migrate-home -u <username><br />
<log in user and complete the procedure with ecryptfs-unwrap-passphrase><br />
<br />
=== Emergency chroot repair with archzfs ===<br />
<br />
To get into the ZFS filesystem from live system for maintenance, there are two options:<br />
<br />
# Build custom archiso with ZFS as described in [[#Embed the archzfs packages into an archiso]].<br />
# Boot the latest official archiso and bring up the network. Then enable [[Unofficial_user_repositories#demz-repo-archiso|demz-repo-archiso]] repository inside the live system as usual, sync the pacman package database and install the ''archzfs-git'' package.<br />
<br />
To start the recovery, load the ZFS kernel modules:<br />
<br />
# modprobe zfs<br />
<br />
Import the pool:<br />
<br />
# zpool import -a -R /mnt<br />
<br />
Mount the boot partitions (if any):<br />
<br />
# mount /dev/sda2 /mnt/boot<br />
# mount /dev/sda1 /mnt/boot/efi<br />
<br />
Chroot into the ZFS filesystem:<br />
<br />
# arch-chroot /mnt /bin/bash<br />
<br />
Check the kernel version:<br />
<br />
# pacman -Qi linux<br />
# uname -r<br />
<br />
uname will show the kernel version of the archiso. If they are different, run depmod (in the chroot) with the correct kernel version of the chroot installation:<br />
<br />
# depmod -a 3.6.9-1-ARCH (version gathered from pacman -Qi linux)<br />
<br />
This will load the correct kernel modules for the kernel version installed in the chroot installation.<br />
<br />
Regenerate the ramdisk:<br />
<br />
# mkinitcpio -p linux<br />
<br />
There should be no errors.<br />
<br />
=== Automated build script ===<br />
<br />
{{Deletion|The wiki isn't the place to maintain massive script dumps}}<br />
<br />
The following script may be used to build ZFS and its dependencies automatically.<br />
<br />
The build order is important due to nested dependencies. One can automate the entire process, including downloading the packages, with the following shell script. The only requirements for it to work are:<br />
*{{pkg|sudo}} - Note that your user needs sudo rights to {{ic|/usr/bin/clean-chroot-manager}} for the script below to work.<br />
*{{pkg|rsync}} - Needed for moving over the build files.<br />
*{{AUR|cower}} - Needed to grab sources from the AUR.<br />
*{{AUR|clean-chroot-manager}} - Needed to build in a clean chroot and add packages to a local repo.<br />
<br />
Be sure to add the local repo to {{ic|/etc/pacman.conf}} like so:<br />
{{hc|$ tail /etc/pacman.conf|<nowiki><br />
[chroot_local]<br />
SigLevel = Optional TrustAll<br />
Server = file:///path/to/localrepo/defined/below<br />
</nowiki>}}<br />
<br />
{{hc|~/bin/build_zfs|<nowiki><br />
#!/bin/bash<br />
#<br />
# ZFS Builder by graysky<br />
#<br />
<br />
# define the temp space for building here<br />
WORK='/scratch'<br />
<br />
# create this dir and chown it to your user<br />
# this is the local repo which will store your zfs packages<br />
REPO='/var/repo'<br />
<br />
# Add the following entry to /etc/pacman.conf for the local repo<br />
#[chroot_local]<br />
#SigLevel = Optional TrustAll<br />
#Server = file:///path/to/localrepo/defined/above<br />
<br />
for i in rsync cower clean-chroot-manager; do<br />
command -v $i >/dev/null 2>&1 || {<br />
echo "I require $i but it's not installed. Aborting." >&2<br />
exit 1; }<br />
done<br />
<br />
[[ -f ~/.config/clean-chroot-manager.conf ]] &&<br />
. ~/.config/clean-chroot-manager.conf || exit 1<br />
<br />
[[ ! -d "$REPO" ]] &&<br />
echo "Make the dir for your local repo and chown it: $REPO" && exit 1<br />
<br />
[[ ! -d "$WORK" ]] &&<br />
echo "Make a work directory: $WORK" && exit 1<br />
<br />
cd "$WORK"<br />
for i in spl-utils-git spl-git zfs-utils-git zfs-git; do<br />
[[ -d $i ]] && rm -rf $i<br />
cower -d $i<br />
done<br />
<br />
for i in spl-utils-git spl-git zfs-utils-git zfs-git; do<br />
cd "$WORK/$i"<br />
sudo ccm s<br />
done<br />
<br />
rsync -auvxP "$CHROOTPATH/root/repo/" "$REPO"<br />
</nowiki>}}<br />
<br />
When ZFS is used as a data drive and boot support is not needed, these two shell scripts will build and remove all zfs packages. The only requirements are {{pkg|sudo}}, {{pkg|git}}, and answering a couple of prompts. On each kernel upgrade you remove ZFS with {{ic|zfsun.sh}}, update, and install ZFS with {{ic|zfsbuild.sh}}.<br />
{{hc|~/build/zfspkg/zfsbuild.sh|<nowiki><br />
#!/usr/bin/bash<br />
#<br />
# 2015-07-17 zfsbuild.sh by severach for AUR 4<br />
# 2015-08-08 AUR4 -> AUR, added git pull, safer AUR 3.5 update folder<br />
# Adapted from ZFS Builder by graysky<br />
# place this in a user home folder.<br />
# I recommend ~/build/zfspkg/. Do not name the folder 'zfs'.<br />
<br />
# 1 to add conflicts=(linux>,linux<) which offers automatic removal on upgrade.<br />
# Manual removal with zfsun.sh is preferred.<br />
_opt_AutoRemove=0<br />
_opt_ZFSPool='zfsdata'<br />
#_opt_ZFSbyid='/dev/disk/by-partlabel'<br />
_opt_ZFSbyid='/dev/disk/by-id'<br />
# '' for manual answer to prompts. --noconfirm to go ahead and do it all.<br />
_opt_AutoInstall='' #--noconfirm'<br />
<br />
# Multiprocessor compile enabled!<br />
# Huuuuuuge performance improvement. Watch in htop.<br />
# An E3-1245 can peg all 8 processors.<br />
#1 [|||||||||||||||||||||||||96.2%]<br />
#2 [|||||||||||||||||||||||||97.6%]<br />
#3 [|||||||||||||||||||||||||95.7%]<br />
#4 [|||||||||||||||||||||||||96.7%]<br />
#5 [|||||||||||||||||||||||||95.7%]<br />
#6 [|||||||||||||||||||||||||97.1%]<br />
#7 [|||||||||||||||||||||||||98.6%]<br />
#8 [|||||||||||||||||||||||||96.2%]<br />
#Mem[||| 596/31974MB]<br />
#Swp[ 0/0MB]<br />
<br />
set -u<br />
set -e<br />
<br />
if [ "${EUID}" -eq 0 ]; then<br />
echo "This script must NOT be run as root"<br />
sleep 1<br />
exit 1<br />
fi<br />
<br />
for i in 'sudo' 'git'; do<br />
command -v "${i}" >/dev/null 2>&1 || {<br />
echo "I require ${i} but it's not installed. Aborting." 1>&2<br />
exit 1; }<br />
done<br />
<br />
cd "$(dirname "$0")"<br />
OPWD="$(pwd)"<br />
for cwpackage in 'spl-utils-git' 'spl-git' 'zfs-utils-git' 'zfs-git'; do<br />
#cower -dc -f "${cwpackage}"<br />
if [ -d "${cwpackage}" -a ! -d "${cwpackage}/.git" ]; then<br />
echo "${cwpackage}: Convert AUR3.5 to AUR4"<br />
cd "${cwpackage}"<br />
git clone "https://aur.archlinux.org/${cwpackage}.git/" "${cwpackage}.temp"<br />
cd "${cwpackage}.temp"<br />
mv '.git' ..<br />
cd ..<br />
rm -rf "${cwpackage}.temp"<br />
cd ..<br />
fi<br />
if [ -d "${cwpackage}" ]; then<br />
echo "${cwpackage}: Update local copy"<br />
cd "${cwpackage}"<br />
git fetch<br />
git reset --hard 'origin/master'<br />
git pull # this line was missed in previous versions<br />
else<br />
echo "${cwpackage}: Clone to new folder"<br />
git clone "https://aur.archlinux.org/${cwpackage}.git/" <br />
cd "${cwpackage}"<br />
fi<br />
sed -i -e 's:^\s\+make$:'"& -s -j $(nproc):g" 'PKGBUILD'<br />
if [ "${_opt_AutoRemove}" -ne 0 ]; then<br />
sed -i -e 's:^conflicts=(.*$: &\n_kernelversionsmall="`uname -r | cut -d - -f 1`"\nconflicts+=("linux>${_kernelversionsmall}" "linux<${_kernelversionsmall}")\n:g' 'PKGBUILD'<br />
fi<br />
if ! makepkg -sCcfi ${_opt_AutoInstall}; then<br />
cd "${OPWD}"<br />
break<br />
fi<br />
#rm -rf 'zfs' 'spl'<br />
cd "${OPWD}"<br />
done<br />
if which fsck.zfs >/dev/null 2>&1; then # a bare failing "which" would abort the script under "set -e"<br />
sudo mkinitcpio -p 'linux' # Stores fsck.zfs into the initrd image. I don't know why it would be needed.<br />
fi<br />
#sudo zpool import "${_opt_ZFSPool}" # Don't do this or zpool will mount via /dev/sd?, which you won't like!<br />
sudo zpool import -d "${_opt_ZFSbyid}" "${_opt_ZFSPool}"<br />
sudo zpool status<br />
sudo -k<br />
</nowiki>}}<br />
<br />
{{hc|~/build/zfspkg/zfsun.sh|<nowiki><br />
#!/usr/bin/bash<br />
<br />
# 2015-07-17 zfs uninstaller by severach for AUR4<br />
# Removing ZFS forgets to unmount the pools, which might be desirable if you're<br />
# running ZFS on the root file system.<br />
<br />
_opt_ZFSFolder='/home/zfsdata/foo'<br />
_opt_ZFSPool='zfsdata'<br />
<br />
if [ "${EUID}" -ne 0 ]; then<br />
echo 'Must be root, try sudo !!'<br />
sleep 1<br />
exit 1<br />
fi<br />
<br />
systemctl stop 'smbd.service' # Active shares can lock the mount. You might want to stop nfs too.<br />
zpool export "${_opt_ZFSPool}" # zpool import no longer works on drives that were only unmounted with zfs umount, so export instead<br />
if [ ! -d "${_opt_ZFSFolder}" ]; then<br />
echo "${_opt_ZFSPool} exported"<br />
pacman -Rc 'spl-utils-git' # This works even if some are already removed.<br />
#pacman -R 'zfs-utils-git' 'spl-git' 'spl-utils-git' 'zfs-git'<br />
else<br />
echo "ZFS didn't unmount"<br />
fi<br />
systemctl start 'smbd.service'<br />
</nowiki>}}<br />
<br />
=== Bindmount ===<br />
Here, a bind mount from /mnt/zfspool to /srv/nfs4/music is created. The configuration ensures that the ZFS pool is mounted before the bind mount is created.<br />
<br />
==== fstab ====<br />
See [http://www.freedesktop.org/software/systemd/man/systemd.mount.html systemd.mount] for more information on how systemd converts fstab into mount unit files with [http://www.freedesktop.org/software/systemd/man/systemd-fstab-generator.html systemd-fstab-generator].<br />
<br />
{{hc|/etc/fstab|<nowiki><br />
/mnt/zfspool /srv/nfs4/music none bind,defaults,x-systemd.requires=zfs-mount.service 0 0<br />
</nowiki>}}<br />
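With the fstab line in place, the mount unit generated by systemd-fstab-generator can be inspected and started without rebooting. A sketch of the administrative steps; the unit name srv-nfs4-music.mount follows from the mount point, and the output will vary per system:

```shell
systemctl daemon-reload                 # re-run the generators so the new fstab line is picked up
systemctl cat srv-nfs4-music.mount      # show the generated unit, including the
                                        # dependency added by x-systemd.requires
systemctl start srv-nfs4-music.mount    # create the bind mount now
findmnt /srv/nfs4/music                 # confirm /mnt/zfspool is bound here
```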
<br />
==== systemd mount unit ====<br />
<br />
If it is not possible to bind mount a directory residing on ZFS onto another directory using fstab, because fstab is read before the ZFS pool is ready, this limitation can be overcome with a systemd mount unit for the bind mount. The name of the mount unit must match the directory given after "Where", with slashes replaced by dashes. See [http://utcc.utoronto.ca/~cks/space/blog/linux/SystemdAndBindMounts SystemdAndBindMounts] and [http://utcc.utoronto.ca/~cks/space/blog/linux/SystemdBindMountUnits SystemdBindMountUnits] for more details.<br />
{{hc|srv-nfs4-music.mount|<nowiki><br />
[Mount]<br />
What=/mnt/zfspool<br />
Where=/srv/nfs4/music<br />
Type=none<br />
Options=bind<br />
<br />
[Unit]<br />
DefaultDependencies=no<br />
Conflicts=umount.target<br />
Before=local-fs.target umount.target<br />
After=zfs-mount.service<br />
Requires=zfs-mount.service<br />
ConditionPathIsDirectory=/mnt/zfspool<br />
<br />
[Install]<br />
WantedBy=local-fs.target<br />
</nowiki>}}<br />
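The file name above follows the naming rule just described: the "Where" path with the leading slash dropped and the remaining slashes turned into dashes (systemd-escape -p --suffix=mount is the canonical tool for this). A minimal sketch of that derivation, covering only plain paths without special characters:

```shell
# Derive the unit name systemd expects for a given mount point.
# Handles only simple paths; systemd-escape also encodes special characters.
path='/srv/nfs4/music'
unit="$(printf '%s' "${path#/}" | tr '/' '-').mount"
echo "$unit"    # srv-nfs4-music.mount
```

The unit can then be enabled with systemctl enable srv-nfs4-music.mount, or started immediately with systemctl start.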
<br />
== See also ==<br />
<br />
* [[Installing Arch Linux on ZFS]]<br />
* [http://zfsonlinux.org/ ZFS on Linux]<br />
* [http://zfsonlinux.org/faq.html ZFS on Linux FAQ]<br />
* [http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/filesystems-zfs.html FreeBSD Handbook -- The Z File System]<br />
* [http://docs.oracle.com/cd/E19253-01/819-5461/index.html Oracle Solaris ZFS Administration Guide]<br />
* [http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide Solaris Internals -- ZFS Troubleshooting Guide]<br />
* [http://royal.pingdom.com/2013/06/04/zfs-backup/ Pingdom details how it backs up 5TB of MySQL data every day with ZFS]<br />
<br />
; Aaron Toponce has authored a 17-part blog series on ZFS, which is an excellent read.<br />
# [https://pthree.org/2012/12/04/zfs-administration-part-i-vdevs/ VDEVs]<br />
# [https://pthree.org/2012/12/05/zfs-administration-part-ii-raidz/ RAIDZ Levels]<br />
# [https://pthree.org/2012/12/06/zfs-administration-part-iii-the-zfs-intent-log/ The ZFS Intent Log]<br />
# [https://pthree.org/2012/12/07/zfs-administration-part-iv-the-adjustable-replacement-cache/ The ARC]<br />
# [https://pthree.org/2012/12/10/zfs-administration-part-v-exporting-and-importing-zpools/ Import/export zpools]<br />
# [https://pthree.org/2012/12/11/zfs-administration-part-vi-scrub-and-resilver/ Scrub and Resilver]<br />
# [https://pthree.org/2012/12/12/zfs-administration-part-vii-zpool-properties/ Zpool Properties]<br />
# [https://pthree.org/2012/12/13/zfs-administration-part-viii-zpool-best-practices-and-caveats/ Zpool Best Practices]<br />
# [https://pthree.org/2012/12/14/zfs-administration-part-ix-copy-on-write/ Copy on Write]<br />
# [https://pthree.org/2012/12/17/zfs-administration-part-x-creating-filesystems/ Creating Filesystems]<br />
# [https://pthree.org/2012/12/18/zfs-administration-part-xi-compression-and-deduplication/ Compression and Deduplication]<br />
# [https://pthree.org/2012/12/19/zfs-administration-part-xii-snapshots-and-clones/ Snapshots and Clones]<br />
# [https://pthree.org/2012/12/20/zfs-administration-part-xiii-sending-and-receiving-filesystems/ Send/receive Filesystems]<br />
# [https://pthree.org/2012/12/21/zfs-administration-part-xiv-zvols/ ZVOLs]<br />
# [https://pthree.org/2012/12/31/zfs-administration-part-xv-iscsi-nfs-and-samba/ iSCSI, NFS, and Samba]<br />
# [https://pthree.org/2013/01/02/zfs-administration-part-xvi-getting-and-setting-properties/ Get/Set Properties]<br />
# [https://pthree.org/2013/01/03/zfs-administration-part-xvii-best-practices-and-caveats/ ZFS Best Practices]</div>Wolfdogghttps://wiki.archlinux.org/index.php?title=Talk:Dual_boot_with_Windows&diff=408872Talk:Dual boot with Windows2015-11-11T09:22:17Z<p>Wolfdogg: /* MBR-BIOS HP Laptop */</p>
<hr />
<div>== Existing installations ==<br />
<br />
Todo: add directions for dual booting when Arch is already installed. {{Unsigned|07:29, 29 August 2007|Kc8tpz}}<br />
:With efi instead of bios (and gpt partitioning?), this can be as simple as just installing Windows to a second partition, and using efi to choose bootloader. This was done on a Dell Latitude E6520. {{Unsigned|03:45, 28 August 2011|Stoffi}}<br />
<br />
== First partition scheme ==<br />
<br />
The first partition scheme is impossible. If you have five partitions, one of them must be logical. {{Unsigned|14:40, 13 November 2009|Grey}}<br />
<br />
== fs-driver only works with inode size 128 ==<br />
<br />
If you want to use fs-driver (Ext2 IFS) in Windows to access ext2/3 filesystems, the inode size must be 128. This should probably be mentioned in the article... I ran into this problem myself: fs-driver will not mount my ext3 filesystem because it has inode size 256. {{Unsigned|22:01, 19 December 2009|Dan39}}<br />
<br />
== Mounting partitions and dual-boot ==<br />
<br />
:''Moved from [[Talk:Beginners' guide]]. -- [[User:Alad|Alad]] ([[User talk:Alad|talk]]) 07:45, 27 August 2015 (UTC)''<br />
<br />
And lastly, the surprisingly tricky bit about "mounting" partitions that do not belong to you on a dual boot system. Ultimately for me what ended up working was knowing which file systems the others could read (esp in a UEFI system). These things can't just be "linked" to because even the pages linked to don't have the information. I got quite a bit of help from friends and google. [[User:Victoroux|Victoroux]] ([[User talk:Victoroux|talk]]) 14:01, 7 June 2013 (UTC)<br />
<br />
== Use Cases ==<br />
<br />
:''[Moved from [[Windows and Arch dual boot#Use cases]]. This information should be deduplicated and added to a page in [[:Category:Laptops]], if applicable. -- [[User:Alad|Alad]] ([[User talk:Alad|talk]]) 05:59, 4 September 2015 (UTC)]''<br />
<br />
Alad, I think you might be off base on that one; the entire GOAL of that exercise was nothing to do with laptops, but to demystify Arch dual boot. Can we PLEASE put this back? It's my new GOTO for dual boot on any Windows/Arch system, no matter the case, when I don't want to deal with UEFI and/or GPT due to Windows or hardware limitations. --[[User:Wolfdogg|Wolfdogg]] ([[User talk:Wolfdogg|talk]]) 23:11, 17 September 2015 (UTC)<br />
<br />
Ahh, after further thinking on this, I probably misled the whole thing by the use case title having anything to do with Laptops and HP. It's not just a use case, it's an ENTIRE revamp of the Dual Boot instructions; can we somehow incorporate it on the Dual Boot instructions page, labeling it how you will, so that those instructions are clear for any user to run through as an MBR-BIOS use case if necessary? The idea is it being some new official simple dual boot guide (as close to all-encompassing for newbs as can be), where they can work off it how they choose, but as a simple foolproof starting point, pointing out the places where it's up to the user to make the choices necessary to apply it to their setup. I need a good writer for that. --[[User:Wolfdogg|Wolfdogg]] ([[User talk:Wolfdogg|talk]]) 23:16, 17 September 2015 (UTC)<br />
<br />
:The whole instructions for setting up dual boot can be summarized as:<br />
:# make sure there are at least 2 partitions (described in [[Partitioning]]),<br />
:# make sure that Arch is installed on one partition (described in [[Installation guide]]) and the other OS on the other partition ('''cannot''' be described on the ArchWiki),<br />
:# (re)install the bootloader (described in [[Windows_and_Arch_dual_boot#Installation]] and specific pages for each boot loader)<br />
:There is no point in duplicating the instructions for all 3 steps on a single page. Also, your instructions consider only one possible scenario, they won't apply for example if the other OS is already installed.<br />
:-- [[User:Lahwaacz|Lahwaacz]] ([[User talk:Lahwaacz|talk]]) 08:32, 18 September 2015 (UTC)<br />
<br />
=== MBR-BIOS ===<br />
<br />
==== Special notes ====<br />
* This method uses parted exclusively for all partitioning<br />
* It's assumed (if your BIOS permits) that it is set to IDE or SATA/RAID mode, and not to AHCI<br />
* In this scenario, the user additionally saved 1GB of free space at the beginning of the drive for later use (later conversion to BIOS boot, a GPT partition table, GRUB installations, anything really)<br />
* FYI this was for an HP laptop, i3-generation Intel, HP4520s#aba xt988ut<br />
<br />
==== Begin ====<br />
Get yourself a readied drive. You might want to check it with smartmontools first; once you trust it and it is ready for wiping, follow the exact procedure below (with as little deviation as possible, except for partition sizes).<br />
<br />
Note: if you are new to parted, pay close attention to the B, MB and GB figures; they are intended to help you align your partitions correctly. This setup may vary on yours, but it is highly recommended to keep trying until the partitions are aligned properly.<br />
<br />
* Using parted, create the following partitions, adjusting start points where you see fit. If you do not want the free space, adjust the first start point to 1MB or so (success is not guaranteed for any partition adjustments in this scenario).<br />
<br />
{{bc|# parted<br />
mkpart ntfs<br />
start point 1000000B<br />
end 251000000B<br />
<br />
mkpart ext3 (for /)<br />
start 251000000B<br />
end 281000MB (use this MB figure to align properly; may vary for you)<br />
<br />
mkpart ext3 (for /home)<br />
start 281000MB (use this MB figure to align properly; may vary for you)<br />
end 531GB (use this GB figure to align properly; may vary for you)<br />
<br />
mkpart fat32 (for /media/shared)<br />
start 531GB (use this GB figure to align properly; may vary for you)<br />
end 750GB<br />
<br />
set 1 boot on<br />
}}<br />
<br />
quit parted..<br />
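Before installing anything, parted can verify the alignment worked out above. A sketch assuming the disk is /dev/sda (adjust to your device); align-check reports whether a given partition meets the disk's optimal alignment:

```shell
# Check each of the four partitions created above for optimal alignment.
for n in 1 2 3 4; do
    parted /dev/sda align-check optimal "$n"
done
```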
<br />
* reboot with the Windows CD<br />
* carefully selecting ONLY the 1st NTFS-type partition you created above, install Windows (don't worry, it won't create the 2nd boot partition now; if it does, something went wrong, try again)<br />
* reboot into the Arch DVD<br />
* follow the Arch install instructions per a standard installation, continuing up to the partition creation point, which you will now SKIP over<br />
* start off by formatting with the filesystem of your choice<br />
# mkfs.ext3 /dev/sda2<br />
# mkfs.ext3 /dev/sda3<br />
* use the Arch install wiki to mount the drives and adjust mirrors, then connect the network; the easiest way for now is to plug a LAN cable into a network with DHCP available and run dhcpcd<br />
# dhcpcd<br />
* follow the Arch install wiki for the following per the norm:<br />
** install the base packages, genfstab, arch-chroot, set your hostname, set localtime and locale-gen, generate the kernel image, set the password, then continue<br />
*Install grub<br />
# pacman -S grub<br />
* back up the boot record to a file named "mbr-backup" on your root<br />
# dd if=/dev/sda of=/mbr-backup bs=512 count=1<br />
* install GRUB onto your disk sdx, replacing sdx with the disk you are working with (be careful)<br />
# grub-install --recheck /dev/sda<br />
* run os-prober to find your Windows install<br />
# pacman -S os-prober<br />
# os-prober<br />
* you can also edit your default boot order in /etc/default/grub by changing GRUB_DEFAULT from 0 to 1, 2, etc.<br />
* now generate a new GRUB boot config to finish things up<br />
# grub-mkconfig -o /boot/grub/grub.cfg<br />
* Now reboot, Success!<br />
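If the new boot code ever needs to be backed out, the mbr-backup taken earlier can restore it. Only the first 446 bytes of the MBR hold boot code; bytes 446-511 hold the partition table and signature, so restoring with bs=446 count=1 leaves the partitions alone. The sketch below rehearses this on scratch files; on real hardware the target would be /dev/sda, so triple-check the device first:

```shell
# Stand-ins for the saved MBR and the disk (512 bytes each).
dd if=/dev/zero    of=mbr-backup bs=512 count=1 2>/dev/null
dd if=/dev/urandom of=fake-disk  bs=512 count=1 2>/dev/null

# Restore the boot code only; conv=notrunc keeps the rest of the "disk" intact.
dd if=mbr-backup of=fake-disk bs=446 count=1 conv=notrunc 2>/dev/null

# The boot-code region now matches the backup; the partition-table region is untouched.
cmp -s <(head -c 446 mbr-backup) <(head -c 446 fake-disk) && echo "boot code restored"
```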
<br />
==== Alternative grub install ====<br />
* Instead of overwriting the Windows boot record with GRUB as we just did above, you can alternatively use the Windows boot loader to point to GRUB, but ONLY if you install GRUB to the partition and not to the drive's boot record, i.e. /dev/sdx1 instead of /dev/sdx<br />
# grub-install --target=i386-pc --grub-setup=/bin/true --recheck --debug /dev/sda<br />
* continue with bcdedit in Windows to point a second boot entry at the GRUB partition.. good luck with that..<br />
<br />
-- 02:12, 4 September 2015 Wolfdogg</div>Wolfdogghttps://wiki.archlinux.org/index.php?title=Talk:Dual_boot_with_Windows&diff=408752Talk:Dual boot with Windows2015-11-10T04:53:40Z<p>Wolfdogg: /* Use Cases */</p>
<hr />
<div>== Existing installations ==<br />
<br />
Todo: add directions for dual booting when Arch is already installed. {{Unsigned|07:29, 29 August 2007|Kc8tpz}}<br />
:With efi instead of bios (and gpt partitioning?), this can be as simple as just installing Windows to a second partition, and using efi to choose bootloader. This was done on a Dell Latitude E6520. {{Unsigned|03:45, 28 August 2011|Stoffi}}<br />
<br />
== First partition scheme ==<br />
<br />
The first partition scheme is impossible. If you have five partitions, one of them must be logical. {{Unsigned|14:40, 13 November 2009|Grey}}<br />
<br />
== fs-driver only works with inode size 128 ==<br />
<br />
If you want to use fs-driver (Ext2 IFS) in Windows to access ext2/3 filesystems, the inode size must be 128. This should probably be mentioned in the article... I ran into this problem myself: fs-driver will not mount my ext3 filesystem because it has inode size 256. {{Unsigned|22:01, 19 December 2009|Dan39}}<br />
<br />
== Mounting partitions and dual-boot ==<br />
<br />
:''Moved from [[Talk:Beginners' guide]]. -- [[User:Alad|Alad]] ([[User talk:Alad|talk]]) 07:45, 27 August 2015 (UTC)''<br />
<br />
And lastly, the surprisingly tricky bit about "mounting" partitions that do not belong to you on a dual boot system. Ultimately for me what ended up working was knowing which file systems the others could read (esp in a UEFI system). These things can't just be "linked" to because even the pages linked to don't have the information. I got quite a bit of help from friends and google. [[User:Victoroux|Victoroux]] ([[User talk:Victoroux|talk]]) 14:01, 7 June 2013 (UTC)<br />
<br />
== Use Cases ==<br />
<br />
I'm pretty pissed that these didn't make it into the wiki, after all the time I spent on this. Looking at that shit wiki https://wiki.archlinux.org/index.php/Dual_boot_with_Windows, I am confused just looking at it! --[[User:Wolfdogg|Wolfdogg]] ([[User talk:Wolfdogg|talk]]) 04:52, 10 November 2015 (UTC)<br />
<br />
<br />
:''[Moved from [[Windows and Arch dual boot#Use cases]]. This information should be deduplicated and added to a page in [[:Category:Laptops]], if applicable. -- [[User:Alad|Alad]] ([[User talk:Alad|talk]]) 05:59, 4 September 2015 (UTC)]''<br />
<br />
Alad, I think you might be off base on that one; the entire GOAL of that exercise was nothing to do with laptops, but to demystify Arch dual boot. Can we PLEASE put this back? It's my new GOTO for dual boot on any Windows/Arch system, no matter the case, when I don't want to deal with UEFI and/or GPT due to Windows or hardware limitations. --[[User:Wolfdogg|Wolfdogg]] ([[User talk:Wolfdogg|talk]]) 23:11, 17 September 2015 (UTC)<br />
<br />
Ahh, after further thinking on this, I probably misled the whole thing by the use case title having anything to do with Laptops and HP. It's not just a use case, it's an ENTIRE revamp of the Dual Boot instructions; can we somehow incorporate it on the Dual Boot instructions page, labeling it how you will, so that those instructions are clear for any user to run through as an MBR-BIOS use case if necessary? The idea is it being some new official simple dual boot guide (as close to all-encompassing for newbs as can be), where they can work off it how they choose, but as a simple foolproof starting point, pointing out the places where it's up to the user to make the choices necessary to apply it to their setup. I need a good writer for that. --[[User:Wolfdogg|Wolfdogg]] ([[User talk:Wolfdogg|talk]]) 23:16, 17 September 2015 (UTC)<br />
<br />
:The whole instructions for setting up dual boot can be summarized as:<br />
:# make sure there are at least 2 partitions (described in [[Partitioning]]),<br />
:# make sure that Arch is installed on one partition (described in [[Installation guide]]) and the other OS on the other partition ('''cannot''' be described on the ArchWiki),<br />
:# (re)install the bootloader (described in [[Windows_and_Arch_dual_boot#Installation]] and specific pages for each boot loader)<br />
:There is no point in duplicating the instructions for all 3 steps on a single page. Also, your instructions consider only one possible scenario, they won't apply for example if the other OS is already installed.<br />
:-- [[User:Lahwaacz|Lahwaacz]] ([[User talk:Lahwaacz|talk]]) 08:32, 18 September 2015 (UTC)<br />
<br />
=== MBR-BIOS HP Laptop ===<br />
<br />
==== Special notes ====<br />
* This method uses parted exclusively for all partitioning<br />
* It's assumed (if your BIOS permits) that it is set to IDE or SATA/RAID mode, and not to AHCI<br />
* In this scenario, the user additionally saved 1GB of free space at the beginning of the drive for later use (later conversion to BIOS boot, a GPT partition table, GRUB installations, anything really)<br />
* FYI the laptop was an HP, i3-generation Intel, HP4520s#aba xt988ut<br />
<br />
==== Begin ====<br />
Get yourself a readied drive. You might want to check it with smartmontools first; once you trust it and it is ready for wiping, follow the exact procedure below (with as little deviation as possible, except for partition sizes).<br />
<br />
Note: if you are new to parted, pay close attention to the B, MB and GB figures; they are intended to help you align your partitions correctly. This setup may vary on yours, but it is highly recommended to keep trying until the partitions are aligned properly.<br />
<br />
* Using parted, create the following partitions, adjusting start points where you see fit. If you do not want the free space, adjust the first start point to 1MB or so (success is not guaranteed for any partition adjustments in this scenario).<br />
<br />
{{bc|# parted<br />
mkpart ntfs<br />
start point 1000000B<br />
end 251000000B<br />
<br />
mkpart ext3 (for /)<br />
start 251000000B<br />
end 281000MB (use this MB figure to align properly; may vary for you)<br />
<br />
mkpart ext3 (for /home)<br />
start 281000MB (use this MB figure to align properly; may vary for you)<br />
end 531GB (use this GB figure to align properly; may vary for you)<br />
<br />
mkpart fat32 (for /media/shared)<br />
start 531GB (use this GB figure to align properly; may vary for you)<br />
end 750GB<br />
<br />
set 1 boot on<br />
}}<br />
<br />
quit parted..<br />
<br />
* reboot with the Windows CD<br />
* carefully selecting ONLY the 1st NTFS-type partition you created above, install Windows (don't worry, it won't create the 2nd boot partition now; if it does, something went wrong, try again)<br />
* reboot into the Arch DVD<br />
* follow the Arch install instructions per a standard installation, continuing up to the partition creation point, which you will now SKIP over<br />
* start off by formatting with the filesystem of your choice<br />
# mkfs.ext3 /dev/sda2<br />
# mkfs.ext3 /dev/sda3<br />
* use the Arch install wiki to mount the drives and adjust mirrors, then connect the network; the easiest way for now is to plug a LAN cable into a network with DHCP available and run dhcpcd<br />
# dhcpcd<br />
* follow the Arch install wiki for the following per the norm:<br />
** install the base packages, genfstab, arch-chroot, set your hostname, set localtime and locale-gen, generate the kernel image, set the password, then continue<br />
*Install grub<br />
# pacman -S grub<br />
* back up the boot record to a file named "mbr-backup" on your root<br />
# dd if=/dev/sda of=/mbr-backup bs=512 count=1<br />
* install GRUB onto your disk sdx, replacing sdx with the disk you are working with (be careful)<br />
# grub-install --recheck /dev/sda<br />
* run os-prober to find your Windows install<br />
# pacman -S os-prober<br />
# os-prober<br />
* you can also edit your default boot order in /etc/default/grub by changing GRUB_DEFAULT from 0 to 1, 2, etc.<br />
* now generate a new GRUB boot config to finish things up<br />
# grub-mkconfig -o /boot/grub/grub.cfg<br />
* Now reboot, Success!<br />
<br />
==== Alternative grub install ====<br />
* Instead of overwriting the Windows boot record with GRUB as we just did above, you can alternatively use the Windows boot loader to point to GRUB, but ONLY if you install GRUB to the partition and not to the drive's boot record, i.e. /dev/sdx1 instead of /dev/sdx<br />
# grub-install --target=i386-pc --grub-setup=/bin/true --recheck --debug /dev/sda<br />
* continue with bcdedit in Windows to point a second boot entry at the GRUB partition.. good luck with that..<br />
<br />
-- 02:12, 4 September 2015 Wolfdogg<br />
<br />
== <s>Work on the Windows XP/2000 bootloader section</s> ==<br />
<br />
The current content redirects to Geocities, which doesn't exist anymore and can't be referenced to bring the content here. Should we just scrap it, or say it's no longer available?<br />
<br />
{{Unsigned|21:00, 11 October 2015|GutoAndreollo}}<br />
<br />
:Removed the section [https://wiki.archlinux.org/index.php?title=Windows_and_Arch_dual_boot&diff=404268&oldid=398197], thanks for reporting. -- [[User:Alad|Alad]] ([[User talk:Alad|talk]]) 20:50, 11 October 2015 (UTC)</div>Wolfdogghttps://wiki.archlinux.org/index.php?title=Talk:Dual_boot_with_Windows&diff=400371Talk:Dual boot with Windows2015-09-17T23:17:14Z<p>Wolfdogg: /* Use Cases */</p>
<hr />
<div>Todo: add directions for dual booting when Arch is already installed. +2<br />
With efi instead of bios (and gpt partitioning?), this can be as simple as just installing<br />
Windows to a second partition, and using efi to choose bootloader. This was done on a<br />
Dell Latitude E6520.<br />
<br />
The first partition scheme is impossible. If you have five partitions, one of them must be logical.<br />
<br />
== fs-driver only works with inode size 128 ==<br />
<br />
If you want to use fs-driver (Ext2 IFS) in Windows to access ext2/3 filesystems, the inode size must be 128. This should probably be mentioned in the article... I ran into this problem myself: fs-driver will not mount my ext3 filesystem because it has inode size 256.<br />
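As a sketch of how to check or control this from Linux (assuming e2fsprogs is installed; the image file here is just a stand-in for a real partition):<br />

```shell
# Create a small ext3 image with 128-byte inodes (the size Ext2 IFS can read),
# then confirm the inode size with tune2fs. Works on a plain file, no real disk.
dd if=/dev/zero of=fs.img bs=1M count=8 2>/dev/null
mke2fs -q -t ext3 -I 128 -F fs.img   # -I 128 forces 128-byte inodes at mkfs time
tune2fs -l fs.img | grep 'Inode size'
```

The inode size is fixed at filesystem creation time; an existing filesystem with 256-byte inodes cannot be switched to 128 in place.<br />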
<br />
== Mounting partitions and dual-boot ==<br />
<br />
:''Moved from [[Talk:Beginners' guide]]. -- [[User:Alad|Alad]] ([[User talk:Alad|talk]]) 07:45, 27 August 2015 (UTC)''<br />
<br />
And lastly, the surprisingly tricky bit about "mounting" partitions that do not belong to you on a dual boot system. Ultimately for me what ended up working was knowing which file systems the others could read (esp in a UEFI system). These things can't just be "linked" to because even the pages linked to don't have the information. I got quite a bit of help from friends and google. [[User:Victoroux|Victoroux]] ([[User talk:Victoroux|talk]]) 14:01, 7 June 2013 (UTC)<br />
<br />
== Use Cases ==<br />
<br />
:''[Moved from [[Windows and Arch dual boot#Use cases]]. This information should be deduplicated and added to a page in [[:Category:Laptops]], if applicable. -- [[User:Alad|Alad]] ([[User talk:Alad|talk]]) 05:59, 4 September 2015 (UTC)]''<br />
<br />
Alad, I think you might be off base on that one; the entire GOAL of that exercise had nothing to do with laptops, but to demystify Arch dual boot. Can we PLEASE put this back? It's my new GOTO for dual boot on any Windows/Arch system, no matter the case, whenever I don't want to deal with UEFI and/or GPT due to Windows or hardware limitations. --[[User:Wolfdogg|Wolfdogg]] ([[User talk:Wolfdogg|talk]]) 23:11, 17 September 2015 (UTC)<br />
<br />
Ahh, after further thinking on this, I probably misled the whole thing by the use case title having anything to do with laptops and HP. It's not just a use case, it's an ENTIRE revamp of the dual boot instructions. Can we somehow incorporate it on the dual boot instructions page, labeling it how you will, so that those instructions are clear for any user to run through as an MBR-BIOS use case if necessary? The idea is for it to be a new, official, simple dual boot guide (as close to all-encompassing for newbs as can be), which they can build on however they choose: a simple, foolproof starting point that points out the places where it's up to the user to make the choices necessary to apply it to their setup. I need a good writer for that. --[[User:Wolfdogg|Wolfdogg]] ([[User talk:Wolfdogg|talk]]) 23:16, 17 September 2015 (UTC)<br />
<br />
=== MBR-BIOS HP Laptop ===<br />
<br />
==== Special notes ====<br />
* This method uses parted exclusively for all partitioning<br />
* It's assumed (if your BIOS permits) that you have it set to IDE or SATA/RAID mode, and not AHCI<br />
* In this scenario, the user additionally kept 1GB of free space at the beginning of the drive for later use (later conversion to a BIOS boot partition, GPT partition table, GRUB installations, anything really)<br />
* FYI, the laptop was an Intel i3 generation HP 4520s#aba xt988ut<br />
<br />
==== Begin ====<br />
Get yourself a readied drive. You might want to check it with smartmontools first; once you trust it and it's ready to be wiped, follow the exact procedure below (with as little deviation as possible, except for partition sizes).<br />
<br />
Note: if you're new to parted, pay close attention to the B, MB and GB figures; they are intended to help you align your partitions correctly. The exact values may vary on your setup, but it's highly recommended to keep trying until the partitions are aligned properly first.<br />
<br />
*Using parted, create the following partitions, adjusting start points where you see fit. If you don't want the free space, adjust the first start point to 1MB or so (success is not guaranteed for any partition adjustments in this scenario)<br />
<br />
{{bc|# parted<br />
mkpart ntfs<br />
start 1000000B<br />
end 251000000B<br />
<br />
mkpart ext3 (for /)<br />
start 251000000B<br />
end 281000MB (use this MB figure to align properly; may vary for you)<br />
<br />
mkpart ext3 (for /home)<br />
start 281000MB (use this MB figure to align properly; may vary for you)<br />
end 531GB (use this GB figure to align properly; may vary for you)<br />
<br />
mkpart fat32 (for /media/shared)<br />
start 531GB (use this GB figure to align properly; may vary for you)<br />
end 750GB<br />
<br />
set 1 boot on<br />
}}<br />
<br />
Quit parted.<br />
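If you want to sanity-check a start figure before committing, 1 MiB alignment can be tested with plain shell arithmetic. This is only a rough check; parted's own align-check command (run as "align-check optimal N" inside parted) remains the authoritative test:<br />

```shell
# Rough alignment check: is a partition start offset (in bytes) on a 1 MiB boundary?
start=251000000                 # example figure from the scheme above
mib=1048576                     # 1 MiB in bytes
rem=$(( start % mib ))
if [ "$rem" -eq 0 ]; then
    echo "aligned"
else
    echo "misaligned by $rem bytes"
fi
```

For the example figure above this prints "misaligned by 390336 bytes", which is exactly why the note says you may need to keep adjusting the values until they line up.<br />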
<br />
* reboot with the Windows CD<br />
* carefully select ONLY the first NTFS partition you created above and install Windows there (don't worry, it won't create the second boot partition now; if it does, something went wrong, try again)<br />
* reboot into the Arch DVD<br />
* follow the Arch install instructions per a standard installation, continuing up to the partition creation point, which you will now SKIP over.<br />
* Start by formatting with the filesystem of your choice<br />
# mkfs.ext3 /dev/sda2<br />
# mkfs.ext3 /dev/sda3<br />
* use the Arch install wiki to mount drives and adjust mirrors. To connect to the network, the easiest way for now is to plug a LAN cable into a network with DHCP available, then run dhcpcd<br />
# dhcpcd<br />
* Follow the Arch install wiki for the following, per the norm:<br />
** install the base package group, genfstab, arch-chroot, set your hostname, set localtime and run locale-gen, generate the initramfs, set the root password, then continue<br />
*Install grub<br />
# pacman -S grub<br />
* back up the boot record to a file named "mbr-backup" in your root directory<br />
# dd if=/dev/sda of=/mbr-backup bs=512 count=1<br />
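If the GRUB install goes wrong, that backup can put the original boot code back. Restoring all 512 bytes would also overwrite the partition table, so restore only the first 446 bytes (the boot code area) if the partition layout has changed since the backup. A sketch of the round trip, practiced here on a scratch file rather than a real disk:<br />

```shell
# Practice the MBR backup/restore round trip on a scratch file, not a real disk.
disk=./fake-disk.img
printf 'BOOTCODE' > "$disk"                    # stand-in for boot sector bytes
dd if=/dev/zero bs=512 count=2048 >> "$disk" 2>/dev/null

dd if="$disk" of=./mbr-backup bs=512 count=1 2>/dev/null    # back up sector 0

# Restore only the first 446 bytes: boot code, but not the partition table
# (bytes 446-511). conv=notrunc keeps the rest of the target intact.
dd if=./mbr-backup of="$disk" bs=446 count=1 conv=notrunc 2>/dev/null
```

On a real disk you would substitute /dev/sdx for the scratch file, exactly as in the backup command above.<br />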
* Install grub onto your disk sdx, replacing sdx with the disk you are working with (be careful).<br />
# grub-install --recheck /dev/sda<br />
* run os-prober to find your Windows install<br />
# pacman -S os-prober<br />
# os-prober<br />
*You can also set the default boot entry here in /etc/default/grub, by changing GRUB_DEFAULT from 0 to 1, 2, etc.<br />
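For example, to make the second menu entry (as listed in the generated grub.cfg) the default, the relevant lines in /etc/default/grub would look like this; the exact entry number depends on your own generated menu, so treat the value as illustrative:<br />

```shell
# /etc/default/grub (excerpt). GRUB_DEFAULT counts menu entries from 0,
# in the order they appear in the generated grub.cfg.
GRUB_DEFAULT=1     # boot the second entry (e.g. Windows) by default
GRUB_TIMEOUT=5     # seconds to show the menu
```

Since grub-mkconfig bakes this value into grub.cfg, re-run grub-mkconfig after editing for the change to take effect.<br />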
* now generate a new GRUB boot config to finish things up<br />
# grub-mkconfig -o /boot/grub/grub.cfg<br />
* Now reboot. Success!<br />
<br />
==== Alternative grub install ====<br />
* Instead of overwriting the Windows boot record with GRUB as we just did above, you can alternatively have the Windows boot loader point to GRUB, but ONLY if you install GRUB to the partition and not to the drive's boot record, i.e. /dev/sdx1 instead of /dev/sdx<br />
# grub-install --target=i386-pc --grub-setup=/bin/true --recheck --debug /dev/sda<br />
* continue with bcdedit in Windows to point a second boot entry at the GRUB partition. Good luck with that.<br />
<br />
-- 02:12, 4 September 2015 Wolfdogg</div>Wolfdogghttps://wiki.archlinux.org/index.php?title=Dual_boot_with_Windows&diff=398167Dual boot with Windows2015-09-04T00:12:33Z<p>Wolfdogg: /* Begin */</p>
<hr />
<div>[[Category:Boot process]]<br />
[[Category:Getting and installing Arch]]<br />
[[es:Windows and Arch dual boot]]<br />
[[ja:Windows と Arch のデュアルブート]]<br />
[[ru:Windows and Arch dual boot]]<br />
[[sk:Windows and Arch dual boot]]<br />
[[zh-cn:Windows and Arch dual boot]]<br />
This is a simple article detailing different methods of Arch/Windows coexistence.<br />
<br />
== Important information ==<br />
<br />
=== Windows UEFI vs BIOS limitations ===<br />
<br />
Microsoft imposes limitations on which firmware boot mode and partitioning style can be supported based on the version of Windows used:<br />
<br />
* '''Windows XP''' both '''x86 32-bit''' and '''x86_64''' (also called x64) (RTM and all Service Packs) versions do not support booting in UEFI mode (IA32 or x86_64) from any disk (MBR or GPT) OR in BIOS mode from GPT disk. They support only BIOS boot and only from MBR/msdos disk.<br />
* '''Windows Vista''' or '''7''' '''x86 32-bit''' (RTM and all Service Packs) versions support booting in BIOS mode from MBR/msdos disks only, not from GPT disks. They do not support x86_64 UEFI or IA32 (x86 32-bit) UEFI boot. They support only BIOS boot and only from MBR/msdos disk.<br />
* '''Windows Vista RTM x86_64''' (only RTM) version supports booting in BIOS mode from MBR/msdos disks only, not from GPT disks. It does not support x86_64 UEFI or IA32 (x86 32-bit) UEFI boot. It supports only BIOS boot and only from MBR/msdos disk.<br />
* '''Windows Vista''' (SP1 and above, not RTM) and '''Windows 7''' '''x86_64''' versions support booting in x86_64 UEFI mode from GPT disk only, OR in BIOS mode from MBR/msdos disk only. They do not support IA32 (x86 32-bit) UEFI boot from GPT/MBR disk, x86_64 UEFI boot from MBR/msdos disk, or BIOS boot from GPT disk.<br />
* '''Windows 8/8.1 x86 32-bit''' support booting in IA32 UEFI mode from GPT disk only, OR in BIOS mode from MBR/msdos disk only. They do not support x86_64 UEFI boot, IA32 UEFI boot from MBR/msdos disk, or BIOS boot from GPT disk. On the market, the only systems known to ship with IA32 (U)EFI are some old Intel Macs (pre-2010 models?) and Intel Atom System-on-Chip (Clover Trail and Bay Trail) Windows tablets, which boot ONLY in IA32 UEFI mode and ONLY from GPT disk.<br />
* '''Windows 8/8.1''' '''x86_64''' versions support booting in x86_64 UEFI mode from GPT disk only, OR in BIOS mode from MBR/msdos disk only. They do not support IA32 UEFI boot, x86_64 UEFI boot from MBR/msdos disk, or BIOS boot from GPT disk.<br />
<br />
In case of pre-installed Systems:<br />
<br />
* All systems pre-installed with Windows XP, Vista or 7 32-bit, irrespective of Service Pack level, bitness, edition (SKU) or presence of UEFI support in firmware, boot in BIOS-MBR mode by default.<br />
* MOST of the systems pre-installed with Windows 7 x86_64, irrespective of Service Pack level, bitness or edition (SKU), boot in BIOS-MBR mode by default. Very few recent systems pre-installed with Windows 7 are known to boot in x86_64 UEFI-GPT mode by default.<br />
* ALL systems pre-installed with Windows 8/8.1 boot in UEFI-GPT mode. The firmware bitness matches the bitness of Windows, i.e. x86_64 Windows 8/8.1 boots in x86_64 UEFI mode and 32-bit Windows 8/8.1 boots in IA32 UEFI mode.<br />
<br />
The best way to detect the boot mode of Windows is to do the following (info from [http://www.eightforums.com/tutorials/29504-bios-mode-see-if-windows-boot-uefi-legacy-mode.html here]):<br />
<br />
* Boot into Windows<br />
* Press Win key and 'R' to start the Run dialog<br />
* In the Run dialog type "msinfo32" and press Enter<br />
* In the '''System Information''' windows, select '''System Summary''' on the left and check the value of '''BIOS mode''' item on the right<br />
* If the value is '''UEFI''', Windows boots in UEFI-GPT mode. If the value is '''Legacy''', Windows boots in BIOS-MBR mode.<br />
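The same check can be done from the Linux side: the kernel exposes /sys/firmware/efi only when it was booted through UEFI, so a short script can report the boot mode of the running system:<br />

```shell
# Print whether the running Linux system was booted via UEFI or legacy BIOS.
# /sys/firmware/efi exists only when the kernel was started from UEFI firmware.
if [ -d /sys/firmware/efi ]; then
    echo "UEFI"
else
    echo "BIOS"
fi
```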
<br />
In general, Windows forces the type of partitioning depending on the firmware mode used, i.e. if Windows is booted in UEFI mode, it can be installed only to a GPT disk. If Windows is booted in Legacy BIOS mode, it can be installed only to an MBR (also called '''msdos''' style partitioning) disk. This is a limitation enforced by the Windows installer, and as of April 2014 there is no officially (Microsoft) supported way of installing Windows in a UEFI-MBR or BIOS-GPT configuration. Thus Windows only supports either a UEFI-GPT boot or a BIOS-MBR configuration.<br />
<br />
Such a limitation is not enforced by the Linux kernel, but can depend on which bootloader is used and/or how the bootloader is configured. The Windows limitation should be considered if the user wishes to boot Windows and Linux from the same disk, since the installation procedure of the bootloader depends on the firmware type and disk partitioning configuration. In the case where Windows and Linux dual boot from the same disk, it is advisable to follow the method used by Windows, i.e. either go for UEFI-GPT boot or BIOS-MBR boot. See http://support.microsoft.com/kb/2581408 for more info.<br />
<br />
=== Install media limitations ===<br />
<br />
Intel Atom System-on-Chip Tablets (Clover trail and Bay Trail) provide only IA32 UEFI firmware WITHOUT Legacy BIOS (CSM) support (unlike most of the x86_64 UEFI systems), due to Microsoft Connected Standby Guidelines for OEMs. Due to lack of Legacy BIOS support in these systems, and the lack of 32-bit UEFI boot in Arch Official Install ISO or the Archboot iso (as of April 2014), these install media cannot boot in Atom SoC tablets pre-installed with Windows 8/8.1 32-bit.<br />
<br />
=== Bootloader UEFI vs BIOS limitations ===<br />
<br />
Most Linux bootloaders installed for one firmware type cannot launch or chainload bootloaders of the other firmware type. That is, if Arch is installed in UEFI-GPT or UEFI-MBR mode on one disk and Windows is installed in BIOS-MBR mode on another disk, the UEFI bootloader used by Arch cannot chainload the BIOS-installed Windows on the other disk. Similarly, if Arch is installed in BIOS-MBR or BIOS-GPT mode on one disk and Windows is installed in UEFI-GPT mode on another disk, the BIOS bootloader used by Arch cannot chainload the UEFI-installed Windows on the other disk. <br />
<br />
The only exceptions to this are grub(2) in Apple Macs in which EFI installed grub(2) can boot BIOS installed OS via '''appleloader''' command (does not work in non-Apple systems), and rEFInd which technically supports booting legacy BIOS OS from UEFI systems, but [http://rodsbooks.com/refind/using.html#legacy does not always work in non-Apple UEFI systems] as per its author Rod Smith. <br />
<br />
However, if Arch is installed in BIOS-GPT mode on one disk and Windows is installed in BIOS-MBR mode on another disk, then the BIOS bootloader used by Arch CAN boot the Windows on the other disk, if the bootloader itself has the ability to chainload from another disk. <br />
<br />
{{Note|If Arch and Windows are dual-booting from same disk, then Arch SHOULD follow the same firmware boot mode and partitioning combination used by the installed Windows in the disk.}}<br />
<br />
=== UEFI Secure Boot ===<br />
<br />
All pre-installed Windows 8/8.1 systems by default boot in UEFI-GPT mode and have UEFI Secure Boot enabled by default (which can be manually disabled by the user) and Legacy BIOS support (CSM) disabled by default (which can be manually enabled by the user, if the firmware supports it) in the firmware. This is mandated by Microsoft for all OEM pre-installed systems.<br />
<br />
Arch Linux install media currently supports Secure Boot, but it requires some manual steps by the user to [[UEFI#Secure_Boot|set up the HashTool while booting]]. It is therefore advisable to disable UEFI Secure Boot in the firmware setup before attempting to boot Arch Linux. Windows 8/8.1 SHOULD continue to boot fine even if Secure Boot is disabled. <br />
<br />
The only issue with regards to disabling UEFI Secure Boot support is that it requires physical access to the system to disable the Secure Boot option in the firmware setup, as Microsoft has explicitly forbidden the presence of any method to remotely or programmatically (from within the OS) disable Secure Boot in all Windows 8/8.1 pre-installed systems.<br />
<br />
=== Fast Start-Up ===<br />
<br />
Fast Start-Up is a feature in Windows 8 that hibernates the computer rather than actually shutting it down to speed up boot times. Your system can lose data if Windows hibernates and you dual boot into another OS and make changes to files. Even if you do not intend to share filesystems, the EFI System Partition is likely to be damaged on an EFI system. Therefore, you should disable Fast Startup, as described [http://www.eightforums.com/tutorials/6320-fast-startup-turn-off-windows-8-a.html here], before you install Linux on any computer that uses Windows 8.<br />
<br />
{{Pkg|ntfs-3g}} added a [http://sourceforge.net/p/ntfs-3g/ntfs-3g/ci/559270a8f67c77a7ce51246c23d2b2837bcff0c9/ safe-guard] to prevent read-write mounting of hibernated disks, but the NTFS driver within the Linux kernel has no such safeguard.<br />
<br />
=== Windows filenames limitations ===<br />
<br />
Windows is limited to filepaths being shorter than [http://blogs.msdn.com/b/bclteam/archive/2007/02/13/long-paths-in-net-part-1-of-3-kim-hamilton.aspx 260 characters].<br />
<br />
Windows also puts [http://msdn.microsoft.com/en-us/library/aa365247(VS.85).aspx#naming_conventions certain characters off limits] in filenames for reasons that run all the way back to DOS:<br />
<br />
* < (less than)<br />
* > (greater than)<br />
* : (colon)<br />
* " (double quote)<br />
* / (forward slash)<br />
* \ (backslash)<br />
* | (vertical bar or pipe)<br />
* ? (question mark)<br />
* * (asterisk)<br />
<br />
These are limitations of Windows and not NTFS: any other OS using the NTFS partition will be fine. Windows will fail to detect these files and running {{ic|chkdsk}} will most likely cause them to be deleted. This can lead to potential data-loss.<br />
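Before handing an NTFS partition to Windows, it can help to scan for names Windows would reject. A sketch (the function name and the /media/shared path are illustrative; the bracket class covers the reserved characters above except backslash, which is rare in Linux filenames and easy to check separately):<br />

```shell
# List files whose names contain characters Windows forbids, and files
# whose full path exceeds the 260-character Windows path limit.
scan_for_windows_problems() {
    dir=$1
    find "$dir" -name '*[<>:"|?*]*' -print       # reserved-character names
    find "$dir" -print | awk 'length($0) > 260'  # over-long paths
}

# usage (illustrative mount point):
[ -d /media/shared ] && scan_for_windows_problems /media/shared || true
```

Running this before booting Windows lets you rename offenders instead of discovering them via chkdsk data loss.<br />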
<br />
'''NTFS-3G''' applies Windows restrictions to new file names through the [http://www.tuxera.com/community/ntfs-3g-manual/#4 windows_filenames] option (see [[fstab]]).<br />
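A hypothetical fstab entry showing the option in place (the device, mount point and ownership options are illustrative):<br />

```
# /etc/fstab (excerpt): NTFS data partition shared with Windows.
# windows_filenames makes NTFS-3G reject names Windows cannot handle.
/dev/sdxY  /media/shared  ntfs-3g  uid=1000,gid=1000,windows_filenames  0 0
```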
<br />
== Installation ==<br />
<br />
The recommended way to setup a Linux/Windows dual booting system is to first install Windows, only using part of the disk for its partitions. When you have finished the Windows setup, boot into the Linux install environment where you can create additional partitions for Linux while leaving the existing Windows partitions untouched. The Windows installation will create the EFI System Partition which can be used by your Linux bootloader.<br />
<br />
=== BIOS systems ===<br />
<br />
==== Using a Linux boot loader ====<br />
<br />
You may use [[GRUB#Dual-booting|GRUB]] or [[Syslinux#Chainloading|Syslinux]].<br />
<br />
==== Using Windows boot loader ====<br />
<br />
With this setup the Windows bootloader loads GRUB which then boots Arch. <br />
<br />
===== Windows Vista/7/8/8.1 boot loader =====<br />
<br />
The following section contains excerpts from http://www.iceflatline.com/2009/09/how-to-dual-boot-windows-7-and-linux-using-bcdedit/.<br />
<br />
{{Accuracy|Using an ext3-formatted /boot partition, the Windows bootloader works just fine}}<br />
<br />
In order to have the Windows boot loader see the Linux partition, one of the Linux partitions created needs to be FAT32 (in this case, {{ic|/dev/sda3}}). The remainder of the setup is similar to a typical installation. Some documents state that the partition being loaded by the Windows boot loader must be a primary partition but I have used this without problem on an extended partition.<br />
<br />
* When installing the GRUB boot loader, install it on your {{ic|/boot}} partition rather than the MBR. {{Note|For instance, my {{ic|/boot}} partition is {{ic|/dev/sda5}}. So I installed GRUB at {{ic|/dev/sda5}} instead of {{ic|/dev/sda}}. For help on doing this, see [[GRUB#Install to partition or partitionless disk]]}}<br />
<br />
* Under Linux make a copy of the boot info by typing the following at the command shell:<br />
<br />
my_windows_part=/dev/sda3<br />
my_boot_part=/dev/sda5<br />
mkdir /media/win<br />
mount $my_windows_part /media/win<br />
dd if=$my_boot_part of=/media/win/linux.bin bs=512 count=1<br />
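Before rebooting, the copied boot sector can be sanity-checked. The sketch below defines a hypothetical helper (not a standard tool): a valid boot sector is exactly 512 bytes and carries the 0x55 0xAA signature in its last two bytes.<br />

```shell
# check_bootsector FILE: succeed when FILE is exactly 512 bytes long and
# ends with the 0x55 0xAA boot signature (bytes 510-511).
check_bootsector() {
    [ "$(stat -c %s "$1")" -eq 512 ] || return 1
    [ "$(tail -c 2 "$1" | od -An -tx1 | tr -d ' ')" = "55aa" ]
}
# e.g.  check_bootsector /media/win/linux.bin && echo "boot sector looks valid"
```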
<br />
* Boot into Windows and open Explorer; you should be able to see the FAT32 partition. Copy the linux.bin file to {{ic|C:\}}. Now run '''cmd''' with administrator privileges (navigate to ''Start > All Programs > Accessories'', right-click on ''Command Prompt'' and select ''Run as administrator''):<br />
<br />
 bcdedit /create /d "Linux" /application BOOTSECTOR<br />
<br />
* BCDEdit will return an alphanumeric identifier for this entry that I will refer to as {ID} in the remaining steps. You will need to replace {ID} by the actual returned identifier. An example of {ID} is {d7294d4e-9837-11de-99ac-f3f3a79e3e93}. <br />
<br />
bcdedit /set {ID} device partition=c:<br />
bcdedit /set {ID} path \linux.bin<br />
bcdedit /displayorder {ID} /addlast<br />
bcdedit /timeout 30<br />
<br />
Reboot and enjoy. In my case I'm using the Windows boot loader so that I can map my Dell Precision M4500's second power button to boot Linux instead of Windows.<br />
<br />
===== Windows 2000/XP boot loader =====<br />
<br />
For information on this method see http://www.geocities.com/epark/linux/grub-w2k-HOWTO.html. I do not believe there are any distinct advantages of this method over the Linux boot loader; you will still need a {{ic|/boot}} partition, and this one is arguably more difficult to set up.<br />
<br />
=== UEFI systems ===<br />
<br />
Both [[systemd-boot]] and [[rEFInd]] autodetect '''Windows Boot Manager''' {{ic|\EFI\Microsoft\Boot\bootmgfw.efi}} and show it in their boot menu, so there is no manual config required.<br />
<br />
For [[GRUB]] follow [[GRUB#Windows installed in UEFI-GPT Mode menu entry]].<br />
<br />
Syslinux (as of version 6.02 and 6.03-pre9) and ELILO do not support chainloading other EFI applications, so they cannot be used to chainload {{ic|\EFI\Microsoft\Boot\bootmgfw.efi}}.<br />
<br />
Computers that come with newer versions of Windows often have [[UEFI#Secure_Boot|secure boot]] enabled. You will need to take extra steps to either disable secure boot or to make your installation media compatible with secure boot.<br />
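From a running Linux system, the firmware mode the kernel was booted in can be confirmed directly: the kernel creates {{ic|/sys/firmware/efi}} only on UEFI boots. A minimal check:<br />

```shell
# /sys/firmware/efi exists only when the kernel was started via UEFI.
if [ -d /sys/firmware/efi ]; then
    echo "booted in UEFI mode"
else
    echo "booted in BIOS (legacy) mode"
fi
```

This tells you about the current Linux boot, not about Windows; a UEFI-capable machine may still have booted the install medium in legacy (CSM) mode.<br />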
<br />
=== Troubleshooting ===<br />
<br />
==== Couldn't create a new partition or locate an existing one ====<br />
<br />
The USB stick used for installing Windows 8.1 seems to need an MBR partition table (not GPT); otherwise the installer gets confused and prints something like "Couldn't create a new partition or locate an existing one", even though the partitions were created.<br />
<br />
== Time standard ==<br />
<br />
* Recommended: Set both Arch Linux and Windows to use UTC, following [[Time#UTC in Windows]]. Also, be sure to prevent Windows from synchronizing the time on-line, because the hardware clock will default back to ''localtime''.<br />
<br />
* Not recommended: Set Arch Linux to ''localtime'' and disable any time-related services, like [[NTPd]]. This will let Windows take care of hardware clock corrections, but you will need to remember to boot into Windows at least twice a year (in spring and autumn) when [[Wikipedia:Daylight saving time|DST]] kicks in. So please do not ask on the forums why the clock is one hour behind or ahead if you usually go for days or weeks without booting into Windows.<br />
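On the Linux side, the currently configured hardware-clock standard can be read from the third line of {{ic|/etc/adjtime}} ("UTC" or "LOCAL"). A small sketch; the {{ic|ADJTIME_FILE}} override is a hypothetical convenience of ours, not a standard variable:<br />

```shell
# Report how the hardware clock (RTC) is being interpreted.
# Line 3 of the adjtime file is either "UTC" or "LOCAL".
adjtime=${ADJTIME_FILE:-/etc/adjtime}
if [ -r "$adjtime" ]; then
    echo "hardware clock standard: $(sed -n '3p' "$adjtime")"
else
    echo "no $adjtime found; the kernel assumes UTC by default"
fi
```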
<br />
== See also ==<br />
<br />
* [https://bbs.archlinux.org/viewtopic.php?id=140049 Booting Windows from a desktop shortcut]<br />
<br />
<br />
== Use Cases ==<br />
<br />
=== MBR-BIOS HP Laptop ===<br />
<br />
==== Special notes ====<br />
* This method uses parted exclusively for all partitioning<br />
* It is assumed (if your BIOS permits) that the controller is set to IDE or SATA/RAID mode, not AHCI<br />
* In this scenario, the user additionally left 1GB of free space at the beginning of the drive for later use (later conversion to a BIOS boot partition, a GPT partition table, GRUB installations, anything really)<br />
* For reference, the laptop was an HP 4520s (Intel i3 generation), model xt988ut#aba<br />
<br />
==== Begin ====<br />
Get yourself a readied drive. You may want to check it out with smartmontools first; once you trust it and it is ready to be wiped, follow the exact procedure below (with as little deviation as possible, except for partition sizes).<br />
<br />
Note: if you are new to parted, pay close attention to the B, MB and GB figures; they are intended to help you align your partitions correctly. The exact values may vary on your setup, but it is highly recommended to keep adjusting until the partitions are properly aligned.<br />
<br />
* Using parted, create the following partitions, adjusting start points where you see fit. If you do not want the free space, adjust the first start point to 1MB or so (success is not guaranteed for any partition adjustments in this scenario).<br />
<br />
{{bc|# parted /dev/sda<br />
(parted) mkpart primary ntfs 1000000B 251000000B<br />
(parted) mkpart primary ext3 251000000B 281000MB<br />
(parted) mkpart primary ext3 281000MB 531GB<br />
(parted) mkpart primary fat32 531GB 750GB<br />
(parted) set 1 boot on<br />
(parted) quit<br />
}}<br />
<br />
The first partition (ntfs) is for Windows, the second (ext3) for /, the third (ext3) for /home, and the fourth (fat32) for /media/shared. The MB and GB end figures were chosen so the partitions come out properly aligned; they may vary for you.<br />
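The byte figures above were tuned by hand. A common modern heuristic is to start each partition on a 1 MiB (1048576-byte) boundary; a quick divisibility check might look like this sketch (the helper name is ours):<br />

```shell
# Report whether a byte offset sits on a 1 MiB boundary.
aligned_1mib() {
    if [ $(( $1 % 1048576 )) -eq 0 ]; then echo "aligned"; else echo "unaligned"; fi
}

aligned_1mib 1048576     # prints "aligned"
aligned_1mib 251000000   # prints "unaligned"
```

For an existing partition, parted itself can verify this with {{ic|align-check optimal ''N''}}.<br />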
<br />
* Reboot with the Windows CD<br />
* Carefully selecting ONLY the first NTFS-type partition you created above, install Windows (do not worry, it will not create the second boot partition now; if it does, something went wrong, try again)<br />
* Reboot into the Arch DVD<br />
* Follow the standard Arch install instructions up to the partition creation step, which you will now SKIP over<br />
* Start off by formatting with the filesystem of your choice:<br />
# mkfs.ext3 /dev/sda2<br />
# mkfs.ext3 /dev/sda3<br />
* Use the Arch install wiki to mount the drives, adjust mirrors, and connect to the network; the easiest way for now is to connect a LAN cable to a network with DHCP available, then run dhcpcd:<br />
 # dhcpcd<br />
* Follow the Arch install wiki for the following as per the norm:<br />
** install the base packages, run genfstab and arch-chroot, set your hostname, set the localtime and run locale-gen, generate the initramfs, set the root password, then continue<br />
* Install GRUB:<br />
# pacman -S grub<br />
* Back up the boot record to a file named {{ic|mbr-backup}} on your root:<br />
# dd if=/dev/sda of=/mbr-backup bs=512 count=1<br />
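If that backup ever needs restoring, note that writing back all 512 bytes would also overwrite the partition table, which may have changed since. Restoring only the 446-byte boot-code area is safer; the helper below is a sketch of ours, not a standard tool:<br />

```shell
# restore_boot_code BACKUP DISK: write back only the first 446 bytes
# (the MBR boot code), leaving the partition table (bytes 446-511) intact.
restore_boot_code() {
    dd if="$1" of="$2" bs=446 count=1 conv=notrunc 2>/dev/null
}
# e.g.  restore_boot_code /mbr-backup /dev/sda
```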
* Install GRUB onto your disk sdx, replacing sdx with the disk you are working with (be careful):<br />
# grub-install --recheck /dev/sda<br />
* Run os-prober to find your Windows install:<br />
# pacman -S os-prober<br />
# os-prober<br />
* You can also edit your default boot entry here in /etc/default/grub, changing the order from 0 to 1, 2, etc.<br />
* Now generate a new GRUB boot config to finish things up:<br />
# grub-mkconfig -o /boot/grub/grub.cfg<br />
* Now reboot. Success!<br />
<br />
==== Alternative grub install ====<br />
* Instead of overwriting the Windows boot record with GRUB as we just did above, you can alternatively have the Windows boot loader point to GRUB, ONLY if you install GRUB to the partition and not the drive boot record, i.e. /dev/sdx1 instead of /dev/sdx<br />
# grub-install --target=i386-pc --grub-setup=/bin/true --recheck --debug /dev/sda<br />
* Continue with bcdedit in Windows to point a second boot entry at the GRUB partition.</div>
<hr />
<div>[[Category:Boot process]]<br />
[[Category:Getting and installing Arch]]<br />
[[es:Windows and Arch dual boot]]<br />
[[ja:Windows と Arch のデュアルブート]]<br />
[[ru:Windows and Arch dual boot]]<br />
[[sk:Windows and Arch dual boot]]<br />
[[zh-cn:Windows and Arch dual boot]]<br />
This is a simple article detailing different methods of Arch/Windows coexistence.<br />
<br />
== Important information ==<br />
<br />
=== Windows UEFI vs BIOS limitations ===<br />
<br />
Microsoft imposes limitations on which firmware boot mode and partitioning style can be supported based on the version of Windows used:<br />
<br />
* '''Windows XP''' both '''x86 32-bit''' and '''x86_64''' (also called x64) (RTM and all Service Packs) versions do not support booting in UEFI mode (IA32 or x86_64) from any disk (MBR or GPT) OR in BIOS mode from GPT disk. They support only BIOS boot and only from MBR/msdos disk.<br />
* '''Windows Vista''' or '''7''' '''x86 32-bit''' (RTM and all Service Packs) versions support booting in BIOS mode from MBR/msdos disks only, not from GPT disks. They do not support x86_64 UEFI or IA32 (x86 32-bit) UEFI boot. They support only BIOS boot and only from MBR/msdos disk.<br />
* '''Windows Vista RTM x86_64''' (only RTM) version support booting in BIOS mode from MBR/msdos disks only, not from GPT disks. It does not support x86_64 UEFI or IA32 (x86 32-bit) UEFI boot. It supports only BIOS boot and only from MBR/msdos disk.<br />
* '''Windows Vista''' (SP1 and above, not RTM) and '''Windows 7''' '''x86_64''' versions support booting in x86_64 UEFI mode from GPT disk only, OR in BIOS mode from MBR/msdos disk only. They do not support IA32 (x86 32-bit) UEFI boot from GPT/MBR disk, x86_64 UEFI boot from MBR/msdos disk, or BIOS boot from GPT disk.<br />
* '''Windows 8/8.1 x86 32-bit''' support booting in IA32 UEFI mode from GPT disk only, OR in BIOS mode from MBR/msdos disk only. They do not support x86_64 UEFI boot from GPT/MBR disk, x86_64 UEFI boot from MBR/msdos disk, or BIOS boot from GPT disk. On market, the only systems known to ship with IA32 (U)EFI are some old Intel Macs (pre-2010 models?) and Intel Atom System-on-Chip (Clover trail and Bay Trail) Windows Tablets. in which it boots ONLY in IA32 UEFI mode and ONLY from GPT disk.<br />
* '''Windows 8/8.1''' '''x86_64''' versions support booting in x86_64 UEFI mode from GPT disk only, OR in BIOS mode from MBR/msdos disk only. They do not support IA32 UEFI boot, x86_64 UEFI boot from MBR/msdos disk, or BIOS boot from GPT disk.<br />
<br />
In case of pre-installed Systems:<br />
<br />
* All systems pre-installed with Windows XP, Vista or 7 32-bit, irrespective of Service Pack level, bitness, edition (SKU)or presence of UEFI support in firmware, boot in BIOS-MBR mode by default.<br />
* MOST of the systems pre-installed with Windows 7 x86_64, irrespective of Service Pack level, bitness or edition (SKU), boot in BIOS-MBR mode by default. Very few recent systems pre-installed with Windows 7 are known to boot in x86_64 UEFI-GPT mode by default.<br />
* ALL systems pre-installed with Windows 8/8.1 boot in UEFI-GPT mode. The firmware bitness matches the bitness of Windows, ie. x86_64 Windows 8/8.1 boot in x86_64 UEFI mode and 32-bit Windows 8/8.1 boot in IA32 UEFI mode.<br />
<br />
The best way to detect the boot mode of Windows is to do the following (info from [http://www.eightforums.com/tutorials/29504-bios-mode-see-if-windows-boot-uefi-legacy-mode.html here]):<br />
<br />
* Boot into Windows<br />
* Press Win key and 'R' to start the Run dialog<br />
* In the Run dialog type "msinfo32" and press Enter<br />
* In the '''System Information''' windows, select '''System Summary''' on the left and check the value of '''BIOS mode''' item on the right<br />
* If the value is '''UEFI''', Windows boots in UEFI-GPT mode. If the value is '''Legacy''', Windows boots in BIOS-MBR mode.<br />
<br />
In general, Windows forces the type of partitioning depending on the firmware mode used, i.e. if Windows is booted in UEFI mode, it can be installed only to a GPT disk; if Windows is booted in Legacy BIOS mode, it can be installed only to an MBR (also called '''msdos''' style partitioning) disk. This is a limitation enforced by the Windows installer, and as of April 2014 there is no officially (Microsoft) supported way of installing Windows in a UEFI-MBR or BIOS-GPT configuration. Thus Windows supports only UEFI-GPT boot or BIOS-MBR boot.<br />
<br />
Such a limitation is not enforced by the Linux kernel, but can depend on which boot loader is used and/or how the boot loader is configured. The Windows limitation should be considered if the user wishes to boot Windows and Linux from the same disk, since the installation procedure of the boot loader depends on the firmware type and disk partitioning configuration. In cases where Windows and Linux dual boot from the same disk, it is advisable to follow the method used by Windows, i.e. either go for UEFI-GPT boot or BIOS-MBR boot. See http://support.microsoft.com/kb/2581408 for more info.<br />
<br />
=== Install media limitations ===<br />
<br />
Intel Atom System-on-Chip tablets (Clover Trail and Bay Trail) provide only IA32 UEFI firmware WITHOUT Legacy BIOS (CSM) support (unlike most of the x86_64 UEFI systems), due to Microsoft Connected Standby Guidelines for OEMs. Due to the lack of Legacy BIOS support in these systems, and the lack of 32-bit UEFI boot in the official Arch install ISO or the Archboot ISO (as of April 2014), these install media cannot boot on Atom SoC tablets pre-installed with Windows 8/8.1 32-bit.<br />
<br />
=== Bootloader UEFI vs BIOS limitations ===<br />
<br />
Most Linux boot loaders installed for one firmware type cannot launch or chainload boot loaders of the other firmware type. That is, if Arch is installed in UEFI-GPT or UEFI-MBR mode on one disk and Windows is installed in BIOS-MBR mode on another disk, the UEFI boot loader used by Arch cannot chainload the BIOS-installed Windows on the other disk. Similarly, if Arch is installed in BIOS-MBR or BIOS-GPT mode on one disk and Windows is installed in UEFI-GPT mode on another disk, the BIOS boot loader used by Arch cannot chainload the UEFI-installed Windows on the other disk.<br />
<br />
The only exceptions to this are GRUB on Apple Macs, where an EFI-installed GRUB can boot a BIOS-installed OS via the '''appleloader''' command (does not work on non-Apple systems), and rEFInd, which technically supports booting a legacy BIOS OS from UEFI systems, but [http://rodsbooks.com/refind/using.html#legacy does not always work on non-Apple UEFI systems] as per its author Rod Smith.<br />
<br />
However, if Arch is installed in BIOS-GPT mode on one disk and Windows is installed in BIOS-MBR mode on another disk, then the BIOS boot loader used by Arch CAN boot the Windows on the other disk, provided the boot loader itself has the ability to chainload from another disk.<br />
<br />
{{Note|If Arch and Windows are dual-booting from the same disk, then Arch SHOULD follow the same firmware boot mode and partitioning combination used by the installed Windows on the disk.}}<br />
<br />
=== UEFI Secure Boot ===<br />
<br />
All pre-installed Windows 8/8.1 systems by default boot in UEFI-GPT mode and have UEFI Secure Boot enabled by default (which can be manually disabled by the user) and Legacy BIOS support (CSM) disabled by default (which can be manually enabled by the user, if the firmware supports it) in the firmware. This is mandated by Microsoft for all OEM pre-installed systems.<br />
<br />
Arch Linux install media currently supports Secure Boot, but it requires some manual steps by the user to [[UEFI#Secure_Boot|set up the HashTool while booting]]. Therefore it is advisable to disable UEFI Secure Boot in the firmware setup before attempting to boot Arch Linux. Windows 8/8.1 SHOULD continue to boot fine even if Secure Boot is disabled.<br />
<br />
The only issue with regards to disabling UEFI Secure Boot support is that it requires physical access to the system to disable the secure boot option in the firmware setup, as Microsoft has explicitly forbidden the presence of any method to remotely or programmatically (from within the OS) disable secure boot in all Windows 8/8.1 pre-installed systems.<br />
<br />
=== Fast Start-Up ===<br />
<br />
Fast Start-Up is a feature in Windows 8 that hibernates the computer rather than actually shutting it down, in order to speed up boot times. Your system can lose data if Windows hibernates and you dual boot into another OS and make changes to files. Even if you do not intend to share filesystems, the EFI System Partition is likely to be damaged on an EFI system. Therefore, you should disable Fast Start-Up, as described [http://www.eightforums.com/tutorials/6320-fast-startup-turn-off-windows-8-a.html here], before you install Linux on any computer that uses Windows 8.<br />
<br />
{{Pkg|ntfs-3g}} added a [http://sourceforge.net/p/ntfs-3g/ntfs-3g/ci/559270a8f67c77a7ce51246c23d2b2837bcff0c9/ safe-guard] to prevent read-write mounting of hibernated disks, but the NTFS driver within the Linux kernel has no such safeguard.<br />
<br />
=== Windows filenames limitations ===<br />
<br />
Windows is limited to filepaths being shorter than [http://blogs.msdn.com/b/bclteam/archive/2007/02/13/long-paths-in-net-part-1-of-3-kim-hamilton.aspx 260 characters].<br />
<br />
Windows also puts [http://msdn.microsoft.com/en-us/library/aa365247(VS.85).aspx#naming_conventions certain characters off limits] in filenames for reasons that run all the way back to DOS:<br />
<br />
* < (less than)<br />
* > (greater than)<br />
* : (colon)<br />
* " (double quote)<br />
* / (forward slash)<br />
* \ (backslash)<br />
* | (vertical bar or pipe)<br />
* ? (question mark)<br />
* * (asterisk)<br />
<br />
These are limitations of Windows and not NTFS: any other OS using the NTFS partition will be fine. Windows will fail to detect these files, and running {{ic|chkdsk}} will most likely cause them to be deleted. This can lead to potential data loss.<br />
<br />
'''NTFS-3G''' applies Windows restrictions to new file names through the [http://www.tuxera.com/community/ntfs-3g-manual/#4 windows_filenames] option (see [[fstab]]).<br />
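Before handing an NTFS volume over to Windows, the tree can be scanned from Linux for names Windows will reject; a sketch using standard tools (the {{ic|demo}} directory and file names are made up for illustration):<br />

```shell
# set up a scratch directory with one valid and one Windows-invalid name
mkdir -p demo
touch demo/ok.txt 'demo/bad:name.txt'

# list names containing any of the reserved characters above
bad=$(find demo -name '*[<>:"\\|?*]*')
echo "$bad"    # prints: demo/bad:name.txt

# list full paths that reach Windows' 260-character limit (none in this demo)
find demo | awk 'length($0) >= 260'
```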
<br />
== Installation ==<br />
<br />
The recommended way to set up a Linux/Windows dual booting system is to first install Windows, using only part of the disk for its partitions. When you have finished the Windows setup, boot into the Linux install environment, where you can create additional partitions for Linux while leaving the existing Windows partitions untouched. The Windows installation will create the EFI System Partition, which can be used by your Linux boot loader.<br />
<br />
=== BIOS systems ===<br />
<br />
==== Using a Linux boot loader ====<br />
<br />
You may use [[GRUB#Dual-booting|GRUB]] or [[Syslinux#Chainloading|Syslinux]].<br />
<br />
==== Using Windows boot loader ====<br />
<br />
With this setup, the Windows boot loader loads GRUB, which then boots Arch.<br />
<br />
===== Windows Vista/7/8/8.1 boot loader =====<br />
<br />
The following section contains excerpts from http://www.iceflatline.com/2009/09/how-to-dual-boot-windows-7-and-linux-using-bcdedit/.<br />
<br />
{{Accuracy|Using an ext3-formatted /boot partition, the Windows boot loader works just fine}}<br />
<br />
In order for the Windows boot loader to see the Linux partition, one of the Linux partitions created needs to be FAT32 (in this case, {{ic|/dev/sda3}}). The remainder of the setup is similar to a typical installation. Some documents state that the partition being loaded by the Windows boot loader must be a primary partition, but I have used this without problem on an extended partition.<br />
<br />
* When installing the GRUB boot loader, install it on your {{ic|/boot}} partition rather than the MBR. {{Note|For instance, my {{ic|/boot}} partition is {{ic|/dev/sda5}}. So I installed GRUB at {{ic|/dev/sda5}} instead of {{ic|/dev/sda}}. For help on doing this, see [[GRUB#Install to partition or partitionless disk]]}}<br />
<br />
* Under Linux make a copy of the boot info by typing the following at the command shell:<br />
<br />
my_windows_part=/dev/sda3<br />
my_boot_part=/dev/sda5<br />
mkdir /media/win<br />
mount $my_windows_part /media/win<br />
dd if=$my_boot_part of=/media/win/linux.bin bs=512 count=1<br />
<br />
* Boot into Windows and you should be able to see the FAT32 partition. Copy the linux.bin file to {{ic|C:\}}. Now run '''cmd''' with administrator privileges (navigate to ''Start > All Programs > Accessories'', right-click on ''Command Prompt'' and select ''Run as administrator''):<br />
<br />
 bcdedit /create /d "Linux" /application BOOTSECTOR<br />
<br />
* BCDEdit will return an alphanumeric identifier for this entry that I will refer to as {ID} in the remaining steps. You will need to replace {ID} by the actual returned identifier. An example of {ID} is {d7294d4e-9837-11de-99ac-f3f3a79e3e93}. <br />
<br />
bcdedit /set {ID} device partition=c:<br />
bcdedit /set {ID} path \linux.bin<br />
bcdedit /displayorder {ID} /addlast<br />
bcdedit /timeout 30<br />
<br />
Reboot and enjoy. In my case I'm using the Windows boot loader so that I can map my Dell Precision M4500's second power button to boot Linux instead of Windows.<br />
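As a sanity check, the {{ic|linux.bin}} produced by the dd step above should be exactly 512 bytes and, for a valid boot sector, end with the 0x55AA signature at offset 510. The sketch below demonstrates the check on a dummy file rather than a real boot sector:<br />

```shell
# build a dummy 512-byte "boot sector" so the check can be demonstrated
head -c 510 /dev/zero > linux.bin
printf '\125\252' >> linux.bin    # 0x55 0xaa, written as octal escapes

size=$(stat -c %s linux.bin)      # must be exactly 512
sig=$(dd if=linux.bin bs=1 skip=510 count=2 2>/dev/null | od -An -tx1 | tr -d ' \n')
echo "$size $sig"                 # prints: 512 55aa
```

Running the same two checks against the real {{ic|linux.bin}} confirms the dd copy captured a whole sector.<br />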
<br />
===== Windows 2000/XP boot loader =====<br />
<br />
For information on this method see http://www.geocities.com/epark/linux/grub-w2k-HOWTO.html. I do not believe there are any distinct advantages of this method over the Linux boot loader; you will still need a {{ic|/boot}} partition, and this one is arguably more difficult to set up.<br />
<br />
=== UEFI systems ===<br />
<br />
Both [[systemd-boot]] and [[rEFInd]] autodetect '''Windows Boot Manager''' {{ic|\EFI\Microsoft\Boot\bootmgfw.efi}} and show it in their boot menu, so there is no manual config required.<br />
<br />
For [[GRUB]] follow [[GRUB#Windows installed in UEFI-GPT Mode menu entry]].<br />
<br />
Syslinux (as of version 6.02 and 6.03-pre9) and ELILO do not support chainloading other EFI applications, so they cannot be used to chainload {{ic|\EFI\Microsoft\Boot\bootmgfw.efi}}.<br />
<br />
Computers that come with newer versions of Windows often have [[UEFI#Secure_Boot|secure boot]] enabled. You will need to take extra steps to either disable secure boot or to make your installation media compatible with secure boot.<br />
<br />
=== Troubleshooting ===<br />
<br />
==== Couldn't create a new partition or locate an existing one ====<br />
<br />
The USB stick for installing Windows 8.1 seems to need an MBR partition table (not GPT), otherwise the installation gets confused and prints something like "Couldn't create a new partition or locate an existing one", although the partitions were created.<br />
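Whether a stick (or any disk) carries GPT or MBR can be checked by looking for the GPT signature, the ASCII string "EFI PART" at byte offset 512 (LBA 1). A sketch on a scratch image file standing in for the real device (file names are hypothetical):<br />

```shell
# report whether a disk (image) carries a GPT signature at LBA 1
table_type() {
    if dd if="$1" bs=1 skip=512 count=8 2>/dev/null | grep -q 'EFI PART'; then
        echo GPT
    else
        echo "MBR (or blank)"
    fi
}

truncate -s 1M stick.img                    # scratch image, stand-in for /dev/sdX
table_type stick.img                        # prints: MBR (or blank)
printf 'EFI PART' | dd of=stick.img bs=1 seek=512 conv=notrunc 2>/dev/null
table_type stick.img                        # prints: GPT
```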
<br />
== Time standard ==<br />
<br />
* Recommended: Set both Arch Linux and Windows to use UTC, following [[Time#UTC in Windows]]. Also, be sure to prevent Windows from synchronizing the time online, because the hardware clock will default back to ''localtime''.<br />
<br />
* Not recommended: Set Arch Linux to ''localtime'' and disable any time-related services, like [[NTPd]]. This will let Windows take care of hardware clock corrections, and you will need to remember to boot into Windows at least two times a year (in Spring and Autumn) when [[Wikipedia:Daylight saving time|DST]] kicks in. So please do not ask on the forums why the clock is one hour behind or ahead if you usually go for days or weeks without booting into Windows.<br />
<br />
== See also ==<br />
<br />
* [https://bbs.archlinux.org/viewtopic.php?id=140049 Booting Windows from a desktop shortcut]<br />
<br />
<br />
== Use Cases ==<br />
<br />
=== MBR-BIOS HP Laptop ===<br />
<br />
==== Special notes ====<br />
* This method uses parted exclusively for all partitioning<br />
* It's assumed (if your BIOS permits) that you have it set to IDE or SATA/RAID mode, and not AHCI mode<br />
* In this scenario, the user additionally saved 1GB of free space at the beginning of the drive for later use (later conversion to a BIOS boot partition, GPT partition table, GRUB installations, anything really)<br />
* FYI, the laptop was an Intel i3-generation HP 4520s#aba xt988ut<br />
<br />
==== Begin ====<br />
Get yourself a readied drive. You might want to check it out with smartmontools first; once you trust it and it's ready to be wiped, follow the exact procedure below (with as little deviation as possible, except for partition sizes).<br />
<br />
Note: if you're new to parted, pay close attention to the B, MB and GB figures; those are intended to aid in aligning your partitions correctly. This setup may vary on yours, but it's highly recommended to keep trying until they are aligned properly first.<br />
<br />
* Using parted, create the following partitions, adjusting start points where you see fit. If you do not want the free space, adjust the first start point to 1MB or so (success is not guaranteed with any partition adjustments in this scenario)<br />
<br />
{{bc|# parted<br />
mkpart ntfs<br />
start point 1000000B<br />
end 251000000B<br />
<br />
mkpart ext3 (for /)<br />
start 251000000B<br />
end 281000MB (use this MB figure to align properly (may vary for you))<br />
<br />
mkpart ext3 (for /home)<br />
start 281000MB (use this MB figure to align properly (may vary for you))<br />
end 531GB (use this GB figure to align properly (may vary for you))<br />
<br />
mkpart fat32 (for /media/shared)<br />
start 531GB (use this GB figure to align properly (may vary for you))<br />
end 750GB<br />
<br />
set 1 boot on<br />
}}<br />
<br />
Quit parted.<br />
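Note that the byte figures above (e.g. 1000000B) are plain decimal counts, not 1 MiB multiples, which is why parted may warn about alignment. Whether a start offset sits on the usual 1 MiB (1048576-byte) boundary can be checked with plain shell arithmetic:<br />

```shell
# report whether a byte offset falls on a 1 MiB (1048576-byte) boundary
is_aligned() {
    if [ $(( $1 % 1048576 )) -eq 0 ]; then
        echo aligned
    else
        echo unaligned
    fi
}

is_aligned 1000000    # the 1000000B start used above -> prints: unaligned
is_aligned 1048576    # exactly 1 MiB                 -> prints: aligned
```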
<br />
* reboot with the Windows CD<br />
* carefully selecting ONLY the first NTFS-type partition you created above, install Windows (do not worry, it will not create the second boot partition now; if it does, something went wrong, try again)<br />
* reboot into the Arch DVD<br />
* follow the Arch install instructions per standard installation, continuing up to the partition creation point, which you will now SKIP over.<br />
* Start off by formatting with the filesystem of your choice<br />
# mkfs.ext3 /dev/sda2<br />
# mkfs.ext3 /dev/sda3<br />
* use the Arch install wiki to mount drives, adjust mirrors and connect the network (the easiest way for now is to connect a wired LAN cable on a network with DHCP available, then run dhcpcd)<br />
* Follow arch install wiki for the following per norm;<br />
** install the base package group, genfstab, arch-chroot, set your hostname, set localtime and run locale-gen, generate the initramfs, set the root password, then continue<br />
*Install grub<br />
# pacman -S grub<br />
* back up the boot record to a file named "mbr-backup" in your root directory<br />
# dd if=/dev/sda of=/mbr-backup bs=512 count=1<br />
* Install GRUB onto your disk sdX, replacing sdX with the disk you are working with (be careful).<br />
# grub-install --recheck /dev/sda<br />
* run os-prober to find your Windows install<br />
# pacman -S os-prober<br />
# os-prober<br />
* You can also edit your default boot order here in /etc/default/grub by changing the order from 0 to 1, 2, etc.<br />
* now generate a new GRUB boot config to finish things up<br />
# grub-mkconfig -o /boot/grub/grub.cfg<br />
* Success! Now reboot.<br />
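If a later boot loader experiment goes wrong, the mbr-backup taken above can be restored. Restoring only the first 446 bytes puts the boot code back without touching the partition table that occupies the rest of the sector. Demonstrated here on a scratch image file instead of the real /dev/sda:<br />

```shell
disk=disk.img                                                        # stand-in for /dev/sda
dd if=/dev/urandom of="$disk" bs=512 count=4 2>/dev/null             # fake disk contents
dd if="$disk" of=mbr-backup bs=512 count=1 2>/dev/null               # backup, as done above
dd if=/dev/zero of="$disk" bs=512 count=1 conv=notrunc 2>/dev/null   # "damage" sector 0
# restore the boot code only (446 bytes), leaving the partition table bytes alone
dd if=mbr-backup of="$disk" bs=446 count=1 conv=notrunc 2>/dev/null
```

On a real disk the restore would be dd if=/mbr-backup of=/dev/sda bs=446 count=1, run with extreme care.<br />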
<br />
==== Alternative GRUB install ====<br />
* Instead of overriding the Windows boot record with GRUB as we just did above, you can alternatively use the Windows boot loader to point to GRUB, ONLY if you install GRUB to the partition, and not the drive boot record, i.e. /dev/sdX1 instead of /dev/sdX<br />
# grub-install --target=i386-pc --grub-setup=/bin/true --recheck --debug /dev/sda<br />
* continue with bcdedit in Windows to point a second boot entry to the GRUB partition. Good luck with that.</div>Wolfdogghttps://wiki.archlinux.org/index.php?title=Dual_boot_with_Windows&diff=398164Dual boot with Windows2015-09-04T00:08:02Z<p>Wolfdogg: /* Begin */</p>
<hr />
<div>[[Category:Boot process]]<br />
# mkfs.ext3 /dev/sda2<br />
# mkfs.ext3 /dev/sda3<br />
* use arch install wiki to mount drives, adjust mirrors, connect network (easiest way for now is to connect hard lan cable on dhcp avail network, then <br />
run # dhcpcd)<br />
* Follow arch install wiki for the following per norm;<br />
** install pac base, genfstab, arch-chroot, set your hostname, set localtime and locale-gen, compile kernel, set password, then continue<br />
*Install grub<br />
# pacman -S grub<br />
* backup the boot record to a file named "mbr-backup" on your root<br />
# dd if=/dev/sda of=/mbr-backup bs=512 count=1<br />
* Install grub onto your disk sdx, replacing sdx with the disk you are working with(be careful).<br />
# grub-install --recheck /dev/sda<br />
* run os-prober to find your windows install<br />
# pacman -S os-prober<br />
# os-prober<br />
*You can also edit your default boot order here in /etc/default/grub by changing the order from 0 to 1,2, etc..<br />
* now compile a new grub boot config to finish things up..<br />
# grub-mkconfig -o /boot/grub/grub.cfg<br />
* Success! Now reboot.<br />
<br />
==== Alternative grub install ====<br />
* Instead of overriding windows boot partition with grub as we just did above, you can alternatively use the windows boot partition to point to grub, ONLY if you install GRUB to the partition, and not the drive boot record. i.e. dev/sdx1 instead of /dev/sdx<br />
# grub-install --target=i386-pc --grub-setup=/bin/true --recheck --debug /dev/sda<br />
* continue with bcedit in windows to point a second boot to the grubbed partition.. good luck with that..</div>Wolfdogghttps://wiki.archlinux.org/index.php?title=Dual_boot_with_Windows&diff=398163Dual boot with Windows2015-09-04T00:03:21Z<p>Wolfdogg: /* Use Cases */</p>
<hr />
<div>[[Category:Boot process]]<br />
[[Category:Getting and installing Arch]]<br />
[[es:Windows and Arch dual boot]]<br />
[[ja:Windows と Arch のデュアルブート]]<br />
[[ru:Windows and Arch dual boot]]<br />
[[sk:Windows and Arch dual boot]]<br />
[[zh-cn:Windows and Arch dual boot]]<br />
This is a simple article detailing different methods of Arch/Windows coexistence.<br />
<br />
== Important information ==<br />
<br />
=== Windows UEFI vs BIOS limitations ===<br />
<br />
Microsoft imposes limitations on which firmware boot mode and partitioning style can be supported based on the version of Windows used:<br />
<br />
* '''Windows XP''' both '''x86 32-bit''' and '''x86_64''' (also called x64) (RTM and all Service Packs) versions do not support booting in UEFI mode (IA32 or x86_64) from any disk (MBR or GPT) OR in BIOS mode from GPT disk. They support only BIOS boot and only from MBR/msdos disk.<br />
* '''Windows Vista''' or '''7''' '''x86 32-bit''' (RTM and all Service Packs) versions support booting in BIOS mode from MBR/msdos disks only, not from GPT disks. They do not support x86_64 UEFI or IA32 (x86 32-bit) UEFI boot. They support only BIOS boot and only from MBR/msdos disk.<br />
* '''Windows Vista RTM x86_64''' (only RTM) version support booting in BIOS mode from MBR/msdos disks only, not from GPT disks. It does not support x86_64 UEFI or IA32 (x86 32-bit) UEFI boot. It supports only BIOS boot and only from MBR/msdos disk.<br />
* '''Windows Vista''' (SP1 and above, not RTM) and '''Windows 7''' '''x86_64''' versions support booting in x86_64 UEFI mode from GPT disk only, OR in BIOS mode from MBR/msdos disk only. They do not support IA32 (x86 32-bit) UEFI boot from GPT/MBR disk, x86_64 UEFI boot from MBR/msdos disk, or BIOS boot from GPT disk.<br />
* '''Windows 8/8.1 x86 32-bit''' support booting in IA32 UEFI mode from GPT disk only, OR in BIOS mode from MBR/msdos disk only. They do not support x86_64 UEFI boot from GPT/MBR disk, x86_64 UEFI boot from MBR/msdos disk, or BIOS boot from GPT disk. On market, the only systems known to ship with IA32 (U)EFI are some old Intel Macs (pre-2010 models?) and Intel Atom System-on-Chip (Clover trail and Bay Trail) Windows Tablets. in which it boots ONLY in IA32 UEFI mode and ONLY from GPT disk.<br />
* '''Windows 8/8.1''' '''x86_64''' versions support booting in x86_64 UEFI mode from GPT disk only, OR in BIOS mode from MBR/msdos disk only. They do not support IA32 UEFI boot, x86_64 UEFI boot from MBR/msdos disk, or BIOS boot from GPT disk.<br />
<br />
In case of pre-installed Systems:<br />
<br />
* All systems pre-installed with Windows XP, Vista or 7 32-bit, irrespective of Service Pack level, bitness, edition (SKU) or presence of UEFI support in firmware, boot in BIOS-MBR mode by default.<br />
* MOST systems pre-installed with Windows 7 x86_64, irrespective of Service Pack level or edition (SKU), boot in BIOS-MBR mode by default. Very few recent systems pre-installed with Windows 7 are known to boot in x86_64 UEFI-GPT mode by default.<br />
* ALL systems pre-installed with Windows 8/8.1 boot in UEFI-GPT mode. The firmware bitness matches the bitness of Windows, i.e. x86_64 Windows 8/8.1 boots in x86_64 UEFI mode and 32-bit Windows 8/8.1 boots in IA32 UEFI mode.<br />
<br />
The best way to detect the boot mode of Windows is to do the following (info from [http://www.eightforums.com/tutorials/29504-bios-mode-see-if-windows-boot-uefi-legacy-mode.html here]):<br />
<br />
* Boot into Windows<br />
* Press Win key and 'R' to start the Run dialog<br />
* In the Run dialog type "msinfo32" and press Enter<br />
* In the '''System Information''' window, select '''System Summary''' on the left and check the value of the '''BIOS mode''' item on the right<br />
* If the value is '''UEFI''', Windows boots in UEFI-GPT mode. If the value is '''Legacy''', Windows boots in BIOS-MBR mode.<br />
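The boot mode of the Arch live environment can be checked in a similar spirit from the shell; a minimal sketch, relying on the fact that the kernel creates {{ic|/sys/firmware/efi}} only when it was started by UEFI firmware:

```shell
# Print how the *running* Linux system was booted.
# /sys/firmware/efi exists only when the kernel was started in UEFI mode.
if [ -d /sys/firmware/efi ]; then
    echo "UEFI"
else
    echo "BIOS"
fi
```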
<br />
In general, Windows forces the partitioning type to match the firmware mode used, i.e. if Windows is booted in UEFI mode, it can be installed only to a GPT disk; if it is booted in Legacy BIOS mode, it can be installed only to an MBR (also called '''msdos''' style partitioning) disk. This is a limitation enforced by the Windows installer, and as of April 2014 there is no officially (Microsoft) supported way of installing Windows in a UEFI-MBR or BIOS-GPT configuration. Thus Windows supports only UEFI-GPT or BIOS-MBR configurations.<br />
<br />
Such a limitation is not enforced by the Linux kernel, but can depend on which bootloader is used and/or how it is configured. The Windows limitation should be considered if you wish to boot Windows and Linux from the same disk, since the bootloader installation procedure depends on the firmware type and disk partitioning configuration. In cases where Windows and Linux dual boot from the same disk, it is advisable to follow the method used by Windows, i.e. either go for UEFI-GPT boot or BIOS-MBR boot. See http://support.microsoft.com/kb/2581408 for more info.<br />
<br />
=== Install media limitations ===<br />
<br />
Intel Atom System-on-Chip tablets (Clover Trail and Bay Trail) provide only IA32 UEFI firmware WITHOUT Legacy BIOS (CSM) support (unlike most x86_64 UEFI systems), due to Microsoft Connected Standby Guidelines for OEMs. Due to the lack of Legacy BIOS support in these systems, and the lack of 32-bit UEFI boot in the official Arch install ISO or the Archboot ISO (as of April 2014), these install media cannot boot on Atom SoC tablets pre-installed with Windows 8/8.1 32-bit.<br />
<br />
=== Bootloader UEFI vs BIOS limitations ===<br />
<br />
Most Linux bootloaders installed for one firmware type cannot launch or chainload bootloaders of the other firmware type. That is, if Arch is installed in UEFI-GPT or UEFI-MBR mode on one disk and Windows is installed in BIOS-MBR mode on another disk, the UEFI bootloader used by Arch cannot chainload the BIOS-installed Windows on the other disk. Similarly, if Arch is installed in BIOS-MBR or BIOS-GPT mode on one disk and Windows is installed in UEFI-GPT mode on another disk, the BIOS bootloader used by Arch cannot chainload the UEFI-installed Windows on the other disk. <br />
<br />
The only exceptions to this are grub(2) on Apple Macs, where an EFI-installed grub(2) can boot a BIOS-installed OS via the '''appleloader''' command (this does not work on non-Apple systems), and rEFInd, which technically supports booting a legacy BIOS OS from UEFI systems, but [http://rodsbooks.com/refind/using.html#legacy does not always work in non-Apple UEFI systems] as per its author Rod Smith. <br />
<br />
However, if Arch is installed in BIOS-GPT mode on one disk and Windows is installed in BIOS-MBR mode on another disk, then the BIOS bootloader used by Arch CAN boot the Windows on the other disk, provided the bootloader itself has the ability to chainload from another disk. <br />
<br />
{{Note|If Arch and Windows are dual-booting from the same disk, then Arch SHOULD follow the same firmware boot mode and partitioning combination used by the Windows installed on that disk.}}<br />
<br />
=== UEFI Secure Boot ===<br />
<br />
All pre-installed Windows 8/8.1 systems boot in UEFI-GPT mode and have UEFI Secure Boot enabled by default (which can be manually disabled by the user), with Legacy BIOS support (CSM) disabled by default (which can be manually enabled by the user, if the firmware supports it). This is mandated by Microsoft for all OEM pre-installed systems.<br />
<br />
Arch Linux install media currently supports Secure Boot, but it requires some manual steps by the user to [[UEFI#Secure_Boot|set up the HashTool while booting]]. Therefore it is advisable to disable UEFI Secure Boot in the firmware setup before attempting to boot Arch Linux. Windows 8/8.1 SHOULD continue to boot fine even if Secure Boot is disabled. <br />
<br />
The only issue with disabling UEFI Secure Boot support is that it requires physical access to the system to disable the Secure Boot option in the firmware setup, as Microsoft has explicitly forbidden the presence of any method to remotely or programmatically (from within the OS) disable Secure Boot in all Windows 8/8.1 pre-installed systems.<br />
<br />
=== Fast Start-Up ===<br />
<br />
Fast Start-Up is a feature in Windows 8 that hibernates the computer rather than actually shutting it down, in order to speed up boot times. You can lose data if you dual boot into another OS and change files while Windows is hibernated. Even if you do not intend to share filesystems, the EFI System Partition is likely to be damaged on an EFI system. Therefore, you should disable Fast Start-Up, as described [http://www.eightforums.com/tutorials/6320-fast-startup-turn-off-windows-8-a.html here], before you install Linux on any computer that uses Windows 8.<br />
<br />
{{Pkg|ntfs-3g}} added a [http://sourceforge.net/p/ntfs-3g/ntfs-3g/ci/559270a8f67c77a7ce51246c23d2b2837bcff0c9/ safe-guard] to prevent read-write mounting of hibernated disks, but the NTFS driver within the Linux kernel has no such safeguard.<br />
<br />
=== Windows filenames limitations ===<br />
<br />
Windows is limited to filepaths being shorter than [http://blogs.msdn.com/b/bclteam/archive/2007/02/13/long-paths-in-net-part-1-of-3-kim-hamilton.aspx 260 characters].<br />
<br />
Windows also puts [http://msdn.microsoft.com/en-us/library/aa365247(VS.85).aspx#naming_conventions certain characters off limits] in filenames for reasons that run all the way back to DOS:<br />
<br />
* < (less than)<br />
* > (greater than)<br />
* : (colon)<br />
* " (double quote)<br />
* / (forward slash)<br />
* \ (backslash)<br />
* | (vertical bar or pipe)<br />
* ? (question mark)<br />
* * (asterisk)<br />
<br />
These are limitations of Windows and not of NTFS: any other OS using the NTFS partition will be fine. Windows will fail to detect these files, and running {{ic|chkdsk}} will most likely cause them to be deleted, which can lead to data loss.<br />
<br />
'''NTFS-3G''' applies Windows restrictions to new file names through the [http://www.tuxera.com/community/ntfs-3g-manual/#4 windows_filenames] option (see [[fstab]]).<br />
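As an illustration of the restriction above, a small shell helper (the function name here is made up, not part of any package) can scan a directory for names Windows would reject, before you copy files onto a shared NTFS partition:

```shell
# list_invalid_names DIR: print paths under DIR whose file names contain
# characters Windows forbids (< > : " | ? * or backslash).
# A forward slash can never occur inside a Unix file name anyway.
list_invalid_names() {
    find "$1" -name '*[<>:"|?*\\]*' -print
}

# Example: scan the current directory tree.
list_invalid_names .
```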
<br />
== Installation ==<br />
<br />
The recommended way to set up a Linux/Windows dual-booting system is to first install Windows, using only part of the disk for its partitions. When you have finished the Windows setup, boot into the Linux install environment, where you can create additional partitions for Linux while leaving the existing Windows partitions untouched. The Windows installation will create the EFI System Partition, which can be used by your Linux bootloader.<br />
<br />
=== BIOS systems ===<br />
<br />
==== Using a Linux boot loader ====<br />
<br />
You may use [[GRUB#Dual-booting|GRUB]] or [[Syslinux#Chainloading|Syslinux]].<br />
<br />
==== Using Windows boot loader ====<br />
<br />
With this setup, the Windows bootloader loads GRUB, which then boots Arch. <br />
<br />
===== Windows Vista/7/8/8.1 boot loader =====<br />
<br />
The following section contains excerpts from http://www.iceflatline.com/2009/09/how-to-dual-boot-windows-7-and-linux-using-bcdedit/.<br />
<br />
{{Accuracy|Using an ext3-formatted /boot partition, the Windows bootloader works just fine}}<br />
<br />
In order to have the Windows boot loader see the Linux partition, one of the Linux partitions created needs to be FAT32 (in this case, {{ic|/dev/sda3}}). The remainder of the setup is similar to a typical installation. Some documents state that the partition loaded by the Windows boot loader must be a primary partition, but I have used this without problem on an extended partition.<br />
<br />
* When installing the GRUB boot loader, install it on your {{ic|/boot}} partition rather than the MBR. {{Note|For instance, my {{ic|/boot}} partition is {{ic|/dev/sda5}}. So I installed GRUB at {{ic|/dev/sda5}} instead of {{ic|/dev/sda}}. For help on doing this, see [[GRUB#Install to partition or partitionless disk]]}}<br />
<br />
* Under Linux make a copy of the boot info by typing the following at the command shell:<br />
<br />
my_windows_part=/dev/sda3<br />
my_boot_part=/dev/sda5<br />
mkdir /media/win<br />
mount $my_windows_part /media/win<br />
dd if=$my_boot_part of=/media/win/linux.bin bs=512 count=1<br />
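Before rebooting into Windows, it is worth sanity-checking the copied file. The helper below is a sketch (the function name is invented for illustration): a valid boot sector is exactly 512 bytes and ends with the signature bytes {{ic|55 aa}}.

```shell
# check_bootsector FILE: succeed only if FILE is 512 bytes long and its
# last two bytes are the 0x55 0xaa boot-sector signature.
check_bootsector() {
    size=$(wc -c < "$1")
    sig=$(dd if="$1" bs=1 skip=510 count=2 2>/dev/null | od -An -tx1 | tr -d ' \n')
    [ "$size" -eq 512 ] && [ "$sig" = "55aa" ]
}
```

For example, {{ic|check_bootsector /media/win/linux.bin && echo ok}} should print "ok" for a correctly copied boot sector.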
<br />
* Boot into Windows; you should be able to see the FAT32 partition. Copy the {{ic|linux.bin}} file to {{ic|C:\}}. Now run '''cmd''' with administrator privileges (navigate to ''Start > All Programs > Accessories'', right-click on ''Command Prompt'' and select ''Run as administrator''):<br />
<br />
bcdedit /create /d "Linux" /application BOOTSECTOR<br />
<br />
* BCDEdit will return an alphanumeric identifier for this entry that I will refer to as {ID} in the remaining steps. You will need to replace {ID} by the actual returned identifier. An example of {ID} is {d7294d4e-9837-11de-99ac-f3f3a79e3e93}. <br />
<br />
bcdedit /set {ID} device partition=c:<br />
bcdedit /set {ID} path \linux.bin<br />
bcdedit /displayorder {ID} /addlast<br />
bcdedit /timeout 30<br />
<br />
Reboot and enjoy. In my case I'm using the Windows boot loader so that I can map my Dell Precision M4500's second power button to boot Linux instead of Windows.<br />
<br />
===== Windows 2000/XP boot loader =====<br />
<br />
For information on this method see http://www.geocities.com/epark/linux/grub-w2k-HOWTO.html. I do not believe there are any distinct advantages of this method over the Linux boot loader; you will still need a {{ic|/boot}} partition, and this one is arguably more difficult to set up.<br />
<br />
=== UEFI systems ===<br />
<br />
Both [[systemd-boot]] and [[rEFInd]] autodetect '''Windows Boot Manager''' {{ic|\EFI\Microsoft\Boot\bootmgfw.efi}} and show it in their boot menu, so there is no manual config required.<br />
<br />
For [[GRUB]] follow [[GRUB#Windows installed in UEFI-GPT Mode menu entry]].<br />
<br />
Syslinux (as of versions 6.02 and 6.03-pre9) and ELILO do not support chainloading other EFI applications, so they cannot be used to chainload {{ic|\EFI\Microsoft\Boot\bootmgfw.efi}}.<br />
<br />
Computers that come with newer versions of Windows often have [[UEFI#Secure_Boot|secure boot]] enabled. You will need to take extra steps to either disable secure boot or to make your installation media compatible with secure boot.<br />
<br />
=== Troubleshooting ===<br />
<br />
==== Couldn't create a new partition or locate an existing one ====<br />
<br />
The USB stick for installing Windows 8.1 seems to need an MBR partition table (not GPT); otherwise the installation gets confused and prints something like "Couldn't create a new partition or locate an existing one", even though the partitions were created.<br />
<br />
== Time standard ==<br />
<br />
* Recommended: Set both Arch Linux and Windows to use UTC, following [[Time#UTC in Windows]]. Also, be sure to prevent Windows from synchronizing the time on-line, because the hardware clock will default back to ''localtime''.<br />
<br />
* Not recommended: Set Arch Linux to ''localtime'' and disable any time-related services, like [[NTPd]]. This will let Windows take care of hardware clock corrections, and you will need to remember to boot into Windows at least twice a year (in Spring and Autumn) when [[Wikipedia:Daylight saving time|DST]] kicks in. So please do not ask on the forums why the clock is one hour behind or ahead if you usually go for days or weeks without booting into Windows.<br />
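For the recommended UTC route, the Windows-side switch referenced by [[Time#UTC in Windows]] is the {{ic|RealTimeIsUniversal}} registry value; a commonly used {{ic|.reg}} fragment looks like this (apply with care, and check the linked page for caveats on your Windows version):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\TimeZoneInformation]
"RealTimeIsUniversal"=dword:00000001
```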
<br />
== See also ==<br />
<br />
* [https://bbs.archlinux.org/viewtopic.php?id=140049 Booting Windows from a desktop shortcut]<br />
<br />
<br />
== Use Cases ==<br />
<br />
=== MBR-BIOS HP Laptop ===<br />
<br />
==== Special notes ====<br />
* This method uses parted exclusively for all partitioning<br />
* It is assumed (if your BIOS permits) that the controller is set to IDE or SATA/RAID mode, not AHCI<br />
* In this scenario, the user additionally saved 1GB of free space at the beginning of the drive for later use (later conversion to a BIOS boot partition, a GPT partition table, GRUB installations, anything really)<br />
* For reference, the laptop was an HP 4520s (#aba xt988ut) with an Intel Core i3<br />
<br />
==== Begin ====<br />
Start with a drive that is ready to use. You might want to check it with smartmontools first; once you trust it and it is ready to be wiped, follow the procedure below with as little deviation as possible, except for partition sizes.<br />
<br />
Note: if you are new to parted, pay close attention to the B, MB and GB figures; they are intended to help you align your partitions correctly. The exact values may vary on your system, but it is highly recommended to keep trying until the partitions are properly aligned. <br />
<br />
*Using parted, create the following partitions, adjusting the start points where you see fit. If you do not want the free space, adjust the first start point to 1000000B (success is not guaranteed for any partition adjustments in this scenario)<br />
<br />
{{bc|# parted /dev/sda<br />
(parted) mklabel msdos<br />
(parted) mkpart primary ntfs 1000000B 251000000B<br />
(parted) mkpart primary ext3 251000000B 281000MB<br />
(parted) mkpart primary ext3 281000MB 531GB<br />
(parted) mkpart primary fat32 531GB 750GB<br />
(parted) set 1 boot on<br />
(parted) quit<br />
}}<br />
<br />
The first partition is for Windows; the second and third are for {{ic|/}} and {{ic|/home}}, and the fat32 partition is for {{ic|/media/shared}}. The MB and GB end figures are the ones that aligned properly in this scenario and may vary for you.<br />
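Since parted's {{ic|MB}} and {{ic|GB}} units are decimal (10^6 and 10^9 bytes), it can help to normalize the boundaries to bytes and confirm that each partition starts exactly where the previous one ends; a small sketch using the figures from this scenario:

```shell
# Partition boundaries from the scenario above, normalized to bytes
# (parted: 1 MB = 10^6 B, 1 GB = 10^9 B).
bounds="1000000 251000000 281000000000 531000000000 750000000000"
prev=""
for b in $bounds; do
    if [ -n "$prev" ]; then
        # Consecutive boundaries must be strictly increasing; adjacent
        # partitions share a boundary, so there are no gaps or overlaps.
        [ "$prev" -lt "$b" ] || { echo "boundaries out of order"; exit 1; }
        echo "partition from ${prev}B to ${b}B ($(( (b - prev) / 1000000 )) MB)"
    fi
    prev=$b
done
```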
<br />
* Reboot with the Windows CD<br />
* Install Windows, carefully selecting ONLY the first NTFS-type partition you created above<br />
* Reboot into the Arch DVD<br />
* Follow the standard Arch installation instructions up to the partition creation step, which you will now SKIP<br />
* Format the remaining partitions with the filesystem of your choice<br />
# mkfs.ext3 /dev/sda2<br />
# mkfs.ext3 /dev/sda3<br />
* Use the Arch install wiki to mount the drives, adjust mirrors and connect to the network (the easiest way for now is to connect a wired LAN cable on a network with DHCP available, then run {{ic|dhcpcd}})<br />
* Follow the Arch install wiki for the following, per the norm:<br />
** install the base packages, generate the fstab with genfstab, arch-chroot into the new system, set your hostname, set the local time and run locale-gen, generate the initramfs, set the root password, then continue<br />
* Install GRUB<br />
# pacman -S grub<br />
* Back up the boot record to a file named "mbr-backup" on your root<br />
# dd if=/dev/sda of=/mbr-backup bs=512 count=1<br />
* Install GRUB onto your disk, replacing {{ic|/dev/sda}} with the disk you are working with (be careful)<br />
# grub-install --recheck /dev/sda<br />
* Run os-prober to find your Windows install<br />
# pacman -S os-prober<br />
# os-prober<br />
* You can also set the default boot entry here in {{ic|/etc/default/grub}} by changing {{ic|GRUB_DEFAULT}} from 0 to 1, 2, etc.<br />
* Now generate a new GRUB boot config to finish things up<br />
# grub-mkconfig -o /boot/grub/grub.cfg<br />
* Success! Now reboot.<br />
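For reference, the default-entry setting mentioned above lives in {{ic|/etc/default/grub}}; a minimal sketch of the relevant lines (the number to use depends on the menu order that grub-mkconfig generates):

```
# 0 selects the first menu entry (usually Arch); increase it to default to
# another entry, e.g. the Windows entry that os-prober found.
GRUB_DEFAULT=0
GRUB_TIMEOUT=5
```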
<br />
==== Alternative grub install ====<br />
* Instead of overwriting the Windows boot record with GRUB as we just did above, you can alternatively have the Windows boot loader point to GRUB, ONLY if you install GRUB to the partition and not the drive's boot record, i.e. {{ic|/dev/sdx1}} instead of {{ic|/dev/sdx}}<br />
# grub-install --target=i386-pc --grub-setup=/bin/true --recheck --debug /dev/sda2<br />
* Continue with bcdedit in Windows to add a second boot entry pointing to the GRUB partition, as in the Windows Vista/7/8/8.1 boot loader section above</div>Wolfdogghttps://wiki.archlinux.org/index.php?title=Dual_boot_with_Windows&diff=398162Dual boot with Windows2015-09-03T23:56:43Z<p>Wolfdogg: /* MBR-BIOS HP Laptop */</p>
<hr />
<div>[[Category:Boot process]]<br />
[[Category:Getting and installing Arch]]<br />
[[es:Windows and Arch dual boot]]<br />
[[ja:Windows と Arch のデュアルブート]]<br />
[[ru:Windows and Arch dual boot]]<br />
[[sk:Windows and Arch dual boot]]<br />
[[zh-cn:Windows and Arch dual boot]]<br />
This is a simple article detailing different methods of Arch/Windows coexistence.<br />
<br />
== Important information ==<br />
<br />
=== Windows UEFI vs BIOS limitations ===<br />
<br />
Microsoft imposes limitations on which firmware boot mode and partitioning style can be supported based on the version of Windows used:<br />
<br />
* '''Windows XP''' both '''x86 32-bit''' and '''x86_64''' (also called x64) (RTM and all Service Packs) versions do not support booting in UEFI mode (IA32 or x86_64) from any disk (MBR or GPT) OR in BIOS mode from GPT disk. They support only BIOS boot and only from MBR/msdos disk.<br />
* '''Windows Vista''' or '''7''' '''x86 32-bit''' (RTM and all Service Packs) versions support booting in BIOS mode from MBR/msdos disks only, not from GPT disks. They do not support x86_64 UEFI or IA32 (x86 32-bit) UEFI boot. They support only BIOS boot and only from MBR/msdos disk.<br />
* '''Windows Vista RTM x86_64''' (only RTM) version support booting in BIOS mode from MBR/msdos disks only, not from GPT disks. It does not support x86_64 UEFI or IA32 (x86 32-bit) UEFI boot. It supports only BIOS boot and only from MBR/msdos disk.<br />
* '''Windows Vista''' (SP1 and above, not RTM) and '''Windows 7''' '''x86_64''' versions support booting in x86_64 UEFI mode from GPT disk only, OR in BIOS mode from MBR/msdos disk only. They do not support IA32 (x86 32-bit) UEFI boot from GPT/MBR disk, x86_64 UEFI boot from MBR/msdos disk, or BIOS boot from GPT disk.<br />
* '''Windows 8/8.1 x86 32-bit''' support booting in IA32 UEFI mode from GPT disk only, OR in BIOS mode from MBR/msdos disk only. They do not support x86_64 UEFI boot from GPT/MBR disk, x86_64 UEFI boot from MBR/msdos disk, or BIOS boot from GPT disk. On market, the only systems known to ship with IA32 (U)EFI are some old Intel Macs (pre-2010 models?) and Intel Atom System-on-Chip (Clover trail and Bay Trail) Windows Tablets. in which it boots ONLY in IA32 UEFI mode and ONLY from GPT disk.<br />
* '''Windows 8/8.1''' '''x86_64''' versions support booting in x86_64 UEFI mode from GPT disk only, OR in BIOS mode from MBR/msdos disk only. They do not support IA32 UEFI boot, x86_64 UEFI boot from MBR/msdos disk, or BIOS boot from GPT disk.<br />
<br />
In case of pre-installed Systems:<br />
<br />
* All systems pre-installed with Windows XP, Vista or 7 32-bit, irrespective of Service Pack level, bitness, edition (SKU)or presence of UEFI support in firmware, boot in BIOS-MBR mode by default.<br />
* MOST of the systems pre-installed with Windows 7 x86_64, irrespective of Service Pack level, bitness or edition (SKU), boot in BIOS-MBR mode by default. Very few recent systems pre-installed with Windows 7 are known to boot in x86_64 UEFI-GPT mode by default.<br />
* ALL systems pre-installed with Windows 8/8.1 boot in UEFI-GPT mode. The firmware bitness matches the bitness of Windows, ie. x86_64 Windows 8/8.1 boot in x86_64 UEFI mode and 32-bit Windows 8/8.1 boot in IA32 UEFI mode.<br />
<br />
The best way to detect the boot mode of Windows is to do the following (info from [http://www.eightforums.com/tutorials/29504-bios-mode-see-if-windows-boot-uefi-legacy-mode.html here]):<br />
<br />
* Boot into Windows<br />
* Press Win key and 'R' to start the Run dialog<br />
* In the Run dialog type "msinfo32" and press Enter<br />
* In the '''System Information''' windows, select '''System Summary''' on the left and check the value of '''BIOS mode''' item on the right<br />
* If the value is '''UEFI''', Windows boots in UEFI-GPT mode. If the value is '''Legacy''', Windows boots in BIOS-MBR mode.<br />
<br />
In general, Windows forces type of partitioning depending on the firmware mode used, i.e. if Windows is booted in UEFI mode, it can be installed only to a GPT disk. If the Windows is booted in Legacy BIOS mode, it can be installed only to a MBR (also called '''msdos''' style partitioning) disk. This is a limitation enforced by Windows installer, and as of April 2014 there is no officially (Microsoft) supported way of installing Windows in UEFI-MBR or BIOS-GPT configuration. Thus Windows only supports either UEFI-GPT boot or BIOS-MBR configuration.<br />
<br />
Such a limitation is not enforced by the Linux kernel, but can depend on which bootloader is used and/or how the bootloader is configured. The Windows limitation should be considered if the user wishes to boot Windows and Linux from the same disk, since installation procedure of bootloader depends on the firmware type and disk partitioning configuration. In case where Windows and Linux dual boot from the same disk, it is advisable to follow the method used by Windows, ie. either go for UEFI-GPT boot or BIOS-MBR boot. See http://support.microsoft.com/kb/2581408 for more info.<br />
<br />
=== Install media limitations ===<br />
<br />
Intel Atom System-on-Chip Tablets (Clover trail and Bay Trail) provide only IA32 UEFI firmware WITHOUT Legacy BIOS (CSM) support (unlike most of the x86_64 UEFI systems), due to Microsoft Connected Standby Guidelines for OEMs. Due to lack of Legacy BIOS support in these systems, and the lack of 32-bit UEFI boot in Arch Official Install ISO or the Archboot iso (as of April 2014), these install media cannot boot in Atom SoC tablets pre-installed with Windows 8/8.1 32-bit.<br />
<br />
=== Bootloader UEFI vs BIOS limitations ===<br />
<br />
Most of the linux bootloaders installed for one firmware type cannot launch or chainload bootloaders of other firmware type. That is, if Arch is installed in UEFI-GPT or UEFI-MBR mode in one disk and Windows is installed in BIOS-MBR mode in another disk, the UEFI bootloader used by Arch cannot chainload the BIOS installed Windows in the other disk. Similarly if Arch is installed in BIOS-MBR or BIOS-GPT mode in one disk and Windows is installed in UEFI-GPT in another disk , the BIOS bootloader used by Arch cannot chainload UEFI installed Windows in the other disk. <br />
<br />
The only exceptions to this are grub(2) in Apple Macs in which EFI installed grub(2) can boot BIOS installed OS via '''appleloader''' command (does not work in non-Apple systems), and rEFInd which technically supports booting legacy BIOS OS from UEFI systems, but [http://rodsbooks.com/refind/using.html#legacy does not always work in non-Apple UEFI systems] as per its author Rod Smith. <br />
<br />
However if Arch is installed in BIOS-GPT in one disk and Windows is installed in BIOS-MBR mode in another disk, then the BIOS bootloader used by Arch CAN boot the Windows in the other disk, if the bootloader itself has the ability to chainload from another disk. <br />
<br />
{{Note|If Arch and Windows are dual-booting from same disk, then Arch SHOULD follow the same firmware boot mode and partitioning combination used by the installed Windows in the disk.}}<br />
<br />
=== UEFI Secure Boot ===<br />
<br />
All pre-installed Windows 8/8.1 systems by default boot in UEFI-GPT mode and have UEFI Secure Boot enabled by default (which can be manually disabled by the user) and Legacy BIOS support (CSM) disabled by default (which can be manually enabled by the user, if the firmware supports it) in the firmware. This is mandated by Microsoft for all OEM pre-installed systems.<br />
<br />
Arch Linux install media currently supports Secure Boot but it requires some manual steps by the user to [[UEFI#Secure_Boot|setup the HashTool while booting]]. There it is advisable to disable UEFI Secure Boot in the firmware setup before attempting to boot Arch Linux. Windows 8/8.1 SHOULD continue to boot fine even if Secure boot is disabled. <br />
<br />
The only issue with regards to disabling UEFI Secure Boot support is that it requires physical access to the system to disable secure boot option in the firmware setup, as Microsoft has explicitly forbidden presence of any method to remotely or programmatically (from within OS) disable secure boot in all Windows 8/8.1 pre-installed systems<br />
<br />
=== Fast Start-Up ===<br />
<br />
Fast Start-Up is a feature in Windows 8 that hibernates the computer rather than actually shutting it down to speed up boot times. Your system can lose data if Windows hibernates and you dual boot into another OS and make changes to files. Even if you do not intend to share filesystems, the EFI System Partition is likely to be damaged on an EFI system. Therefore, you should disable Fast Startup, as described [http://www.eightforums.com/tutorials/6320-fast-startup-turn-off-windows-8-a.html here], before you install Linux on any computer that uses Windows 8.<br />
<br />
{{Pkg|ntfs-3g}} added a [http://sourceforge.net/p/ntfs-3g/ntfs-3g/ci/559270a8f67c77a7ce51246c23d2b2837bcff0c9/ safe-guard] to prevent read-write mounting of hibernated disks, but the NTFS driver within the Linux kernel has no such safeguard.<br />
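As a precaution you can check for a hibernation file before mounting a Windows partition read-write. Note this is only a heuristic sketch ({{ic|hiberfil.sys}} can exist even after a clean shutdown), and ntfs-3g's own safeguard remains the authoritative check:<br />

```shell
#!/bin/sh
# Heuristic: refuse read-write access to a mounted Windows system
# volume when a hibernation file is present. hiberfil.sys may exist
# even after a clean shutdown, so this only errs on the side of caution.
safe_to_write() {
    mnt="$1"    # mount point of the Windows system partition
    if [ -e "$mnt/hiberfil.sys" ]; then
        echo "no: $mnt/hiberfil.sys present, Windows may be hibernated"
        return 1
    fi
    echo "ok: no hibernation file found under $mnt"
}
```

When in doubt, mount the partition read-only ({{ic|mount -o ro}}) instead of relying on any heuristic.<br />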
<br />
=== Windows filenames limitations ===<br />
<br />
Windows is limited to filepaths being shorter than [http://blogs.msdn.com/b/bclteam/archive/2007/02/13/long-paths-in-net-part-1-of-3-kim-hamilton.aspx 260 characters].<br />
<br />
Windows also puts [http://msdn.microsoft.com/en-us/library/aa365247(VS.85).aspx#naming_conventions certain characters off limits] in filenames for reasons that run all the way back to DOS:<br />
<br />
* < (less than)<br />
* > (greater than)<br />
* : (colon)<br />
* " (double quote)<br />
* / (forward slash)<br />
* \ (backslash)<br />
* | (vertical bar or pipe)<br />
* ? (question mark)<br />
* * (asterisk)<br />
<br />
These are limitations of Windows and not of NTFS: any other OS using the NTFS partition will be fine. However, Windows will fail to handle such files, and running {{ic|chkdsk}} will most likely cause them to be deleted, which can lead to data loss.<br />
<br />
'''NTFS-3G''' applies Windows restrictions to new file names through the [http://www.tuxera.com/community/ntfs-3g-manual/#4 windows_filenames] option (see [[fstab]]).<br />
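To audit a shared partition for names Windows cannot handle before letting Windows touch it, a small sketch using {{ic|find}} (the forward slash cannot occur in Linux file names anyway, so the pattern covers the remaining characters):<br />

```shell
#!/bin/sh
# Print any file under the given directory whose name contains a
# character Windows forbids: < > : " \ | ? *
find_windows_invalid_names() {
    find "$1" -name '*[<>:"\\|?*]*'
}
# example: find_windows_invalid_names /media/win
```

Rename anything this reports before exposing the partition to Windows or running {{ic|chkdsk}} on it.<br />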
<br />
== Installation ==<br />
<br />
The recommended way to setup a Linux/Windows dual booting system is to first install Windows, only using part of the disk for its partitions. When you have finished the Windows setup, boot into the Linux install environment where you can create additional partitions for Linux while leaving the existing Windows partitions untouched. The Windows installation will create the EFI System Partition which can be used by your Linux bootloader.<br />
<br />
=== BIOS systems ===<br />
<br />
==== Using a Linux boot loader ====<br />
<br />
You may use [[GRUB#Dual-booting|GRUB]] or [[Syslinux#Chainloading|Syslinux]].<br />
<br />
==== Using Windows boot loader ====<br />
<br />
With this setup the Windows bootloader loads GRUB which then boots Arch. <br />
<br />
===== Windows Vista/7/8/8.1 boot loader =====<br />
<br />
The following section contains excerpts from http://www.iceflatline.com/2009/09/how-to-dual-boot-windows-7-and-linux-using-bcdedit/.<br />
<br />
{{Accuracy|Using an ext3-formatted /boot partition, the Windows bootloader works just fine}}<br />
<br />
In order to have the Windows boot loader see the Linux partition, one of the Linux partitions created needs to be FAT32 (in this case, {{ic|/dev/sda3}}). The remainder of the setup is similar to a typical installation. Some documents state that the partition being loaded by the Windows boot loader must be a primary partition but I have used this without problem on an extended partition.<br />
<br />
* When installing the GRUB boot loader, install it on your {{ic|/boot}} partition rather than the MBR. {{Note|For instance, my {{ic|/boot}} partition is {{ic|/dev/sda5}}. So I installed GRUB at {{ic|/dev/sda5}} instead of {{ic|/dev/sda}}. For help on doing this, see [[GRUB#Install to partition or partitionless disk]]}}<br />
<br />
* Under Linux make a copy of the boot info by typing the following at the command shell:<br />
<br />
my_windows_part=/dev/sda3<br />
my_boot_part=/dev/sda5<br />
mkdir /media/win<br />
mount $my_windows_part /media/win<br />
dd if=$my_boot_part of=/media/win/linux.bin bs=512 count=1<br />
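Before rebooting, you can sanity-check the copied {{ic|linux.bin}}: a valid BIOS boot sector is exactly 512 bytes and ends with the 0x55 0xAA boot signature. A sketch (assumes GRUB was installed to the partition as described above):<br />

```shell
#!/bin/sh
# Sanity-check a copied boot sector: exactly 512 bytes, ending in the
# 0x55 0xAA signature that BIOS-style boot code carries.
check_bootsector() {
    f="$1"
    size=$(wc -c < "$f")
    [ "$size" -eq 512 ] || { echo "bad size: $size bytes"; return 1; }
    sig=$(od -An -t x1 -j 510 -N 2 "$f" | tr -d ' \n')
    [ "$sig" = "55aa" ] || { echo "bad signature: $sig"; return 1; }
    echo "looks like a valid boot sector"
}
# example: check_bootsector /media/win/linux.bin
```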
<br />
* Boot into Windows and you should be able to see the FAT32 partition. Copy the linux.bin file to {{ic|C:\}}. Now run '''cmd''' with administrator privileges (navigate to ''Start > All Programs > Accessories'', right-click on ''Command Prompt'' and select ''Run as administrator''):<br />
<br />
bcdedit /create /d "Linux" /application BOOTSECTOR<br />
<br />
* BCDEdit will return an alphanumeric identifier for this entry that I will refer to as {ID} in the remaining steps. You will need to replace {ID} by the actual returned identifier. An example of {ID} is {d7294d4e-9837-11de-99ac-f3f3a79e3e93}. <br />
<br />
bcdedit /set {ID} device partition=c:<br />
bcdedit /set {ID} path \linux.bin<br />
bcdedit /displayorder {ID} /addlast<br />
bcdedit /timeout 30<br />
<br />
Reboot and enjoy. In my case I'm using the Windows boot loader so that I can map my Dell Precision M4500's second power button to boot Linux instead of Windows.<br />
<br />
===== Windows 2000/XP boot loader =====<br />
<br />
For information on this method see http://www.geocities.com/epark/linux/grub-w2k-HOWTO.html. I do not believe there are any distinct advantages of this method over the Linux boot loader; you will still need a {{ic|/boot}} partition, and this one is arguably more difficult to set up.<br />
<br />
=== UEFI systems ===<br />
<br />
Both [[systemd-boot]] and [[rEFInd]] autodetect '''Windows Boot Manager''' {{ic|\EFI\Microsoft\Boot\bootmgfw.efi}} and show it in their boot menu, so there is no manual config required.<br />
<br />
For [[GRUB]] follow [[GRUB#Windows installed in UEFI-GPT Mode menu entry]].<br />
<br />
Syslinux (as of version 6.02 and 6.03-pre9) and ELILO do not support chainloading other EFI applications, so they cannot be used to chainload {{ic|\EFI\Microsoft\Boot\bootmgfw.efi}}.<br />
<br />
Computers that come with newer versions of Windows often have [[UEFI#Secure_Boot|secure boot]] enabled. You will need to take extra steps to either disable secure boot or to make your installation media compatible with secure boot.<br />
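One way to confirm that the firmware still exposes the Windows entry after installing your Linux boot loader is to look for '''Windows Boot Manager''' in the output of {{ic|efibootmgr -v}}. A sketch that extracts its boot number (the one-entry-per-line output format assumed here is the usual one, but treat it as an assumption):<br />

```shell
#!/bin/sh
# Extract the boot number of the "Windows Boot Manager" entry from
# `efibootmgr -v`-style output on stdin (e.g. prints "0000").
find_windows_entry() {
    sed -n 's/^Boot\([0-9A-Fa-f]\{4\}\)\* Windows Boot Manager.*/\1/p'
}
# example: efibootmgr -v | find_windows_entry
```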
<br />
=== Troubleshooting ===<br />
<br />
==== Couldn't create a new partition or locate an existing one ====<br />
<br />
The USB stick used for installing Windows 8.1 seems to need an MBR partition table (not GPT); otherwise the installer gets confused and prints something like "Couldn't create a new partition or locate an existing one", even though the partitions were created.<br />
<br />
== Time standard ==<br />
<br />
* Recommended: Set both Arch Linux and Windows to use UTC, following [[Time#UTC in Windows]]. Also, be sure to prevent Windows from synchronizing the time on-line, because the hardware clock will default back to ''localtime''.<br />
<br />
* Not recommended: Set Arch Linux to ''localtime'' and disable any time-related services, like [[NTPd]]. This will let Windows take care of hardware clock corrections and you will need to remember to boot into Windows at least two times a year (in Spring and Autumn) when [[Wikipedia:Daylight saving time|DST]] kicks in. So please do not ask on the forums why the clock is one hour behind or ahead if you usually go for days or weeks without booting into Windows.<br />
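On the Windows side, the recommended UTC setup from [[Time#UTC in Windows]] is normally done with the {{ic|RealTimeIsUniversal}} registry value. A .reg fragment along the following lines is the usual approach, but double-check it against that page for your Windows version:<br />

```ini
Windows Registry Editor Version 5.00

; Tell Windows to interpret the hardware clock as UTC
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\TimeZoneInformation]
"RealTimeIsUniversal"=dword:00000001
```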
<br />
== See also ==<br />
<br />
* [https://bbs.archlinux.org/viewtopic.php?id=140049 Booting Windows from a desktop shortcut]<br />
<br />
<br />
== Use Cases ==<br />
<br />
=== MBR-BIOS HP Laptop ===<br />
<br />
* Special notes<br />
** This method uses parted exclusively for all partitioning<br />
** It's assumed (if your BIOS permits) that you have it set to IDE or SATA/RAID, and not on AHCI mode<br />
** In this scenario, the user additionally saved 1GB free space at the beginning of drive for later use (Later conversion to boot bios, GPT partition table, grub installations, anything really)<br />
** FYI: the laptop was an HP i3-generation Intel HP4520s#aba xt988ut<br />
Using parted, create the following partitions, adjusting start points where you see fit. If you don't want the free space, adjust the first start point to 1000000B (success is not guaranteed for this scenario if you adjust the partitions).<br />
<br />
Note: pay close attention to the B, MB and GB figures; they are intended to help you align your partitions correctly. The exact figures may vary on your setup, but it is highly recommended to keep trying until the partitions are aligned properly first. <br />
<br />
{{bc|# parted<br />
mkpart ntfs <br />
start point 1000000B<br />
end 251000000B<br />
<br />
mkpart ext3 (for /)<br />
start 251000000B<br />
end 281000MB (use this MB figure to align properly; may vary for you)<br />
<br />
mkpart ext3 (for /home)<br />
start 281000MB (use this MB figure to align properly; may vary for you)<br />
end 531GB (use this GB figure to align properly; may vary for you)<br />
<br />
mkpart fat32 (for /media/shared)<br />
start 531GB (use this GB figure to align properly; may vary for you)<br />
end 750GB<br />
<br />
set 1 boot on<br />
}}<br />
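Once the partitions exist, parted itself can verify alignment with {{ic|align-check optimal N}}. The arithmetic behind the "keep trying until aligned" advice can be sketched as follows (assuming the common 1 MiB alignment target; note that round decimal byte figures like 1000000B do not automatically satisfy it):<br />

```shell
#!/bin/sh
# 1 MiB = 1048576 bytes; most partitioning tools consider a partition
# aligned when its start offset is a multiple of this. Round decimal
# figures (1000000B, 251000000B, ...) generally are not.
is_mib_aligned() {
    [ $(( $1 % 1048576 )) -eq 0 ] && echo aligned || echo unaligned
}

is_mib_aligned 1048576    # 1 MiB exactly -> aligned
is_mib_aligned 1000000    # 1 decimal MB -> unaligned
```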
<br />
Quit parted.<br />
<br />
* Reboot with the Windows CD<br />
* Carefully selecting ONLY the first NTFS-type partition you created above, install Windows<br />
* Reboot into the Arch DVD<br />
* Follow the standard Arch installation instructions, continuing up to the partition creation step, which you will now SKIP over. <br />
* Start off by formatting with the filesystem of your choice<br />
# mkfs.ext3 /dev/sda2<br />
# mkfs.ext3 /dev/sda3<br />
* Use the Arch install wiki to mount the drives, adjust mirrors and connect the network (the easiest way for now is to connect a wired LAN cable on a DHCP-enabled network, then run {{ic|dhcpcd}})<br />
* Follow the Arch install wiki for the following, as per a normal install:<br />
** install the base package group, run genfstab and arch-chroot, set your hostname, set the local time and run locale-gen, generate the initramfs (mkinitcpio), set the root password, then continue<br />
* Install GRUB<br />
# pacman -S grub<br />
* Back up the boot record to a file named "mbr-backup" on your root<br />
# dd if=/dev/sda of=/mbr-backup bs=512 count=1<br />
* Install GRUB onto your disk sdX, replacing sdX with the disk you are working with (be careful).<br />
# grub-install --recheck /dev/sda<br />
* Run os-prober to find your Windows install<br />
# pacman -S os-prober<br />
# os-prober<br />
* You can also edit your default boot order here in {{ic|/etc/default/grub}} by changing {{ic|GRUB_DEFAULT}} from 0 to 1, 2, etc.<br />
* Now generate a new GRUB boot config to finish things up:<br />
# grub-mkconfig -o /boot/grub/grub.cfg<br />
* Success! Now reboot.<br />
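Before rebooting you can confirm that os-prober's Windows entry actually landed in the generated config by listing the menu entries in {{ic|/boot/grub/grub.cfg}}. A sketch (assumes the usual {{ic|menuentry 'Title' ...}} format that grub-mkconfig emits):<br />

```shell
#!/bin/sh
# Print the titles of all menu entries in a grub.cfg. After running
# os-prober and grub-mkconfig, a Windows entry should show up here.
list_menuentries() {
    sed -n "s/^menuentry ['\"]\([^'\"]*\)['\"].*/\1/p" "$1"
}
# example: list_menuentries /boot/grub/grub.cfg
```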
<br />
Option 2<br />
* Instead of overwriting the Windows boot record with GRUB as we just did above, you can alternatively have the Windows boot loader point to GRUB, but ONLY if you install GRUB to the partition, and not the drive boot record, i.e. /dev/sdX1 instead of /dev/sdX<br />
# grub-install --target=i386-pc --grub-setup=/bin/true --recheck --debug /dev/sda<br />
* Continue with bcdedit in Windows to add a second boot entry pointing to the GRUB partition. Good luck with that.</div>Wolfdogghttps://wiki.archlinux.org/index.php?title=Dual_boot_with_Windows&diff=398161Dual boot with Windows2015-09-03T23:39:39Z<p>Wolfdogg: adding instruction set for MBR-BIOS hp laptop</p>
<hr />
<div>[[Category:Boot process]]<br />
[[Category:Getting and installing Arch]]<br />
[[es:Windows and Arch dual boot]]<br />
[[ja:Windows と Arch のデュアルブート]]<br />
[[ru:Windows and Arch dual boot]]<br />
[[sk:Windows and Arch dual boot]]<br />
[[zh-cn:Windows and Arch dual boot]]<br />
This is a simple article detailing different methods of Arch/Windows coexistence.<br />
<br />
== Important information ==<br />
<br />
=== Windows UEFI vs BIOS limitations ===<br />
<br />
Microsoft imposes limitations on which firmware boot mode and partitioning style can be supported based on the version of Windows used:<br />
<br />
* '''Windows XP''' both '''x86 32-bit''' and '''x86_64''' (also called x64) (RTM and all Service Packs) versions do not support booting in UEFI mode (IA32 or x86_64) from any disk (MBR or GPT) OR in BIOS mode from GPT disk. They support only BIOS boot and only from MBR/msdos disk.<br />
* '''Windows Vista''' or '''7''' '''x86 32-bit''' (RTM and all Service Packs) versions support booting in BIOS mode from MBR/msdos disks only, not from GPT disks. They do not support x86_64 UEFI or IA32 (x86 32-bit) UEFI boot. They support only BIOS boot and only from MBR/msdos disk.<br />
* '''Windows Vista RTM x86_64''' (only RTM) version supports booting in BIOS mode from MBR/msdos disks only, not from GPT disks. It does not support x86_64 UEFI or IA32 (x86 32-bit) UEFI boot. It supports only BIOS boot and only from MBR/msdos disk.<br />
* '''Windows Vista''' (SP1 and above, not RTM) and '''Windows 7''' '''x86_64''' versions support booting in x86_64 UEFI mode from GPT disk only, OR in BIOS mode from MBR/msdos disk only. They do not support IA32 (x86 32-bit) UEFI boot from GPT/MBR disk, x86_64 UEFI boot from MBR/msdos disk, or BIOS boot from GPT disk.<br />
* '''Windows 8/8.1 x86 32-bit''' support booting in IA32 UEFI mode from GPT disk only, OR in BIOS mode from MBR/msdos disk only. They do not support x86_64 UEFI boot from GPT/MBR disk, x86_64 UEFI boot from MBR/msdos disk, or BIOS boot from GPT disk. On the market, the only systems known to ship with IA32 (U)EFI are some old Intel Macs (pre-2010 models?) and Intel Atom System-on-Chip (Clover Trail and Bay Trail) Windows tablets, which boot ONLY in IA32 UEFI mode and ONLY from GPT disk.<br />
* '''Windows 8/8.1''' '''x86_64''' versions support booting in x86_64 UEFI mode from GPT disk only, OR in BIOS mode from MBR/msdos disk only. They do not support IA32 UEFI boot, x86_64 UEFI boot from MBR/msdos disk, or BIOS boot from GPT disk.<br />
<br />
In case of pre-installed Systems:<br />
<br />
* All systems pre-installed with Windows XP, Vista or 7 32-bit, irrespective of Service Pack level, bitness, edition (SKU) or presence of UEFI support in firmware, boot in BIOS-MBR mode by default.<br />
* MOST of the systems pre-installed with Windows 7 x86_64, irrespective of Service Pack level, bitness or edition (SKU), boot in BIOS-MBR mode by default. Very few recent systems pre-installed with Windows 7 are known to boot in x86_64 UEFI-GPT mode by default.<br />
* ALL systems pre-installed with Windows 8/8.1 boot in UEFI-GPT mode. The firmware bitness matches the bitness of Windows, i.e. x86_64 Windows 8/8.1 boot in x86_64 UEFI mode and 32-bit Windows 8/8.1 boot in IA32 UEFI mode.<br />
<br />
The best way to detect the boot mode of Windows is to do the following (info from [http://www.eightforums.com/tutorials/29504-bios-mode-see-if-windows-boot-uefi-legacy-mode.html here]):<br />
<br />
* Boot into Windows<br />
* Press Win key and 'R' to start the Run dialog<br />
* In the Run dialog type "msinfo32" and press Enter<br />
* In the '''System Information''' window, select '''System Summary''' on the left and check the value of the '''BIOS mode''' item on the right<br />
* If the value is '''UEFI''', Windows boots in UEFI-GPT mode. If the value is '''Legacy''', Windows boots in BIOS-MBR mode.<br />
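The Linux-side equivalent of the msinfo32 check is to look for the kernel's efivars directory, which exists only when the system was booted through UEFI. A sketch (the path argument is there only to make the function testable; the standard location is {{ic|/sys/firmware/efi}}):<br />

```shell
#!/bin/sh
# The kernel exposes /sys/firmware/efi only when Linux was booted in
# UEFI mode, so its presence distinguishes UEFI from Legacy BIOS boots.
boot_mode() {
    efidir="${1:-/sys/firmware/efi}"
    if [ -d "$efidir" ]; then
        echo UEFI
    else
        echo "Legacy BIOS"
    fi
}

boot_mode    # reports the mode of the currently running system
```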
<br />
In general, Windows forces the type of partitioning depending on the firmware mode used, i.e. if Windows is booted in UEFI mode, it can be installed only to a GPT disk. If Windows is booted in Legacy BIOS mode, it can be installed only to an MBR (also called '''msdos''' style partitioning) disk. This is a limitation enforced by the Windows installer, and as of April 2014 there is no officially (Microsoft) supported way of installing Windows in a UEFI-MBR or BIOS-GPT configuration. Thus Windows only supports either the UEFI-GPT or the BIOS-MBR configuration.<br />
<br />
Such a limitation is not enforced by the Linux kernel, but can depend on which bootloader is used and/or how the bootloader is configured. The Windows limitation should be considered if the user wishes to boot Windows and Linux from the same disk, since the installation procedure of the bootloader depends on the firmware type and disk partitioning configuration. In the case where Windows and Linux dual boot from the same disk, it is advisable to follow the method used by Windows, i.e. either go for UEFI-GPT boot or BIOS-MBR boot. See http://support.microsoft.com/kb/2581408 for more info.<br />
<br />
=== Install media limitations ===<br />
<br />
Intel Atom System-on-Chip Tablets (Clover trail and Bay Trail) provide only IA32 UEFI firmware WITHOUT Legacy BIOS (CSM) support (unlike most of the x86_64 UEFI systems), due to Microsoft Connected Standby Guidelines for OEMs. Due to lack of Legacy BIOS support in these systems, and the lack of 32-bit UEFI boot in Arch Official Install ISO or the Archboot iso (as of April 2014), these install media cannot boot in Atom SoC tablets pre-installed with Windows 8/8.1 32-bit.<br />
<br />
=== Bootloader UEFI vs BIOS limitations ===<br />
<br />
Most of the Linux bootloaders installed for one firmware type cannot launch or chainload bootloaders of the other firmware type. That is, if Arch is installed in UEFI-GPT or UEFI-MBR mode in one disk and Windows is installed in BIOS-MBR mode in another disk, the UEFI bootloader used by Arch cannot chainload the BIOS-installed Windows in the other disk. Similarly, if Arch is installed in BIOS-MBR or BIOS-GPT mode in one disk and Windows is installed in UEFI-GPT in another disk, the BIOS bootloader used by Arch cannot chainload the UEFI-installed Windows in the other disk. <br />
<br />
The only exceptions to this are grub(2) in Apple Macs in which EFI installed grub(2) can boot BIOS installed OS via '''appleloader''' command (does not work in non-Apple systems), and rEFInd which technically supports booting legacy BIOS OS from UEFI systems, but [http://rodsbooks.com/refind/using.html#legacy does not always work in non-Apple UEFI systems] as per its author Rod Smith. <br />
<br />
However if Arch is installed in BIOS-GPT in one disk and Windows is installed in BIOS-MBR mode in another disk, then the BIOS bootloader used by Arch CAN boot the Windows in the other disk, if the bootloader itself has the ability to chainload from another disk. <br />
<br />
{{Note|If Arch and Windows are dual-booting from same disk, then Arch SHOULD follow the same firmware boot mode and partitioning combination used by the installed Windows in the disk.}}<br />
<br />
=== UEFI Secure Boot ===<br />
<br />
All pre-installed Windows 8/8.1 systems by default boot in UEFI-GPT mode and have UEFI Secure Boot enabled by default (which can be manually disabled by the user) and Legacy BIOS support (CSM) disabled by default (which can be manually enabled by the user, if the firmware supports it) in the firmware. This is mandated by Microsoft for all OEM pre-installed systems.<br />
<br />
Arch Linux install media currently supports Secure Boot, but it requires some manual steps by the user to [[UEFI#Secure_Boot|set up the HashTool while booting]]. Therefore, it is advisable to disable UEFI Secure Boot in the firmware setup before attempting to boot Arch Linux. Windows 8/8.1 SHOULD continue to boot fine even if Secure Boot is disabled. <br />
<br />
The only issue with disabling UEFI Secure Boot support is that it requires physical access to the system to disable the Secure Boot option in the firmware setup, as Microsoft has explicitly forbidden the presence of any method to remotely or programmatically (from within the OS) disable Secure Boot in all Windows 8/8.1 pre-installed systems.<br />
<br />
=== Fast Start-Up ===<br />
<br />
Fast Start-Up is a feature in Windows 8 that, to speed up boot times, hibernates the computer rather than actually shutting it down. Your system can lose data if Windows hibernates and you dual boot into another OS and make changes to files. Even if you do not intend to share filesystems, the EFI System Partition is likely to be damaged on an EFI system. Therefore, you should disable Fast Start-Up, as described [http://www.eightforums.com/tutorials/6320-fast-startup-turn-off-windows-8-a.html here], before you install Linux on any computer that uses Windows 8.<br />
<br />
{{Pkg|ntfs-3g}} added a [http://sourceforge.net/p/ntfs-3g/ntfs-3g/ci/559270a8f67c77a7ce51246c23d2b2837bcff0c9/ safe-guard] to prevent read-write mounting of hibernated disks, but the NTFS driver within the Linux kernel has no such safeguard.<br />
<br />
=== Windows filenames limitations ===<br />
<br />
Windows is limited to filepaths being shorter than [http://blogs.msdn.com/b/bclteam/archive/2007/02/13/long-paths-in-net-part-1-of-3-kim-hamilton.aspx 260 characters].<br />
<br />
Windows also puts [http://msdn.microsoft.com/en-us/library/aa365247(VS.85).aspx#naming_conventions certain characters off limits] in filenames for reasons that run all the way back to DOS:<br />
<br />
* < (less than)<br />
* > (greater than)<br />
* : (colon)<br />
* " (double quote)<br />
* / (forward slash)<br />
* \ (backslash)<br />
* | (vertical bar or pipe)<br />
* ? (question mark)<br />
* * (asterisk)<br />
<br />
These are limitations of Windows and not of NTFS: any other OS using the NTFS partition will be fine. However, Windows will fail to handle such files, and running {{ic|chkdsk}} will most likely cause them to be deleted, which can lead to data loss.<br />
<br />
'''NTFS-3G''' applies Windows restrictions to new file names through the [http://www.tuxera.com/community/ntfs-3g-manual/#4 windows_filenames] option (see [[fstab]]).<br />
<br />
== Installation ==<br />
<br />
The recommended way to setup a Linux/Windows dual booting system is to first install Windows, only using part of the disk for its partitions. When you have finished the Windows setup, boot into the Linux install environment where you can create additional partitions for Linux while leaving the existing Windows partitions untouched. The Windows installation will create the EFI System Partition which can be used by your Linux bootloader.<br />
<br />
=== BIOS systems ===<br />
<br />
==== Using a Linux boot loader ====<br />
<br />
You may use [[GRUB#Dual-booting|GRUB]] or [[Syslinux#Chainloading|Syslinux]].<br />
<br />
==== Using Windows boot loader ====<br />
<br />
With this setup the Windows bootloader loads GRUB which then boots Arch. <br />
<br />
===== Windows Vista/7/8/8.1 boot loader =====<br />
<br />
The following section contains excerpts from http://www.iceflatline.com/2009/09/how-to-dual-boot-windows-7-and-linux-using-bcdedit/.<br />
<br />
{{Accuracy|Using an ext3-formatted /boot partition, the Windows bootloader works just fine}}<br />
<br />
In order to have the Windows boot loader see the Linux partition, one of the Linux partitions created needs to be FAT32 (in this case, {{ic|/dev/sda3}}). The remainder of the setup is similar to a typical installation. Some documents state that the partition being loaded by the Windows boot loader must be a primary partition but I have used this without problem on an extended partition.<br />
<br />
* When installing the GRUB boot loader, install it on your {{ic|/boot}} partition rather than the MBR. {{Note|For instance, my {{ic|/boot}} partition is {{ic|/dev/sda5}}. So I installed GRUB at {{ic|/dev/sda5}} instead of {{ic|/dev/sda}}. For help on doing this, see [[GRUB#Install to partition or partitionless disk]]}}<br />
<br />
* Under Linux make a copy of the boot info by typing the following at the command shell:<br />
<br />
my_windows_part=/dev/sda3<br />
my_boot_part=/dev/sda5<br />
mkdir /media/win<br />
mount $my_windows_part /media/win<br />
dd if=$my_boot_part of=/media/win/linux.bin bs=512 count=1<br />
<br />
* Boot into Windows and you should be able to see the FAT32 partition. Copy the linux.bin file to {{ic|C:\}}. Now run '''cmd''' with administrator privileges (navigate to ''Start > All Programs > Accessories'', right-click on ''Command Prompt'' and select ''Run as administrator''):<br />
<br />
bcdedit /create /d "Linux" /application BOOTSECTOR<br />
<br />
* BCDEdit will return an alphanumeric identifier for this entry that I will refer to as {ID} in the remaining steps. You will need to replace {ID} by the actual returned identifier. An example of {ID} is {d7294d4e-9837-11de-99ac-f3f3a79e3e93}. <br />
<br />
bcdedit /set {ID} device partition=c:<br />
bcdedit /set {ID} path \linux.bin<br />
bcdedit /displayorder {ID} /addlast<br />
bcdedit /timeout 30<br />
<br />
Reboot and enjoy. In my case I'm using the Windows boot loader so that I can map my Dell Precision M4500's second power button to boot Linux instead of Windows.<br />
<br />
===== Windows 2000/XP boot loader =====<br />
<br />
For information on this method see http://www.geocities.com/epark/linux/grub-w2k-HOWTO.html. I do not believe there are any distinct advantages of this method over the Linux boot loader; you will still need a {{ic|/boot}} partition, and this one is arguably more difficult to set up.<br />
<br />
=== UEFI systems ===<br />
<br />
Both [[systemd-boot]] and [[rEFInd]] autodetect '''Windows Boot Manager''' {{ic|\EFI\Microsoft\Boot\bootmgfw.efi}} and show it in their boot menu, so there is no manual config required.<br />
<br />
For [[GRUB]] follow [[GRUB#Windows installed in UEFI-GPT Mode menu entry]].<br />
<br />
Syslinux (as of version 6.02 and 6.03-pre9) and ELILO do not support chainloading other EFI applications, so they cannot be used to chainload {{ic|\EFI\Microsoft\Boot\bootmgfw.efi}}.<br />
<br />
Computers that come with newer versions of Windows often have [[UEFI#Secure_Boot|secure boot]] enabled. You will need to take extra steps to either disable secure boot or to make your installation media compatible with secure boot.<br />
<br />
=== Troubleshooting ===<br />
<br />
==== Couldn't create a new partition or locate an existing one ====<br />
<br />
The USB stick used for installing Windows 8.1 seems to need an MBR partition table (not GPT); otherwise the installer gets confused and prints something like "Couldn't create a new partition or locate an existing one", even though the partitions were created.<br />
<br />
== Time standard ==<br />
<br />
* Recommended: Set both Arch Linux and Windows to use UTC, following [[Time#UTC in Windows]]. Also, be sure to prevent Windows from synchronizing the time on-line, because the hardware clock will default back to ''localtime''.<br />
<br />
* Not recommended: Set Arch Linux to ''localtime'' and disable any time-related services, like [[NTPd]]. This will let Windows take care of hardware clock corrections and you will need to remember to boot into Windows at least two times a year (in Spring and Autumn) when [[Wikipedia:Daylight saving time|DST]] kicks in. So please do not ask on the forums why the clock is one hour behind or ahead if you usually go for days or weeks without booting into Windows.<br />
<br />
== See also ==<br />
<br />
* [https://bbs.archlinux.org/viewtopic.php?id=140049 Booting Windows from a desktop shortcut]<br />
<br />
<br />
== Use Cases ==<br />
<br />
=== MBR-BIOS HP Laptop ===<br />
<br />
* Special notes<br />
** This method uses parted exclusively for all partitioning<br />
** It's assumed (if your BIOS permits) that you have it set to IDE or SATA/RAID, and not on AHCI mode<br />
** In this scenario, the user additionally saved 1GB free space at the beginning of the drive for later use (later conversion to boot BIOS, GPT partition table, GRUB installations, anything really)<br />
<br />
Using parted, create the following partitions, adjusting start points where you see fit. If you don't want the free space, adjust the first start point to 1000000B (success is not guaranteed for this scenario if you adjust the partitions).<br />
<br />
{{bc|# parted<br />
mkpart ntfs <br />
start point 1000000B<br />
end 251000000B<br />
<br />
mkpart ext3 (for /)<br />
start 251000000B<br />
end 281000MB (so aligned, on my setup)<br />
<br />
mkpart ext3 (for /home)<br />
start 281000MB<br />
end 531GB (to align)<br />
<br />
mkpart fat32 (for /media/shared)<br />
start 531GB<br />
end 750GB (aligns)<br />
<br />
set 1 boot on<br />
}}<br />
<br />
Quit parted.<br />
<br />
* Reboot with the Windows CD<br />
* Carefully selecting ONLY the first NTFS-type partition you created above, install Windows<br />
* Reboot into the Arch DVD<br />
* Follow the standard Arch installation instructions, continuing up to the partition creation step, which you will now SKIP over. <br />
* Start off with formatting. <br />
# mkfs.ext3 /dev/sda2<br />
# mkfs.ext3 /dev/sda3<br />
* Mount the drives<br />
* Adjust mirrors<br />
* Connect the network<br />
** easiest way for now<br />
*** connect a wired LAN cable on a DHCP-enabled network <br />
*** # dhcpcd<br />
* Install the base package group<br />
# genfstab -U -p /mnt >> /mnt/etc/fstab<br />
# arch-chroot /mnt<br />
# echo computer_name > /etc/hostname<br />
# ln -sf /usr/share/zoneinfo/zone/subzone /etc/localtime<br />
# locale-gen<br />
# mkinitcpio -p linux<br />
# passwd<br />
# pacman -S grub<br />
<br />
* Back up the boot record<br />
# dd if=/dev/sda of=/mbr-backup bs=512 count=1<br />
<br />
# grub-install --recheck /dev/sda<br />
# pacman -S os-prober<br />
# os-prober<br />
# grub-mkconfig -o /boot/grub/grub.cfg<br />
* Successful reboot and GRUB installation!<br />
<br />
Option 2<br />
# grub-install --target=i386-pc --grub-setup=/bin/true --recheck --debug /dev/sda<br />
* Continue with bcdedit, using the article linked at the top</div>Wolfdogghttps://wiki.archlinux.org/index.php?title=Dual_boot_with_Windows&diff=398160Dual boot with Windows2015-09-03T23:34:52Z<p>Wolfdogg: </p>
<hr />
<div>[[Category:Boot process]]<br />
[[Category:Getting and installing Arch]]<br />
[[es:Windows and Arch dual boot]]<br />
[[ja:Windows と Arch のデュアルブート]]<br />
[[ru:Windows and Arch dual boot]]<br />
[[sk:Windows and Arch dual boot]]<br />
[[zh-cn:Windows and Arch dual boot]]<br />
This is a simple article detailing different methods of Arch/Windows coexistence.<br />
<br />
== Important information ==<br />
<br />
=== Windows UEFI vs BIOS limitations ===<br />
<br />
Microsoft imposes limitations on which firmware boot mode and partitioning style can be supported based on the version of Windows used:<br />
<br />
* '''Windows XP''' both '''x86 32-bit''' and '''x86_64''' (also called x64) (RTM and all Service Packs) versions do not support booting in UEFI mode (IA32 or x86_64) from any disk (MBR or GPT) OR in BIOS mode from GPT disk. They support only BIOS boot and only from MBR/msdos disk.<br />
* '''Windows Vista''' or '''7''' '''x86 32-bit''' (RTM and all Service Packs) versions support booting in BIOS mode from MBR/msdos disks only, not from GPT disks. They do not support x86_64 UEFI or IA32 (x86 32-bit) UEFI boot. They support only BIOS boot and only from MBR/msdos disk.<br />
* '''Windows Vista RTM x86_64''' (only RTM) version supports booting in BIOS mode from MBR/msdos disks only, not from GPT disks. It does not support x86_64 UEFI or IA32 (x86 32-bit) UEFI boot. It supports only BIOS boot and only from MBR/msdos disk.<br />
* '''Windows Vista''' (SP1 and above, not RTM) and '''Windows 7''' '''x86_64''' versions support booting in x86_64 UEFI mode from GPT disk only, OR in BIOS mode from MBR/msdos disk only. They do not support IA32 (x86 32-bit) UEFI boot from GPT/MBR disk, x86_64 UEFI boot from MBR/msdos disk, or BIOS boot from GPT disk.<br />
* '''Windows 8/8.1 x86 32-bit''' support booting in IA32 UEFI mode from GPT disk only, OR in BIOS mode from MBR/msdos disk only. They do not support x86_64 UEFI boot from GPT/MBR disk, x86_64 UEFI boot from MBR/msdos disk, or BIOS boot from GPT disk. On the market, the only systems known to ship with IA32 (U)EFI are some old Intel Macs (pre-2010 models?) and Intel Atom System-on-Chip (Clover Trail and Bay Trail) Windows tablets, which boot ONLY in IA32 UEFI mode and ONLY from GPT disk.<br />
* '''Windows 8/8.1''' '''x86_64''' versions support booting in x86_64 UEFI mode from GPT disk only, OR in BIOS mode from MBR/msdos disk only. They do not support IA32 UEFI boot, x86_64 UEFI boot from MBR/msdos disk, or BIOS boot from GPT disk.<br />
<br />
In case of pre-installed Systems:<br />
<br />
* All systems pre-installed with Windows XP, Vista or 7 32-bit, irrespective of Service Pack level, bitness, edition (SKU) or presence of UEFI support in firmware, boot in BIOS-MBR mode by default.<br />
* MOST of the systems pre-installed with Windows 7 x86_64, irrespective of Service Pack level, bitness or edition (SKU), boot in BIOS-MBR mode by default. Very few recent systems pre-installed with Windows 7 are known to boot in x86_64 UEFI-GPT mode by default.<br />
* ALL systems pre-installed with Windows 8/8.1 boot in UEFI-GPT mode. The firmware bitness matches the bitness of Windows, i.e. x86_64 Windows 8/8.1 boots in x86_64 UEFI mode and 32-bit Windows 8/8.1 boots in IA32 UEFI mode.<br />
<br />
The best way to detect the boot mode of Windows is to do the following (info from [http://www.eightforums.com/tutorials/29504-bios-mode-see-if-windows-boot-uefi-legacy-mode.html here]):<br />
<br />
* Boot into Windows<br />
* Press Win key and 'R' to start the Run dialog<br />
* In the Run dialog type "msinfo32" and press Enter<br />
* In the '''System Information''' window, select '''System Summary''' on the left and check the value of the '''BIOS mode''' item on the right<br />
* If the value is '''UEFI''', Windows boots in UEFI-GPT mode. If the value is '''Legacy''', Windows boots in BIOS-MBR mode.<br />
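The same question can be answered from the Linux side for the current boot. A minimal sketch (note that it reports how Linux, not Windows, was booted):<br />

```shell
# The kernel exposes /sys/firmware/efi only when it was started via UEFI,
# so the directory's presence tells us the firmware mode of this boot.
if [ -d /sys/firmware/efi ]; then
    boot_mode=UEFI
else
    boot_mode=BIOS
fi
echo "Linux booted in $boot_mode mode"
```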
<br />
In general, Windows forces the type of partitioning depending on the firmware mode used, i.e. if Windows is booted in UEFI mode, it can be installed only to a GPT disk; if Windows is booted in Legacy BIOS mode, it can be installed only to an MBR (also called '''msdos''' style partitioning) disk. This is a limitation enforced by the Windows installer, and as of April 2014 there is no officially (Microsoft) supported way of installing Windows in a UEFI-MBR or BIOS-GPT configuration. Thus Windows supports only the UEFI-GPT and BIOS-MBR configurations.<br />
<br />
Such a limitation is not enforced by the Linux kernel, but it can depend on which bootloader is used and/or how the bootloader is configured. The Windows limitation should be considered if the user wishes to boot Windows and Linux from the same disk, since the installation procedure of the bootloader depends on the firmware type and disk partitioning configuration. In cases where Windows and Linux dual boot from the same disk, it is advisable to follow the method used by Windows, i.e. either go for UEFI-GPT boot or BIOS-MBR boot. See http://support.microsoft.com/kb/2581408 for more info.<br />
<br />
=== Install media limitations ===<br />
<br />
Intel Atom System-on-Chip tablets (Clover Trail and Bay Trail) provide only IA32 UEFI firmware WITHOUT Legacy BIOS (CSM) support (unlike most x86_64 UEFI systems), due to Microsoft's Connected Standby Guidelines for OEMs. Due to the lack of Legacy BIOS support in these systems, and the lack of 32-bit UEFI boot in the official Arch install ISO or the Archboot ISO (as of April 2014), these install media cannot boot on Atom SoC tablets pre-installed with Windows 8/8.1 32-bit.<br />
<br />
=== Bootloader UEFI vs BIOS limitations ===<br />
<br />
Most Linux bootloaders installed for one firmware type cannot launch or chainload bootloaders of the other firmware type. That is, if Arch is installed in UEFI-GPT or UEFI-MBR mode on one disk and Windows is installed in BIOS-MBR mode on another disk, the UEFI bootloader used by Arch cannot chainload the BIOS-installed Windows on the other disk. Similarly, if Arch is installed in BIOS-MBR or BIOS-GPT mode on one disk and Windows is installed in UEFI-GPT mode on another disk, the BIOS bootloader used by Arch cannot chainload the UEFI-installed Windows on the other disk.<br />
<br />
The only exceptions to this are grub(2) in Apple Macs in which EFI installed grub(2) can boot BIOS installed OS via '''appleloader''' command (does not work in non-Apple systems), and rEFInd which technically supports booting legacy BIOS OS from UEFI systems, but [http://rodsbooks.com/refind/using.html#legacy does not always work in non-Apple UEFI systems] as per its author Rod Smith. <br />
<br />
However if Arch is installed in BIOS-GPT in one disk and Windows is installed in BIOS-MBR mode in another disk, then the BIOS bootloader used by Arch CAN boot the Windows in the other disk, if the bootloader itself has the ability to chainload from another disk. <br />
<br />
{{Note|If Arch and Windows are dual-booting from same disk, then Arch SHOULD follow the same firmware boot mode and partitioning combination used by the installed Windows in the disk.}}<br />
<br />
=== UEFI Secure Boot ===<br />
<br />
All pre-installed Windows 8/8.1 systems by default boot in UEFI-GPT mode and have UEFI Secure Boot enabled by default (which can be manually disabled by the user) and Legacy BIOS support (CSM) disabled by default (which can be manually enabled by the user, if the firmware supports it) in the firmware. This is mandated by Microsoft for all OEM pre-installed systems.<br />
<br />
Arch Linux install media currently supports Secure Boot, but it requires some manual steps by the user to [[UEFI#Secure_Boot|setup the HashTool while booting]]. It is therefore advisable to disable UEFI Secure Boot in the firmware setup before attempting to boot Arch Linux. Windows 8/8.1 SHOULD continue to boot fine even if Secure Boot is disabled.<br />
<br />
The only issue with regards to disabling UEFI Secure Boot support is that it requires physical access to the system to disable the secure boot option in the firmware setup, as Microsoft has explicitly forbidden the presence of any method to remotely or programmatically (from within the OS) disable secure boot in all Windows 8/8.1 pre-installed systems.<br />
<br />
=== Fast Start-Up ===<br />
<br />
Fast Start-Up is a feature in Windows 8 that hibernates the computer rather than actually shutting it down to speed up boot times. Your system can lose data if Windows hibernates and you dual boot into another OS and make changes to files. Even if you do not intend to share filesystems, the EFI System Partition is likely to be damaged on an EFI system. Therefore, you should disable Fast Startup, as described [http://www.eightforums.com/tutorials/6320-fast-startup-turn-off-windows-8-a.html here], before you install Linux on any computer that uses Windows 8.<br />
<br />
{{Pkg|ntfs-3g}} added a [http://sourceforge.net/p/ntfs-3g/ntfs-3g/ci/559270a8f67c77a7ce51246c23d2b2837bcff0c9/ safe-guard] to prevent read-write mounting of hibernated disks, but the NTFS driver within the Linux kernel has no such safeguard.<br />
<br />
=== Windows filenames limitations ===<br />
<br />
Windows is limited to filepaths being shorter than [http://blogs.msdn.com/b/bclteam/archive/2007/02/13/long-paths-in-net-part-1-of-3-kim-hamilton.aspx 260 characters].<br />
<br />
Windows also puts [http://msdn.microsoft.com/en-us/library/aa365247(VS.85).aspx#naming_conventions certain characters off limits] in filenames for reasons that run all the way back to DOS:<br />
<br />
* < (less than)<br />
* > (greater than)<br />
* : (colon)<br />
* " (double quote)<br />
* / (forward slash)<br />
* \ (backslash)<br />
* | (vertical bar or pipe)<br />
* ? (question mark)<br />
* * (asterisk)<br />
<br />
These are limitations of Windows and not NTFS: any other OS using the NTFS partition will be fine. Windows will fail to detect these files and running {{ic|chkdsk}} will most likely cause them to be deleted. This can lead to potential data-loss.<br />
<br />
'''NTFS-3G''' applies Windows restrictions to new file names through the [http://www.tuxera.com/community/ntfs-3g-manual/#4 windows_filenames] option (see [[fstab]]).<br />
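Before sharing a tree with Windows over NTFS, it can be scanned from Linux for names that break these rules. A minimal sketch; {{ic|/zpool/docs}} is a placeholder path (a backslash in a name, legal on Linux, would need a separate check):<br />

```shell
# Scan a directory tree for names Windows cannot handle.
share=/zpool/docs

# Names containing Windows-reserved characters:
find "$share" -name '*[<>:"|?*]*' 2>/dev/null

# Full paths at or beyond the 260-character limit:
find "$share" 2>/dev/null | awk 'length >= 260'
```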
<br />
== Installation ==<br />
<br />
The recommended way to setup a Linux/Windows dual booting system is to first install Windows, only using part of the disk for its partitions. When you have finished the Windows setup, boot into the Linux install environment where you can create additional partitions for Linux while leaving the existing Windows partitions untouched. The Windows installation will create the EFI System Partition which can be used by your Linux bootloader.<br />
<br />
=== BIOS systems ===<br />
<br />
==== Using a Linux boot loader ====<br />
<br />
You may use [[GRUB#Dual-booting|GRUB]] or [[Syslinux#Chainloading|Syslinux]].<br />
<br />
==== Using Windows boot loader ====<br />
<br />
With this setup the Windows bootloader loads GRUB which then boots Arch. <br />
<br />
===== Windows Vista/7/8/8.1 boot loader =====<br />
<br />
The following section contains excerpts from http://www.iceflatline.com/2009/09/how-to-dual-boot-windows-7-and-linux-using-bcdedit/.<br />
<br />
{{Accuracy|Using an ext3-formatted /boot partition, the Windows bootloader works just fine}}<br />
<br />
In order to have the Windows boot loader see the Linux partition, one of the Linux partitions created needs to be FAT32 (in this case, {{ic|/dev/sda3}}). The remainder of the setup is similar to a typical installation. Some documents state that the partition being loaded by the Windows boot loader must be a primary partition but I have used this without problem on an extended partition.<br />
<br />
* When installing the GRUB boot loader, install it on your {{ic|/boot}} partition rather than the MBR. {{Note|For instance, my {{ic|/boot}} partition is {{ic|/dev/sda5}}. So I installed GRUB at {{ic|/dev/sda5}} instead of {{ic|/dev/sda}}. For help on doing this, see [[GRUB#Install to partition or partitionless disk]]}}<br />
<br />
* Under Linux make a copy of the boot info by typing the following at the command shell:<br />
<br />
my_windows_part=/dev/sda3<br />
my_boot_part=/dev/sda5<br />
mkdir /media/win<br />
mount $my_windows_part /media/win<br />
dd if=$my_boot_part of=/media/win/linux.bin bs=512 count=1<br />
<br />
* Boot into Windows; you should be able to see the FAT32 partition. Copy the linux.bin file to {{ic|C:\}}. Now run '''cmd''' with administrator privileges (navigate to ''Start > All Programs > Accessories'', right-click on ''Command Prompt'' and select ''Run as administrator''):<br />
<br />
bcdedit /create /d "Linux" /application BOOTSECTOR<br />
<br />
* BCDEdit will return an alphanumeric identifier for this entry that I will refer to as {ID} in the remaining steps. You will need to replace {ID} by the actual returned identifier. An example of {ID} is {d7294d4e-9837-11de-99ac-f3f3a79e3e93}. <br />
<br />
bcdedit /set {ID} device partition=c:<br />
bcdedit /set {ID} path \linux.bin<br />
bcdedit /displayorder {ID} /addlast<br />
bcdedit /timeout 30<br />
<br />
Reboot and enjoy. In my case I'm using the Windows boot loader so that I can map my Dell Precision M4500's second power button to boot Linux instead of Windows.<br />
<br />
===== Windows 2000/XP boot loader =====<br />
<br />
For information on this method see http://www.geocities.com/epark/linux/grub-w2k-HOWTO.html. I do not believe there are any distinct advantages of this method over the Linux boot loader; you will still need a {{ic|/boot}} partition, and this one is arguably more difficult to set up.<br />
<br />
=== UEFI systems ===<br />
<br />
Both [[systemd-boot]] and [[rEFInd]] autodetect '''Windows Boot Manager''' {{ic|\EFI\Microsoft\Boot\bootmgfw.efi}} and show it in their boot menu, so there is no manual config required.<br />
<br />
For [[GRUB]] follow [[GRUB#Windows installed in UEFI-GPT Mode menu entry]].<br />
<br />
Syslinux (as of version 6.02 and 6.03-pre9) and ELILO do not support chainloading other EFI applications, so they cannot be used to chainload {{ic|\EFI\Microsoft\Boot\bootmgfw.efi}}.<br />
<br />
Computers that come with newer versions of Windows often have [[UEFI#Secure_Boot|secure boot]] enabled. You will need to take extra steps to either disable secure boot or to make your installation media compatible with secure boot.<br />
<br />
=== Troubleshooting ===<br />
<br />
==== Couldn't create a new partition or locate an existing one ====<br />
<br />
The USB stick for installing Windows 8.1 seems to need an MBR partition table (not GPT); otherwise the installation gets confused and prints something like "Couldn't create a new partition or locate an existing one", even though the partitions were created.<br />
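Re-initialising the stick with an MBR table can be sketched with sfdisk; the example below runs on a scratch image file, and for a real stick you would substitute the device node (e.g. {{ic|/dev/sdX}}, double-checked with lsblk first, since this destroys its contents):<br />

```shell
# Create a scratch image standing in for the USB stick.
truncate -s 64M /tmp/usb.img

# Write an MBR (msdos) label with one whole-disk partition of type c
# (W95 FAT32 LBA), then dump the result to verify.
printf 'label: dos\n,,c\n' | sfdisk /tmp/usb.img
sfdisk -d /tmp/usb.img
```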
<br />
== Time standard ==<br />
<br />
* Recommended: Set both Arch Linux and Windows to use UTC, following [[Time#UTC in Windows]]. Also, be sure to prevent Windows from synchronizing the time on-line, because the hardware clock will default back to ''localtime''.<br />
<br />
* Not recommended: Set Arch Linux to ''localtime'' and disable any time-related services, like [[NTPd]]. This will let Windows take care of hardware clock corrections and you will need to remember to boot into Windows at least twice a year (in Spring and Autumn) when [[Wikipedia:Daylight saving time|DST]] kicks in. So please do not ask on the forums why the clock is one hour behind or ahead if you usually go for days or weeks without booting into Windows.<br />
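The Linux half of the recommended setup can be sketched as follows (this assumes systemd; the command is guarded so the sketch degrades gracefully where it is absent):<br />

```shell
# Tell systemd the hardware clock is kept in UTC, matching the
# recommended configuration. May require root and a running dbus.
if command -v timedatectl >/dev/null 2>&1; then
    timedatectl set-local-rtc 0 2>/dev/null || true
    echo "RTC policy: UTC"
else
    echo "timedatectl not available; skipping"
fi
```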
<br />
== See also ==<br />
<br />
* [https://bbs.archlinux.org/viewtopic.php?id=140049 Booting Windows from a desktop shortcut]<br />
<br />
<br />
== Use Cases ==<br />
<br />
=== MBR-BIOS HP Laptop ===<br />
<br />
* Special notes<br />
** This method uses parted exclusively for all partitioning<br />
** It is assumed (if your BIOS permits) that the controller is set to IDE or SATA/RAID mode, not AHCI<br />
** In this scenario, the user additionally saved 1GB of free space at the beginning of the drive for later use (later conversion to BIOS boot, a GPT partition table, GRUB installations, anything really)<br />
<br />
Using parted, create the following partitions, adjusting start points where you see fit. If you do not want the free space, adjust the first start point to 1000000B (success is not guaranteed on any partition adjustments for this scenario).</div>Wolfdogg
https://wiki.archlinux.org/index.php?title=ZFS/Virtual_disks&diff=398159
ZFS/Virtual disks (2015-09-03T23:22:37Z)
<p>Wolfdogg: /* Troubleshooting */</p>
<hr />
<div>[[Category:File systems]]<br />
{{Related articles start}}<br />
{{Related|ZFS}}<br />
{{Related|Installing Arch Linux on ZFS}}<br />
{{Related|ZFS on FUSE}}<br />
{{Related articles end}}<br />
<br />
This article covers some basic tasks and usage of ZFS. It differs from the main article [[ZFS]] somewhat in that the examples herein are demonstrated on a zpool built from virtual disks. So long as users do not place any critical data on the resulting zpool, they are free to experiment without fear of actual data loss.<br />
<br />
The examples in this article are shown with a set of virtual disks known in ZFS terms as VDEVs. Users may create their VDEVs either on an existing physical disk or in tmpfs (RAMdisk), depending on the amount of free memory on the system.<br />
<br />
{{Note|Using a file as a VDEV is a great method to play with ZFS but is not a viable strategy for storing "real" data.}}<br />
<br />
== Install the ZFS Family of Packages ==<br />
Due to differences in licensing, ZFS binaries and kernel modules are easily distributed from source, but not so easily packaged as pre-compiled sets. The requisite packages are available in the AUR and in an unofficial repo. Details are provided in the [[ZFS#Installation]] article.<br />
<br />
== Creating and Destroying Zpools ==<br />
Management of ZFS is fairly simple, with only two utilities needed:<br />
* {{ic|/usr/bin/zpool}}<br />
* {{ic|/usr/bin/zfs}}<br />
<br />
=== Mirror ===<br />
For zpools with just two drives, it is recommended to use ZFS in ''mirror'' mode, which functions like RAID1, mirroring the data. While this configuration is fine, higher RAIDZ levels are recommended once more drives are available.<br />
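A mirror can be assembled with file VDEVs in the same style as the RAIDZ1 example below. A sketch: a temporary directory stands in for /scratch, the pool name "mpool" is illustrative, and the zpool call is guarded so the sketch runs even where ZFS is not installed:<br />

```shell
# Create two 2G sparse files to serve as virtual drives.
scratch=$(mktemp -d)
for i in 1 2; do truncate -s 2G "$scratch/m$i.img"; done

# Assemble them into a two-way mirror (needs root and the zfs module).
if command -v zpool >/dev/null 2>&1; then
    zpool create mpool mirror "$scratch/m1.img" "$scratch/m2.img" \
        || echo "zpool create failed (needs root and the zfs module)"
else
    echo "zpool not available; created file VDEVs in $scratch only"
fi
```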
<br />
=== RAIDZ1 ===<br />
The minimum number of drives for a RAIDZ1 is three. It is best to follow the "power of two plus parity" recommendation. This is for storage space efficiency and hitting the "sweet spot" in performance. For RAIDZ-1, use three (2+1), five (4+1), or nine (8+1) disks. This example will use the simplest set of (2+1).<br />
<br />
Create three 2G files to serve as virtual hard drives:<br />
$ for i in {1..3}; do truncate -s 2G /scratch/$i.img; done<br />
<br />
Assemble the RAIDZ1:<br />
# zpool create zpool raidz1 /scratch/1.img /scratch/2.img /scratch/3.img<br />
<br />
Notice that a 3.91G zpool has been created and mounted for us:<br />
{{hc|# zfs list|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool 139K 3.91G 38.6K /zpool<br />
</nowiki>}}<br />
<br />
The status of the device can be queried:<br />
{{hc|<br />
# zpool status zpool|<nowiki><br />
pool: zpool<br />
state: ONLINE<br />
scan: none requested<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
zpool ONLINE 0 0 0<br />
raidz1-0 ONLINE 0 0 0<br />
/scratch/1.img ONLINE 0 0 0<br />
/scratch/2.img ONLINE 0 0 0<br />
/scratch/3.img ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
</nowiki>}}<br />
<br />
To destroy a zpool:<br />
# zpool destroy zpool<br />
<br />
===RAIDZ2 and RAIDZ3===<br />
Higher-level RAIDZ arrays can be assembled in a like fashion by adjusting the for statement to create the additional image files, by specifying "raidz2" or "raidz3" in the creation step, and by appending the additional image files to the creation step.<br />
<br />
Summarizing Toponce's guidance:<br />
* RAIDZ2 should use four (2+2), six (4+2), ten (8+2), or eighteen (16+2) disks.<br />
* RAIDZ3 should use five (2+3), seven (4+3), eleven (8+3), or nineteen (16+3) disks.<br />
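Following that guidance, a RAIDZ2 of four file VDEVs (2+2) could be assembled like this. A sketch: a temporary directory stands in for /scratch, the pool name "zpool2" is illustrative, and the zpool call is guarded for systems without ZFS:<br />

```shell
# Four 2G sparse files: two data plus two parity for RAIDZ2.
scratch=$(mktemp -d)
for i in 1 2 3 4; do truncate -s 2G "$scratch/$i.img"; done

# Assemble the RAIDZ2 (needs root and the zfs module).
if command -v zpool >/dev/null 2>&1; then
    zpool create zpool2 raidz2 \
        "$scratch/1.img" "$scratch/2.img" "$scratch/3.img" "$scratch/4.img" \
        || echo "zpool create failed (needs root and the zfs module)"
else
    echo "zpool not available; created file VDEVs in $scratch only"
fi
```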
<br />
<br />
=== Linear Span ===<br />
<br />
This setup is a JBOD-style pool, normally good for three or fewer drives where space is still a concern and you are not yet ready to move to the full features of ZFS because of it. RAIDZ will be the better bet once you have enough drives to satisfy, since this setup does NOT take advantage of the full redundancy features of ZFS, but it is a safe beginning array that can suffice for years while you build up your hard drive collection. <br />
<br />
Assemble the Linear Span:<br />
# zpool create san /dev/sdd /dev/sde /dev/sdf<br />
<br />
{{hc|# zpool status san|<nowiki><br />
pool: san<br />
state: ONLINE<br />
scan: scrub repaired 0 in 4h22m with 0 errors on Fri Aug 28 23:52:55 2015<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
san ONLINE 0 0 0<br />
sde ONLINE 0 0 0<br />
sdd ONLINE 0 0 0<br />
sdf ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
</nowiki>}}<br />
<br />
== Creating and Destroying Datasets ==<br />
<br />
An example creating child datasets and using compression:<br />
* create the datasets<br />
# zfs create -p -o compression=on san/vault/falcon/snapshots<br />
# zfs create -o compression=on san/vault/falcon/version<br />
# zfs create -p -o compression=on san/vault/redtail/c/Users<br />
* now list the datasets (this was a linear span)<br />
{{hc|$ zfs list|<nowiki><br />
<br />
NAME USED AVAIL REFER MOUNTPOINT<br />
san 3.31T 2.04T 2.85M /san<br />
san/vault 3.31T 2.04T 136K /san/vault<br />
san/vault/falcon 171G 2.04T 100K /san/vault/falcon<br />
san/vault/falcon/snapshots 171G 2.04T 171G /san/vault/falcon/snapshots<br />
san/vault/falcon/version 160K 2.04T 96K /san/vault/falcon/version<br />
san/vault/gyrfalcon 564K 2.04T 132K /san/vault/gyrfalcon<br />
san/vault/gyrfalcon/snapshots 184K 2.04T 120K /san/vault/gyrfalcon/snapshots<br />
san/vault/gyrfalcon/version 184K 2.04T 120K /san/vault/gyrfalcon/version<br />
san/vault/osprey 170G 2.04T 170G /san/vault/osprey<br />
san/vault/osprey/snapshots 24.2M 2.04T 24.2M /san/vault/osprey/snapshots<br />
san/vault/osprey/version 120K 2.04T 120K /san/vault/osprey/version<br />
san/vault/redtail 2.98T 2.04T 17.1M /san/vault/redtail<br />
san/vault/redtail/c 779M 2.04T 6.37M /san/vault/redtail/c<br />
san/vault/redtail/c/AMD 4.44M 2.04T 4.24M /san/vault/redtail/c/AMD<br />
san/vault/redtail/c/Users 700M 2.04T 482M /san/vault/redtail/c/Users<br />
san/vault/redtail/d 1.59T 2.04T 124K /san/vault/redtail/d<br />
san/vault/redtail/d/UserFiles 1.59T 2.04T 1.59T /san/vault/redtail/d/UserFiles<br />
san/vault/redtail/d/archive 283M 2.04T 283M /san/vault/redtail/d/archive<br />
san/vault/redtail/e 1.34T 2.04T 124K /san/vault/redtail/e<br />
san/vault/redtail/e/PublicArchive 1.34T 2.04T 1.34T /san/vault/redtail/e/PublicArchive<br />
san/vault/redtail/e/archive 283M 2.04T 283M /san/vault/redtail/e/archive<br />
san/vault/redtail/snapshots 184K 2.04T 120K /san/vault/redtail/snapshots<br />
san/vault/redtail/version 44.3G 2.04T 43.9G /san/vault/redtail/version<br />
</nowiki>}}<br />
<br />
Note, there is a huge advantage (fast file deletion) in making a three-level dataset hierarchy. If you have large amounts of data, separating it into datasets makes cleanup easier: destroying a dataset is much faster than waiting for recursive file removal to complete.<br />
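For example, removing one machine's entire snapshot tree is a single, near-instant command instead of a long recursive delete. A guarded sketch; the dataset name is taken from the listing above:<br />

```shell
# Destroy a dataset and everything beneath it (children and snapshots).
# Much faster than "rm -r" on the same files. Guarded so the sketch is
# harmless where ZFS (or this dataset) is absent.
if command -v zfs >/dev/null 2>&1; then
    zfs destroy -r san/vault/falcon/snapshots \
        || echo "dataset not present; nothing destroyed"
else
    echo "zfs not available; skipping"
fi
```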
<br />
== Displaying and Setting Properties ==<br />
Without specifying them in the creation step, users can set properties of their zpools at any time after creation using {{ic|/usr/bin/zfs}}.<br />
<br />
=== Show Properties ===<br />
To see the current properties of a given zpool:<br />
{{hc|# zfs get all zpool|<nowiki><br />
NAME PROPERTY VALUE SOURCE<br />
zpool type filesystem -<br />
zpool creation Sun Oct 20 8:46 2013 -<br />
zpool used 139K -<br />
zpool available 3.91G -<br />
zpool referenced 38.6K -<br />
zpool compressratio 1.00x -<br />
zpool mounted yes -<br />
zpool quota none default<br />
zpool reservation none default<br />
zpool recordsize 128K default<br />
zpool mountpoint /zpool default<br />
zpool sharenfs off default<br />
zpool checksum on default<br />
zpool compression off default<br />
zpool atime on default<br />
zpool devices on default<br />
zpool exec on default<br />
zpool setuid on default<br />
zpool readonly off default<br />
zpool zoned off default<br />
zpool snapdir hidden default<br />
zpool aclinherit restricted default<br />
zpool canmount on default<br />
zpool xattr on default<br />
zpool copies 1 default<br />
zpool version 5 -<br />
zpool utf8only off -<br />
zpool normalization none -<br />
zpool casesensitivity sensitive -<br />
zpool vscan off default<br />
zpool nbmand off default<br />
zpool sharesmb off default<br />
zpool refquota none default<br />
zpool refreservation none default<br />
zpool primarycache all default<br />
zpool secondarycache all default<br />
zpool usedbysnapshots 0 -<br />
zpool usedbydataset 38.6K -<br />
zpool usedbychildren 99.9K -<br />
zpool usedbyrefreservation 0 -<br />
zpool logbias latency default<br />
zpool dedup off default<br />
zpool mlslabel none default<br />
zpool sync standard default<br />
zpool refcompressratio 1.00x -<br />
zpool written 38.6K -<br />
zpool snapdev hidden default<br />
</nowiki>}}<br />
<br />
=== Modify properties ===<br />
Disable the recording of access time in the zpool:<br />
# zfs set atime=off zpool<br />
<br />
Verify that the property has been set on the zpool:<br />
{{hc|# zfs get atime|<nowiki><br />
NAME PROPERTY VALUE SOURCE<br />
zpool atime off local<br />
</nowiki>}}<br />
<br />
{{Tip|1=This option, like many others, can be toggled off when creating the zpool as well by appending the following to the creation step: -O atime=off}}<br />
<br />
== Add Content to the Zpool and Query Compression Performance==<br />
Fill the zpool with files. For this example, first enable compression. ZFS supports several compression algorithms, including lzjb, gzip, gzip-N, zle, and lz4. Using a setting of simply 'on' will select the default algorithm (lzjb), but lz4 is a nice alternative. See the zfs man page for more.<br />
<br />
# zfs set compression=lz4 zpool<br />
<br />
In this example, the linux source tarball is copied over and since lz4 compression has been enabled on the zpool, the corresponding compression ratio can be queried as well.<br />
<br />
$ wget https://www.kernel.org/pub/linux/kernel/v3.x/linux-3.11.tar.xz<br />
$ tar xJf linux-3.11.tar.xz -C /zpool <br />
<br />
To see the compression ratio achieved:<br />
{{hc|# zfs get compressratio|<nowiki><br />
NAME PROPERTY VALUE SOURCE<br />
zpool compressratio 2.32x -<br />
</nowiki>}}<br />
<br />
== Simulate a Disk Failure and Rebuild the Zpool ==<br />
To simulate catastrophic disk failure (i.e. one of the HDDs in the zpool stops functioning), zero out one of the VDEVs.<br />
$ dd if=/dev/zero of=/scratch/2.img bs=4M count=1 2>/dev/null<br />
<br />
Since we used a blocksize (bs) of 4M with a count of 1, the once 2G image file is now a mere 4M:<br />
{{hc|$ ls -lh /scratch |<nowiki><br />
total 317M<br />
-rw-r--r-- 1 facade users 2.0G Oct 20 09:13 1.img<br />
-rw-r--r-- 1 facade users 4.0M Oct 20 09:09 2.img<br />
-rw-r--r-- 1 facade users 2.0G Oct 20 09:13 3.img<br />
</nowiki>}}<br />
<br />
The zpool remains online despite the corruption. Note that if a physical disk does fail, dmesg and related logs would be full of errors. To detect silent damage like this, users must execute a scrub operation.<br />
<br />
# zpool scrub zpool<br />
<br />
Depending on the size and speed of the underlying media as well as the amount of data in the zpool, the scrub may take hours to complete.<br />
The status of the scrub can be queried:<br />
{{hc|# zpool status zpool|<nowiki><br />
pool: zpool<br />
state: DEGRADED<br />
status: One or more devices could not be used because the label is missing or<br />
invalid. Sufficient replicas exist for the pool to continue<br />
functioning in a degraded state.<br />
action: Replace the device using 'zpool replace'.<br />
see: http://zfsonlinux.org/msg/ZFS-8000-4J<br />
scan: scrub repaired 0 in 0h0m with 0 errors on Sun Oct 20 09:13:39 2013<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
zpool DEGRADED 0 0 0<br />
raidz1-0 DEGRADED 0 0 0<br />
/scratch/1.img ONLINE 0 0 0<br />
/scratch/2.img UNAVAIL 0 0 0 corrupted data<br />
/scratch/3.img ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
</nowiki>}}<br />
<br />
Since we zeroed out one of our VDEVs, let's simulate adding a new 2G HDD by creating a new image file and adding it to the zpool:<br />
$ truncate -s 2G /scratch/new.img<br />
# zpool replace zpool /scratch/2.img /scratch/new.img<br />
<br />
Upon replacing the VDEV with a new one, zpool rebuilds the data from the data and parity info in the remaining two good VDEVs. Check the status of this process:<br />
{{hc|# zpool status zpool |<nowiki><br />
pool: zpool<br />
state: ONLINE<br />
scan: resilvered 117M in 0h0m with 0 errors on Sun Oct 20 09:21:22 2013<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
zpool ONLINE 0 0 0<br />
raidz1-0 ONLINE 0 0 0<br />
/scratch/1.img ONLINE 0 0 0<br />
/scratch/new.img ONLINE 0 0 0<br />
/scratch/3.img ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
</nowiki>}}<br />
<br />
== Snapshots and Recovering Deleted Files ==<br />
Since ZFS is a copy-on-write filesystem, saving changes to a file writes new blocks rather than overwriting the old ones in place. Snapshots take advantage of this fact and allow users access to older versions of files, provided a snapshot has been taken.<br />
<br />
{{Note|When using snapshots, many Linux programs that report on filesystem space such as '''df''' will report inaccurate results due to the unique way snapshots are used on ZFS. The output of {{ic|/usr/bin/zfs list}} will deliver an accurate report of the amount of available and free space on the zpool.}}<br />
<br />
<br />
To keep this simple, we will create a dataset within the zpool and snapshot it. Snapshots can be taken either of the entire zpool or of a dataset within the pool. They differ only in their naming conventions:<br />
<br />
{| class="wikitable" align="center"<br />
|-<br />
! Snapshot Target !! Snapshot Name<br />
|-<br />
| Entire zpool || zpool@snapshot-name<br />
|- <br />
| Dataset || zpool/dataset@snapshot-name<br />
|- <br />
|}<br />
<br />
Make a new data set and take ownership of it.<br />
# zfs create zpool/docs<br />
# chown facade:users /zpool/docs<br />
<br />
{{Note|The lack of a leading / in the create command is intentional, not a typo!}}<br />
<br />
=== Time 0 ===<br />
Add some files to the new dataset (/zpool/docs):<br />
$ wget -O /zpool/docs/Moby_Dick.txt http://www.gutenberg.org/ebooks/2701.txt.utf-8<br />
$ wget -O /zpool/docs/War_and_Peace.txt http://www.gutenberg.org/ebooks/2600.txt.utf-8<br />
$ wget -O /zpool/docs/Beowulf.txt http://www.gutenberg.org/ebooks/16328.txt.utf-8<br />
<br />
{{hc|# zfs list|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool 5.06M 3.91G 40.0K /zpool<br />
zpool/docs 4.92M 3.91G 4.92M /zpool/docs<br />
</nowiki>}}<br />
<br />
This shows that the books use 4.92M of data in /zpool/docs.<br />
<br />
=== Time +1 ===<br />
Now take a snapshot of the dataset:<br />
# zfs snapshot zpool/docs@001<br />
<br />
Again run the list command:<br />
{{hc|# zfs list|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool 5.07M 3.91G 40.0K /zpool<br />
zpool/docs 4.92M 3.91G 4.92M /zpool/docs<br />
</nowiki>}}<br />
<br />
Note that the size in the USED column did not change, showing that the snapshot takes up no space in the zpool since nothing has changed in these three files.<br />
<br />
We can list the snapshots like so, and again confirm that the snapshot takes up no space but instead '''refers to''' the original files, which take up 4.92M (their original size):<br />
{{hc|# zfs list -t snapshot|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool/docs@001 0 - 4.92M -<br />
</nowiki>}}<br />
<br />
=== Time +2 ===<br />
Now let's add some additional content and create a new snapshot:<br />
$ wget -O /zpool/docs/Les_Mis.txt http://www.gutenberg.org/ebooks/135.txt.utf-8<br />
# zfs snapshot zpool/docs@002<br />
<br />
Generate the new list to see how the space has changed:<br />
{{hc|# zfs list -t snapshot|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool/docs@001 25.3K - 4.92M -<br />
zpool/docs@002 0 - 8.17M -<br />
</nowiki>}}<br />
<br />
Here we can see that the 001 snapshot takes up 25.3K of metadata and still points to the original 4.92M of data, and the new snapshot takes up no space and refers to a total of 8.17M.<br />
<br />
=== Time +3 ===<br />
Now let's simulate an accidental overwrite of a file and subsequent data loss:<br />
$ echo "this book sucks" > /zpool/docs/War_and_Peace.txt<br />
<br />
Again, take another snapshot:<br />
# zfs snapshot zpool/docs@003<br />
<br />
Now list the snapshots and notice that the amount of data referred to decreased by about 3.1M:<br />
{{hc|# zfs list -t snapshot|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool/docs@001 25.3K - 4.92M -<br />
zpool/docs@002 25.5K - 8.17M -<br />
zpool/docs@003 0 - 5.04M -<br />
</nowiki>}}<br />
<br />
We can easily recover from this situation by looking inside one or both of our older snapshots for a good copy of the file. ZFS stores its snapshots in a hidden directory under the dataset: {{ic|/zpool/docs/.zfs/snapshot}}:<br />
{{hc|$ ls -l /zpool/docs/.zfs/snapshot|<nowiki><br />
total 0<br />
dr-xr-xr-x 1 root root 0 Oct 20 16:09 001<br />
dr-xr-xr-x 1 root root 0 Oct 20 16:09 002<br />
dr-xr-xr-x 1 root root 0 Oct 20 16:09 003<br />
</nowiki>}}<br />
<br />
We can copy a good version of the book back out from any of our snapshots to any location on or off the zpool:<br />
$ cp /zpool/docs/.zfs/snapshot/002/War_and_Peace.txt /zpool/docs<br />
{{Note|Using <TAB> for autocompletion will not work by default but can be changed by modifying the ''snapdir'' property on the pool or dataset.}}<br />
<br />
# zfs set snapdir=visible zpool/docs<br />
<br />
Now enter a snapshot dir or two:<br />
$ cd /zpool/docs/.zfs/snapshot/001<br />
$ cd /zpool/docs/.zfs/snapshot/002<br />
<br />
Run the df command:<br />
$ df -h | grep zpool<br />
zpool 4.0G 0 4.0G 0% /zpool<br />
zpool/docs 4.0G 5.0M 4.0G 1% /zpool/docs<br />
zpool/docs@001 4.0G 4.9M 4.0G 1% /zpool/docs/.zfs/snapshot/001<br />
zpool/docs@002 4.0G 8.2M 4.0G 1% /zpool/docs/.zfs/snapshot/002<br />
<br />
{{Note|Each snapshot directory under .zfs that the user has entered shows up in the df output; this is reversed if the zpool is exported and then re-imported, or if the server is rebooted.}}<br />
<br />
For example:<br />
# zpool export zpool<br />
# zpool import -d /scratch/ zpool<br />
$ df -h | grep zpool<br />
zpool 4.0G 0 4.0G 0% /zpool<br />
zpool/docs 4.0G 5.0M 4.0G 1% /zpool/docs<br />
<br />
=== Time +4 ===<br />
Now that everything is back to normal, we can create another snapshot of this state:<br />
# zfs snapshot zpool/docs@004<br />
<br />
And the list now becomes:<br />
{{hc|# zfs list -t snapshot|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool/docs@001 25.3K - 4.92M -<br />
zpool/docs@002 25.5K - 8.17M -<br />
zpool/docs@003 155K - 5.04M -<br />
zpool/docs@004 0 - 8.17M -<br />
</nowiki>}}<br />
<br />
=== Listing Snapshots ===<br />
<br />
Note, this simple but important command is frequently missing from other articles on the subject, so it is worth mentioning.<br />
<br />
To list all snapshots on the system, run the following command:<br />
{{bc|$ zfs list -t snapshot<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
san@initial_working 896K - 2.85M -<br />
san/vault@initial_working 80K - 152K -<br />
san/vault/falcon@initial_working 64K - 100K -<br />
san/vault/falcon/snapshots@initial_working 80K - 96K -<br />
san/vault/falcon/version@initial_working 64K - 96K -<br />
san/vault/gyrfalcon@initial_working 64K - 132K -<br />
san/vault/gyrfalcon/snapshots@initial_working 64K - 120K -<br />
san/vault/gyrfalcon/version@initial_working 64K - 120K -<br />
san/vault/osprey@initial_working 96K - 144K -<br />
san/vault/osprey/snapshots@initial_working 0 - 24.2M -<br />
san/vault/osprey/version@initial_working 0 - 120K -<br />
san/vault/redtail@initial_working 132K - 17.2M -<br />
san/vault/redtail/c@initial_working 67.3M - 72.9M -<br />
san/vault/redtail/c/AMD@initial_working 204K - 4.24M -<br />
san/vault/redtail/c/Users@initial_working 218M - 694M -<br />
san/vault/redtail/d@initial_working 76K - 132K -<br />
san/vault/redtail/d/UserFiles@initial_working 208M - 1.53T -<br />
san/vault/redtail/d/archive@initial_working 64K - 283M -<br />
san/vault/redtail/e@initial_working 76K - 132K -<br />
san/vault/redtail/e/PublicArchive@initial_working 31.9M - 1.34T -<br />
san/vault/redtail/e/archive@initial_working 64K - 283M -<br />
san/vault/redtail/snapshots@initial_working 64K - 120K -<br />
san/vault/redtail/version@initial_working 375M - 44.3G -<br />
</nowiki>}}<br />
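With many datasets, it helps to total what the snapshots of one dataset consume. The helper below is a hypothetical sketch, not a zfs subcommand: it filters and sums the machine-readable output of {{ic|zfs list -Hp -t snapshot -o name,used}} (tab-separated name and exact byte count). Since it reads stdin, it can be tested without a live pool.<br />

```shell
# snap_used DATASET
# Sum the space consumed by the snapshots of one dataset.
# Expects `zfs list -Hp -t snapshot -o name,used` output on stdin:
# tab-separated snapshot name and exact byte count.
snap_used() {
    awk -F'\t' -v ds="$1" '
        # match only this dataset, not its children: name must start with "ds@"
        index($1, ds "@") == 1 { total += $2; n++ }
        END { printf "%d snapshots, %d bytes\n", n, total }
    '
}
```

For example, piping {{ic|zfs list -Hp -t snapshot -o name,used}} into {{ic|snap_used san/vault/falcon}} reports only that dataset's own snapshots, skipping children such as {{ic|san/vault/falcon/version}}.<br />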
<br />
=== Deleting Snapshots ===<br />
The limit on the number of snapshots is 2^64, so effectively unlimited. Users can delete a snapshot like so:<br />
# zfs destroy zpool/docs@001<br />
<br />
{{hc|# zfs list -t snapshot|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool/docs@002 3.28M - 8.17M -<br />
zpool/docs@003 155K - 5.04M -<br />
zpool/docs@004 0 - 8.17M -<br />
</nowiki>}}<br />
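Deletion is easy to script, but a mistyped {{ic|zfs destroy}} is unforgiving, so a dry-run wrapper is a sensible pattern. The sketch below is hypothetical: it reads snapshot names oldest-first (the order produced by {{ic|zfs list -t snapshot -o name -s creation}}) and only prints the destroy commands for all but the newest N, leaving the user to review them before running anything.<br />

```shell
# prune_snapshots KEEP
# Read snapshot names, oldest first, on stdin and print the
# `zfs destroy` commands that would leave only the newest KEEP.
# Nothing is destroyed: the output must be reviewed and run by hand.
prune_snapshots() {
    awk -v keep="$1" '
        { names[NR] = $1 }
        END { for (i = 1; i <= NR - keep; i++) print "zfs destroy " names[i] }
    '
}
```

After reviewing the printed commands, they can be executed as root, for example by piping them to {{ic|sh}}.<br />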
<br />
== Troubleshooting ==<br />
<br />
If your system is not configured to import the ZFS pool at boot, if you want to manually remove and re-add the pool, or if you have lost your pool completely, the export/import commands are a convenient way to recover.<br />
<br />
If your pool was named {{ic|pool}}:<br />
# zpool import pool<br />
<br />
If you have any problems accessing your pool at any time, try exporting and reimporting it.<br />
<br />
# zpool export pool<br />
# zpool import pool</div>Wolfdogg
<hr />
<div>[[Category:File systems]]<br />
{{Related articles start}}<br />
{{Related|ZFS}}<br />
{{Related|Installing Arch Linux on ZFS}}<br />
{{Related|ZFS on FUSE}}<br />
{{Related articles end}}<br />
<br />
This article covers some basic tasks and usage of ZFS. It differs from the main article [[ZFS]] somewhat in that the examples herein are demonstrated on a zpool built from virtual disks. So long as users do not place any critical data on the resulting zpool, they are free to experiment without fear of actual data loss.<br />
<br />
The examples in this article are shown with a set of virtual discs known in ZFS terms as VDEVs. Users may create their VDEVs either on an existing physical disk or in tmpfs (RAMdisk) depending on the amount of free memory on the system.<br />
<br />
{{Note|Using a file as a VDEV is a great method to play with ZFS but isn't viable strategy for storing "real" data.}}<br />
<br />
== Install the ZFS Family of Packages ==<br />
Due to differences in licencing, ZFS bins and kernel modules are easily distributed from source, but no-so-easily packaged as pre-compiled sets. The requisite packages are available in the AUR and in an unofficial repo. Details are provided on the [[ZFS#Installation]] article.<br />
<br />
== Creating and Destroying Zpools ==<br />
Management of ZFS is pretty simplistic with only two utils needed:<br />
* {{ic|/usr/bin/zpool}}<br />
* {{ic|/usr/bin/zfs}}<br />
<br />
=== Mirror ===<br />
For zpools with just two drives, it is recommended to use ZFS in ''mirror'' mode which functions like a RAID1 mirroring the data. While this configuration is fine, higher RAIDZ levels are recommended.<br />
<br />
=== RAIDZ1 ===<br />
The minimum number of drives for a RAIDZ1 is three. It's best to follow the "power of two plus parity" recommendation. This is for storage space efficiency and hitting the "sweet spot" in performance. For RAIDZ-1, use three (2+1), five (4+1), or nine (8+1) disks. This example will use the most simplistic set of (2+1).<br />
<br />
Create three x 2G files to serve as virtual hardrives:<br />
$ for i in {1..3}; do truncate -s 2G /scratch/$i.img; done<br />
<br />
Assemble the RAIDZ1:<br />
# zpool create zpool raidz1 /scratch/1.img /scratch/2.img /scratch/3.img<br />
<br />
Notice that a 3.91G zpool has been created and mounted for us:<br />
{{hc|# zfs list|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
test 139K 3.91G 38.6K /zpool<br />
</nowiki>}}<br />
<br />
The status of the device can be queried:<br />
{{hc|<br />
# zpool status zpool|<nowiki><br />
pool: zpool<br />
state: ONLINE<br />
scan: none requested<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
zpool ONLINE 0 0 0<br />
raidz1-0 ONLINE 0 0 0<br />
/scratch/1.img ONLINE 0 0 0<br />
/scratch/2.img ONLINE 0 0 0<br />
/scratch/3.img ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
</nowiki>}}<br />
<br />
To destroy a zpool:<br />
# zpool destroy zpool<br />
<br />
===RAIDZ2 and RAIDZ3===<br />
Higher level ZRAIDs can be assembled in a like fashion by adjusting the for statement to create the image files, by specifying "raidz2" or "raidz3" in the creation step, and by appending the additional image files to the creation step.<br />
<br />
Summarizing Toponce's guidance:<br />
* RAIDZ2 should use four (2+2), six (4+2), ten (8+2), or eighteen (16+2) disks.<br />
* RAIDZ3 should use five (2+3), seven (4+3), eleven (8+3), or nineteen (16+3) disks.<br />
<br />
<br />
=== Linear Span ===<br />
<br />
This setup is for a JBOD, good for 3 or less drives normally, where space is still a concern and your not ready to move to full features of ZFS yet because of it. RaidZ will be your better bet once you achieve enough space to satisfy, since this setup is NOT taking advantage of the full features of ZFS, but has its roots safely set in a beginning array that will suffice for years until you build up your hard drive collection. <br />
<br />
Assemble the Linear Span:<br />
# zpool create san /dev/sdd /dev/sde /dev/sdf<br />
<br />
{{hc|# zpool status san|<nowiki><br />
pool: san<br />
state: ONLINE<br />
scan: scrub repaired 0 in 4h22m with 0 errors on Fri Aug 28 23:52:55 2015<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
san ONLINE 0 0 0<br />
sde ONLINE 0 0 0<br />
sdd ONLINE 0 0 0<br />
sdf ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
</nowiki>}}<br />
<br />
== Creating and Destroying Datasets ==<br />
<br />
An example creating child datasets and using compression:<br />
* create the datasets<br />
# zfs create -p -o compression=on san/vault/falcon/snapshots<br />
# zfs create -o compression=on san/vault/falcon/version<br />
# zfs create -p -o compression=on san/vault/redtail/c/Users<br />
* now list the datasets (this was a linear span)<br />
{{hc|$ zfs list|<nowiki><br />
<br />
NAME USED AVAIL REFER MOUNTPOINT<br />
san 3.31T 2.04T 2.85M /san<br />
san/vault 3.31T 2.04T 136K /san/vault<br />
san/vault/falcon 171G 2.04T 100K /san/vault/falcon<br />
san/vault/falcon/snapshots 171G 2.04T 171G /san/vault/falcon/snapshots<br />
san/vault/falcon/version 160K 2.04T 96K /san/vault/falcon/version<br />
san/vault/gyrfalcon 564K 2.04T 132K /san/vault/gyrfalcon<br />
san/vault/gyrfalcon/snapshots 184K 2.04T 120K /san/vault/gyrfalcon/snapshots<br />
san/vault/gyrfalcon/version 184K 2.04T 120K /san/vault/gyrfalcon/version<br />
san/vault/osprey 170G 2.04T 170G /san/vault/osprey<br />
san/vault/osprey/snapshots 24.2M 2.04T 24.2M /san/vault/osprey/snapshots<br />
san/vault/osprey/version 120K 2.04T 120K /san/vault/osprey/version<br />
san/vault/redtail 2.98T 2.04T 17.1M /san/vault/redtail<br />
san/vault/redtail/c 779M 2.04T 6.37M /san/vault/redtail/c<br />
san/vault/redtail/c/AMD 4.44M 2.04T 4.24M /san/vault/redtail/c/AMD<br />
san/vault/redtail/c/Users 700M 2.04T 482M /san/vault/redtail/c/Users<br />
san/vault/redtail/d 1.59T 2.04T 124K /san/vault/redtail/d<br />
san/vault/redtail/d/UserFiles 1.59T 2.04T 1.59T /san/vault/redtail/d/UserFiles<br />
san/vault/redtail/d/archive 283M 2.04T 283M /san/vault/redtail/d/archive<br />
san/vault/redtail/e 1.34T 2.04T 124K /san/vault/redtail/e<br />
san/vault/redtail/e/PublicArchive 1.34T 2.04T 1.34T /san/vault/redtail/e/PublicArchive<br />
san/vault/redtail/e/archive 283M 2.04T 283M /san/vault/redtail/e/archive<br />
san/vault/redtail/snapshots 184K 2.04T 120K /san/vault/redtail/snapshots<br />
san/vault/redtail/version 44.3G 2.04T 43.9G /san/vault/redtail/version<br />
</nowiki>}}<br />
<br />
Note, there is a huge advantage(file deletion) for making a 3 level dataset. If you have large amounts of data, by separating by datasets, its easier to destroy a dataset than to try and wait for recursive file removal to complete.<br />
<br />
== Displaying and Setting Properties ==<br />
Without specifying them in the creation step, users can set properties of their zpools at any time after its creation using {{ic|/usr/bin/zfs}}.<br />
<br />
=== Show Properties ===<br />
To see the current properties of a given zpool:<br />
{{hc|# zfs get all zpool|<nowiki><br />
NAME PROPERTY VALUE SOURCE<br />
zpool type filesystem -<br />
zpool creation Sun Oct 20 8:46 2013 -<br />
zpool used 139K -<br />
zpool available 3.91G -<br />
zpool referenced 38.6K -<br />
zpool compressratio 1.00x -<br />
zpool mounted yes -<br />
zpool quota none default<br />
zpool reservation none default<br />
zpool recordsize 128K default<br />
zpool mountpoint /zpool default<br />
zpool sharenfs off default<br />
zpool checksum on default<br />
zpool compression off default<br />
zpool atime on default<br />
zpool devices on default<br />
zpool exec on default<br />
zpool setuid on default<br />
zpool readonly off default<br />
zpool zoned off default<br />
zpool snapdir hidden default<br />
zpool aclinherit restricted default<br />
zpool canmount on default<br />
zpool xattr on default<br />
zpool copies 1 default<br />
zpool version 5 -<br />
zpool utf8only off -<br />
zpool normalization none -<br />
zpool casesensitivity sensitive -<br />
zpool vscan off default<br />
zpool nbmand off default<br />
zpool sharesmb off default<br />
zpool refquota none default<br />
zpool refreservation none default<br />
zpool primarycache all default<br />
zpool secondarycache all default<br />
zpool usedbysnapshots 0 -<br />
zpool usedbydataset 38.6K -<br />
zpool usedbychildren 99.9K -<br />
zpool usedbyrefreservation 0 -<br />
zpool logbias latency default<br />
zpool dedup off default<br />
zpool mlslabel none default<br />
zpool sync standard default<br />
zpool refcompressratio 1.00x -<br />
zpool written 38.6K -<br />
zpool snapdev hidden default<br />
</nowiki>}}<br />
<br />
=== Modify properties ===<br />
Disable the recording of access time in the zpool:<br />
# zfs set atime=off zpool<br />
<br />
Verify that the property has been set on the zpool:<br />
{{hc|# zfs get atime|<nowiki><br />
NAME PROPERTY VALUE SOURCE<br />
zpool atime off local<br />
</nowiki>}}<br />
<br />
{{Tip|This option, like many others, can also be set when creating the zpool by appending {{ic|1=-O atime=off}} to the creation step.}}<br />
<br />
== Add Content to the Zpool and Query Compression Performance==<br />
Fill the zpool with files. For this example, first enable compression. ZFS supports several compression algorithms, including lzjb, gzip, gzip-N, zle, and lz4. A setting of simply 'on' selects the default algorithm (lzjb), but lz4 is a nice alternative. See the zfs man page for more.<br />
<br />
# zfs set compression=lz4 zpool<br />
<br />
In this example, the linux source tarball is copied over and since lz4 compression has been enabled on the zpool, the corresponding compression ratio can be queried as well.<br />
<br />
$ wget https://www.kernel.org/pub/linux/kernel/v3.x/linux-3.11.tar.xz<br />
$ tar xJf linux-3.11.tar.xz -C /zpool <br />
<br />
To see the compression ratio achieved:<br />
{{hc|# zfs get compressratio|<nowiki><br />
NAME PROPERTY VALUE SOURCE<br />
zpool compressratio 2.32x -<br />
</nowiki>}}<br />
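The compressratio property is simply logical (uncompressed) bytes divided by physical (compressed) bytes. For intuition, the same arithmetic can be reproduced with plain gzip on any compressible file; the commands below are only an illustration and do not involve ZFS:<br />

```shell
# Illustration only: compute a compression ratio the way ZFS reports
# compressratio (uncompressed size / compressed size), using gzip.
f=$(mktemp)
yes "the quick brown fox jumps over the lazy dog" | head -n 50000 > "$f"
orig=$(stat -c %s "$f")        # logical size in bytes
gzip -f "$f"                   # replaces $f with $f.gz
comp=$(stat -c %s "$f.gz")     # physical size in bytes
awk -v o="$orig" -v c="$comp" 'BEGIN { printf "%.2fx\n", o / c }'
rm -f "$f.gz"
```

Highly repetitive input like this compresses to a ratio far above the 2.32x seen for the kernel tree.<br />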
<br />
== Simulate a Disk Failure and Rebuild the Zpool ==<br />
To simulate catastrophic disk failure (i.e. one of the HDDs in the zpool stops functioning), zero out one of the VDEVs.<br />
$ dd if=/dev/zero of=/scratch/2.img bs=4M count=1 2>/dev/null<br />
<br />
Since we used a blocksize (bs) of 4M, the once 2G image file is now a mere 4M:<br />
{{hc|$ ls -lh /scratch |<nowiki><br />
total 317M<br />
-rw-r--r-- 1 facade users 2.0G Oct 20 09:13 1.img<br />
-rw-r--r-- 1 facade users 4.0M Oct 20 09:09 2.img<br />
-rw-r--r-- 1 facade users 2.0G Oct 20 09:13 3.img<br />
</nowiki>}}<br />
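The shrink from 2G to 4M is just dd's default behaviour: without {{ic|1=conv=notrunc}}, dd truncates the output file, so only the bytes actually written remain. This can be reproduced on any scratch file, no zpool needed (the temporary file below is hypothetical):<br />

```shell
# dd opens its output with O_TRUNC by default, so the 2G sparse image
# shrinks to exactly the bytes written: 4M blocksize x 1 block.
img=$(mktemp)
truncate -s 2G "$img"
stat -c %s "$img"                            # 2147483648
dd if=/dev/zero of="$img" bs=4M count=1 2>/dev/null
stat -c %s "$img"                            # 4194304
```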
<br />
The zpool remains online despite the corruption. Note that if a physical disk does fail, dmesg and related logs would be full of errors. To detect when damage occurs, users must execute a scrub operation.<br />
<br />
# zpool scrub zpool<br />
<br />
Depending on the size and speed of the underlying media as well as the amount of data in the zpool, the scrub may take hours to complete.<br />
The status of the scrub can be queried:<br />
{{hc|# zpool status zpool|<nowiki><br />
pool: zpool<br />
state: DEGRADED<br />
status: One or more devices could not be used because the label is missing or<br />
invalid. Sufficient replicas exist for the pool to continue<br />
functioning in a degraded state.<br />
action: Replace the device using 'zpool replace'.<br />
see: http://zfsonlinux.org/msg/ZFS-8000-4J<br />
scan: scrub repaired 0 in 0h0m with 0 errors on Sun Oct 20 09:13:39 2013<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
zpool DEGRADED 0 0 0<br />
raidz1-0 DEGRADED 0 0 0<br />
/scratch/1.img ONLINE 0 0 0<br />
/scratch/2.img UNAVAIL 0 0 0 corrupted data<br />
/scratch/3.img ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
</nowiki>}}<br />
<br />
Since we zeroed out one of our VDEVs, let's simulate adding a new 2G HDD by creating a new image file and adding it to the zpool:<br />
$ truncate -s 2G /scratch/new.img<br />
# zpool replace zpool /scratch/2.img /scratch/new.img<br />
<br />
Upon replacing the VDEV with a new one, ZFS rebuilds the data from the data and parity information in the remaining two good VDEVs. Check the status of this process:<br />
{{hc|# zpool status zpool |<nowiki><br />
pool: zpool<br />
state: ONLINE<br />
scan: resilvered 117M in 0h0m with 0 errors on Sun Oct 20 09:21:22 2013<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
zpool ONLINE 0 0 0<br />
raidz1-0 ONLINE 0 0 0<br />
/scratch/1.img ONLINE 0 0 0<br />
/scratch/new.img ONLINE 0 0 0<br />
/scratch/3.img ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
</nowiki>}}<br />
<br />
== Snapshots and Recovering Deleted Files ==<br />
Since ZFS is a copy-on-write filesystem, modified data is written to new blocks rather than overwriting the old data in place; saving changes to a file actually creates another copy of its changed blocks. Snapshots take advantage of this fact and allow users access to older versions of files, provided a snapshot has been taken.<br />
<br />
{{Note|When using snapshots, many Linux programs that report on filesystem space such as '''df''' will report inaccurate results due to the unique way snapshots are used on ZFS. The output of {{ic|/usr/bin/zfs list}} will deliver an accurate report of the amount of available and free space on the zpool.}}<br />
<br />
<br />
To keep this simple, we will create a dataset within the zpool and snapshot it. Snapshots can be taken either of the entire zpool or of a dataset within the pool. They differ only in their naming conventions:<br />
<br />
{| class="wikitable" align="center"<br />
|-<br />
! Snapshot Target !! Snapshot Name<br />
|-<br />
| Entire zpool || zpool@snapshot-name<br />
|- <br />
| Dataset || zpool/dataset@snapshot-name<br />
|- <br />
|}<br />
<br />
Make a new dataset and take ownership of it.<br />
# zfs create zpool/docs<br />
# chown facade:users /zpool/docs<br />
<br />
{{Note|The lack of a leading / in the create command is intentional, not a typo!}}<br />
<br />
=== Time 0 ===<br />
Add some files to the new dataset (/zpool/docs):<br />
$ wget -O /zpool/docs/Moby_Dick.txt http://www.gutenberg.org/ebooks/2701.txt.utf-8<br />
$ wget -O /zpool/docs/War_and_Peace.txt http://www.gutenberg.org/ebooks/2600.txt.utf-8<br />
$ wget -O /zpool/docs/Beowulf.txt http://www.gutenberg.org/ebooks/16328.txt.utf-8<br />
<br />
{{hc|# zfs list|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool 5.06M 3.91G 40.0K /zpool<br />
zpool/docs 4.92M 3.91G 4.92M /zpool/docs<br />
</nowiki>}}<br />
<br />
This is showing that we have 4.92M of data used by our books in /zpool/docs.<br />
<br />
=== Time +1 ===<br />
Now take a snapshot of the dataset:<br />
# zfs snapshot zpool/docs@001<br />
<br />
Again run the list command:<br />
{{hc|# zfs list|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool 5.07M 3.91G 40.0K /zpool<br />
zpool/docs 4.92M 3.91G 4.92M /zpool/docs<br />
</nowiki>}}<br />
<br />
Note that the size in the USED column did not change, showing that the snapshot takes up no space in the zpool since nothing has changed in these three files.<br />
<br />
We can list the snapshots like so and again confirm that the snapshot takes up no space, but instead '''refers to''' the original files, which take up 4.92M (their original size):<br />
{{hc|# zfs list -t snapshot|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool/docs@001 0 - 4.92M -<br />
</nowiki>}}<br />
<br />
=== Time +2 ===<br />
Now let's add some additional content and create a new snapshot:<br />
$ wget -O /zpool/docs/Les_Mis.txt http://www.gutenberg.org/ebooks/135.txt.utf-8<br />
# zfs snapshot zpool/docs@002<br />
<br />
Generate the new list to see how the space has changed:<br />
{{hc|# zfs list -t snapshot|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool/docs@001 25.3K - 4.92M -<br />
zpool/docs@002 0 - 8.17M -<br />
</nowiki>}}<br />
<br />
Here we can see that the 001 snapshot takes up 25.3K of metadata and still points to the original 4.92M of data, and the new snapshot takes up no space and refers to a total of 8.17M.<br />
<br />
=== Time +3 ===<br />
Now let's simulate an accidental overwrite of a file and subsequent data loss:<br />
$ echo "this book sucks" > /zpool/docs/War_and_Peace.txt<br />
<br />
Again, take another snapshot:<br />
# zfs snapshot zpool/docs@003<br />
<br />
Now list out the snapshots and notice that the amount of data referred to decreased by about 3.1M:<br />
{{hc|# zfs list -t snapshot|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool/docs@001 25.3K - 4.92M -<br />
zpool/docs@002 25.5K - 8.17M -<br />
zpool/docs@003 0 - 5.04M -<br />
</nowiki>}}<br />
<br />
We can easily recover from this situation by looking inside one or both of our older snapshots for a good copy of the file. ZFS stores its snapshots in a hidden directory under the dataset: {{ic|/zpool/docs/.zfs/snapshot}}:<br />
{{hc|$ ls -l /zpool/docs/.zfs/snapshot|<nowiki><br />
total 0<br />
dr-xr-xr-x 1 root root 0 Oct 20 16:09 001<br />
dr-xr-xr-x 1 root root 0 Oct 20 16:09 002<br />
dr-xr-xr-x 1 root root 0 Oct 20 16:09 003<br />
</nowiki>}}<br />
<br />
We can copy a good version of the book back out from any of our snapshots to any location on or off the zpool:<br />
$ cp /zpool/docs/.zfs/snapshot/002/War_and_Peace.txt /zpool/docs<br />
{{Note|Using <TAB> for autocompletion will not work by default but can be changed by modifying the ''snapdir'' property on the pool or dataset.}}<br />
<br />
# zfs set snapdir=visible zpool/docs<br />
<br />
Now enter a snapshot dir or two:<br />
$ cd /zpool/docs/.zfs/snapshot/001<br />
$ cd /zpool/docs/.zfs/snapshot/002<br />
<br />
Now check the output of df:<br />
$ df -h | grep zpool<br />
zpool 4.0G 0 4.0G 0% /zpool<br />
zpool/docs 4.0G 5.0M 4.0G 1% /zpool/docs<br />
zpool/docs@001 4.0G 4.9M 4.0G 1% /zpool/docs/.zfs/snapshot/001<br />
zpool/docs@002 4.0G 8.2M 4.0G 1% /zpool/docs/.zfs/snapshot/002<br />
<br />
{{Note|Each snapshot directory the user has entered will appear in the output of df; this is reset if the zpool is exported and reimported, or if the server is rebooted.}}<br />
<br />
For example:<br />
# zpool export zpool<br />
# zpool import -d /scratch/ zpool<br />
$ df -h | grep zpool<br />
zpool 4.0G 0 4.0G 0% /zpool<br />
zpool/docs 4.0G 5.0M 4.0G 1% /zpool/docs<br />
<br />
=== Time +4 ===<br />
Now that everything is back to normal, we can create another snapshot of this state:<br />
# zfs snapshot zpool/docs@004<br />
<br />
And the list now becomes:<br />
{{hc|# zfs list -t snapshot|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool/docs@001 25.3K - 4.92M -<br />
zpool/docs@002 25.5K - 8.17M -<br />
zpool/docs@003 155K - 5.04M -<br />
zpool/docs@004 0 - 8.17M -<br />
</nowiki>}}<br />
<br />
=== Listing Snapshots ===<br />
<br />
To list any snapshots on your system, run the following command:<br />
{{bc|<br />
$ zfs list -t snapshot<br />
NAME USED AVAIL REFER MOUNTPOINT<br />
san@initial_working 896K - 2.85M -<br />
san/vault@initial_working 80K - 152K -<br />
san/vault/falcon@initial_working 64K - 100K -<br />
san/vault/falcon/snapshots@initial_working 80K - 96K -<br />
san/vault/falcon/version@initial_working 64K - 96K -<br />
san/vault/gyrfalcon@initial_working 64K - 132K -<br />
san/vault/gyrfalcon/snapshots@initial_working 64K - 120K -<br />
san/vault/gyrfalcon/version@initial_working 64K - 120K -<br />
san/vault/osprey@initial_working 96K - 144K -<br />
san/vault/osprey/snapshots@initial_working 0 - 24.2M -<br />
san/vault/osprey/version@initial_working 0 - 120K -<br />
san/vault/redtail@initial_working 132K - 17.2M -<br />
san/vault/redtail/c@initial_working 67.3M - 72.9M -<br />
san/vault/redtail/c/AMD@initial_working 204K - 4.24M -<br />
san/vault/redtail/c/Users@initial_working 218M - 694M -<br />
san/vault/redtail/d@initial_working 76K - 132K -<br />
san/vault/redtail/d/UserFiles@initial_working 208M - 1.53T -<br />
san/vault/redtail/d/archive@initial_working 64K - 283M -<br />
san/vault/redtail/e@initial_working 76K - 132K -<br />
san/vault/redtail/e/PublicArchive@initial_working 31.9M - 1.34T -<br />
san/vault/redtail/e/archive@initial_working 64K - 283M -<br />
san/vault/redtail/snapshots@initial_working 64K - 120K -<br />
san/vault/redtail/version@initial_working 375M - 44.3G -<br />
}}<br />
<br />
<br />
=== Deleting Snapshots ===<br />
The limit on the number of snapshots users can save is 2^64. Users can delete a snapshot like so:<br />
# zfs destroy zpool/docs@001<br />
<br />
{{hc|# zfs list -t snapshot|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool/docs@002 3.28M - 8.17M -<br />
zpool/docs@003 155K - 5.04M -<br />
zpool/docs@004 0 - 8.17M -<br />
</nowiki>}}<br />
<br />
== Troubleshooting ==<br />
<br />
If your system is not configured to load the zfs pool at boot, if you want to manually remove and re-add the pool for any reason, or if you have lost your pool completely, a convenient way to recover is to use export/import. <br />
<br />
# zpool import san<br />
<br />
If you have any problems accessing your pool at any time, try exporting and reimporting it. <br />
<br />
# zpool export san<br />
# zpool import san</div>Wolfdogghttps://wiki.archlinux.org/index.php?title=ZFS/Virtual_disks&diff=398156ZFS/Virtual disks2015-09-03T23:12:48Z<p>Wolfdogg: /* Troubleshooting */</p>
<hr />
<div>[[Category:File systems]]<br />
{{Related articles start}}<br />
{{Related|ZFS}}<br />
{{Related|Installing Arch Linux on ZFS}}<br />
{{Related|ZFS on FUSE}}<br />
{{Related articles end}}<br />
<br />
This article covers some basic tasks and usage of ZFS. It differs from the main article [[ZFS]] somewhat in that the examples herein are demonstrated on a zpool built from virtual disks. So long as users do not place any critical data on the resulting zpool, they are free to experiment without fear of actual data loss.<br />
<br />
The examples in this article are shown with a set of virtual disks, known in ZFS terms as VDEVs. Users may create their VDEVs either on an existing physical disk or in tmpfs (a RAM disk), depending on the amount of free memory on the system.<br />
<br />
{{Note|Using a file as a VDEV is a great method to play with ZFS, but it is not a viable strategy for storing "real" data.}}<br />
<br />
== Install the ZFS Family of Packages ==<br />
Due to licensing differences, ZFS binaries and kernel modules are easily distributed from source, but not so easily packaged as pre-compiled sets. The requisite packages are available in the AUR and in an unofficial repository. Details are provided in the [[ZFS#Installation]] article.<br />
<br />
== Creating and Destroying Zpools ==<br />
Management of ZFS is quite simple, with only two utilities needed:<br />
* {{ic|/usr/bin/zpool}}<br />
* {{ic|/usr/bin/zfs}}<br />
<br />
=== Mirror ===<br />
For zpools with just two drives, it is recommended to use ZFS in ''mirror'' mode, which functions like RAID1 by mirroring the data. While this configuration is fine, RAIDZ levels are recommended when more drives are available.<br />
<br />
=== RAIDZ1 ===<br />
The minimum number of drives for a RAIDZ1 is three. It is best to follow the "power of two plus parity" recommendation. This is for storage space efficiency and hitting the "sweet spot" in performance. For RAIDZ1, use three (2+1), five (4+1), or nine (8+1) disks. This example will use the simplest set of (2+1).<br />
<br />
Create three 2G files to serve as virtual hard drives:<br />
$ for i in {1..3}; do truncate -s 2G /scratch/$i.img; done<br />
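As an aside, {{ic|truncate}} creates sparse files: each image reports an apparent size of 2G but allocates almost no real disk space until written to. A quick check, using a throwaway directory instead of /scratch:<br />

```shell
# Sparse VDEV images: 2G apparent size, near-zero allocated blocks.
dir=$(mktemp -d)
for i in 1 2 3; do truncate -s 2G "$dir/$i.img"; done
stat -c '%s bytes, %b blocks' "$dir/1.img"   # 2147483648 bytes, ~0 blocks
du -sh "$dir"                                # a few KiB at most
```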
<br />
Assemble the RAIDZ1:<br />
# zpool create zpool raidz1 /scratch/1.img /scratch/2.img /scratch/3.img<br />
<br />
Notice that a 3.91G zpool has been created and mounted for us:<br />
{{hc|# zfs list|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool      139K  3.91G  38.6K  /zpool<br />
</nowiki>}}<br />
<br />
The status of the device can be queried:<br />
{{hc|<br />
# zpool status zpool|<nowiki><br />
pool: zpool<br />
state: ONLINE<br />
scan: none requested<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
zpool ONLINE 0 0 0<br />
raidz1-0 ONLINE 0 0 0<br />
/scratch/1.img ONLINE 0 0 0<br />
/scratch/2.img ONLINE 0 0 0<br />
/scratch/3.img ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
</nowiki>}}<br />
<br />
To destroy a zpool:<br />
# zpool destroy zpool<br />
<br />
===RAIDZ2 and RAIDZ3===<br />
Higher RAIDZ levels can be assembled in like fashion by adjusting the for statement to create the additional image files, specifying "raidz2" or "raidz3" in the creation step, and appending the additional image files to the creation step.<br />
<br />
Summarizing Toponce's guidance:<br />
* RAIDZ2 should use four (2+2), six (4+2), ten (8+2), or eighteen (16+2) disks.<br />
* RAIDZ3 should use five (2+3), seven (4+3), eleven (8+3), or nineteen (16+3) disks.<br />
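The recommended drive counts above are just powers of two of data disks plus the parity drives, so they can be generated mechanically (the 16-data-disk variant for RAIDZ1 is included here for completeness):<br />

```shell
# "Power of two plus parity": RAIDZ-p wants 2, 4, 8, or 16 data disks
# plus p parity drives.
for p in 1 2 3; do
    line="RAIDZ$p:"
    for d in 2 4 8 16; do line="$line $((d + p))"; done
    echo "$line"
done
```

This prints {{ic|RAIDZ1: 3 5 9 17}}, {{ic|RAIDZ2: 4 6 10 18}}, and {{ic|RAIDZ3: 5 7 11 19}}, matching the lists above.<br />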
<br />
<br />
=== Linear Span ===<br />
<br />
This setup is a JBOD (linear span), normally good for three or fewer drives where capacity is still the primary concern. It does not take advantage of the redundancy features of ZFS, so a single drive failure will lose the pool; RAIDZ is the better choice once you have accumulated enough drives to satisfy your space requirements. <br />
<br />
Assemble the Linear Span:<br />
# zpool create san /dev/sdd /dev/sde /dev/sdf<br />
<br />
{{hc|# zpool status san|<nowiki><br />
pool: san<br />
state: ONLINE<br />
scan: scrub repaired 0 in 4h22m with 0 errors on Fri Aug 28 23:52:55 2015<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
san ONLINE 0 0 0<br />
sde ONLINE 0 0 0<br />
sdd ONLINE 0 0 0<br />
sdf ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
</nowiki>}}<br />
<br />
== Creating and Destroying Datasets ==<br />
<br />
An example creating child datasets and using compression:<br />
* create the datasets<br />
# zfs create -p -o compression=on san/vault/falcon/snapshots<br />
# zfs create -o compression=on san/vault/falcon/version<br />
# zfs create -p -o compression=on san/vault/redtail/c/Users<br />
* now list the datasets (this was a linear span)<br />
{{hc|$ zfs list|<nowiki><br />
<br />
NAME USED AVAIL REFER MOUNTPOINT<br />
san 3.31T 2.04T 2.85M /san<br />
san/vault 3.31T 2.04T 136K /san/vault<br />
san/vault/falcon 171G 2.04T 100K /san/vault/falcon<br />
san/vault/falcon/snapshots 171G 2.04T 171G /san/vault/falcon/snapshots<br />
san/vault/falcon/version 160K 2.04T 96K /san/vault/falcon/version<br />
san/vault/gyrfalcon 564K 2.04T 132K /san/vault/gyrfalcon<br />
san/vault/gyrfalcon/snapshots 184K 2.04T 120K /san/vault/gyrfalcon/snapshots<br />
san/vault/gyrfalcon/version 184K 2.04T 120K /san/vault/gyrfalcon/version<br />
san/vault/osprey 170G 2.04T 170G /san/vault/osprey<br />
san/vault/osprey/snapshots 24.2M 2.04T 24.2M /san/vault/osprey/snapshots<br />
san/vault/osprey/version 120K 2.04T 120K /san/vault/osprey/version<br />
san/vault/redtail 2.98T 2.04T 17.1M /san/vault/redtail<br />
san/vault/redtail/c 779M 2.04T 6.37M /san/vault/redtail/c<br />
san/vault/redtail/c/AMD 4.44M 2.04T 4.24M /san/vault/redtail/c/AMD<br />
san/vault/redtail/c/Users 700M 2.04T 482M /san/vault/redtail/c/Users<br />
san/vault/redtail/d 1.59T 2.04T 124K /san/vault/redtail/d<br />
san/vault/redtail/d/UserFiles 1.59T 2.04T 1.59T /san/vault/redtail/d/UserFiles<br />
san/vault/redtail/d/archive 283M 2.04T 283M /san/vault/redtail/d/archive<br />
san/vault/redtail/e 1.34T 2.04T 124K /san/vault/redtail/e<br />
san/vault/redtail/e/PublicArchive 1.34T 2.04T 1.34T /san/vault/redtail/e/PublicArchive<br />
san/vault/redtail/e/archive 283M 2.04T 283M /san/vault/redtail/e/archive<br />
san/vault/redtail/snapshots 184K 2.04T 120K /san/vault/redtail/snapshots<br />
san/vault/redtail/version 44.3G 2.04T 43.9G /san/vault/redtail/version<br />
</nowiki>}}<br />
<br />
Note, there is a huge advantage (fast file deletion) to building a three-level dataset hierarchy. If you have large amounts of data, separating it into datasets makes cleanup easier: destroying a dataset is much faster than waiting for a recursive file removal to complete.<br />
<br />
== Displaying and Setting Properties ==<br />
If properties are not specified in the creation step, users can set properties of their zpools at any time after creation using {{ic|/usr/bin/zfs}}.<br />
<br />
=== Show Properties ===<br />
To see the current properties of a given zpool:<br />
{{hc|# zfs get all zpool|<nowiki><br />
NAME PROPERTY VALUE SOURCE<br />
zpool type filesystem -<br />
zpool creation Sun Oct 20 8:46 2013 -<br />
zpool used 139K -<br />
zpool available 3.91G -<br />
zpool referenced 38.6K -<br />
zpool compressratio 1.00x -<br />
zpool mounted yes -<br />
zpool quota none default<br />
zpool reservation none default<br />
zpool recordsize 128K default<br />
zpool mountpoint /zpool default<br />
zpool sharenfs off default<br />
zpool checksum on default<br />
zpool compression off default<br />
zpool atime on default<br />
zpool devices on default<br />
zpool exec on default<br />
zpool setuid on default<br />
zpool readonly off default<br />
zpool zoned off default<br />
zpool snapdir hidden default<br />
zpool aclinherit restricted default<br />
zpool canmount on default<br />
zpool xattr on default<br />
zpool copies 1 default<br />
zpool version 5 -<br />
zpool utf8only off -<br />
zpool normalization none -<br />
zpool casesensitivity sensitive -<br />
zpool vscan off default<br />
zpool nbmand off default<br />
zpool sharesmb off default<br />
zpool refquota none default<br />
zpool refreservation none default<br />
zpool primarycache all default<br />
zpool secondarycache all default<br />
zpool usedbysnapshots 0 -<br />
zpool usedbydataset 38.6K -<br />
zpool usedbychildren 99.9K -<br />
zpool usedbyrefreservation 0 -<br />
zpool logbias latency default<br />
zpool dedup off default<br />
zpool mlslabel none default<br />
zpool sync standard default<br />
zpool refcompressratio 1.00x -<br />
zpool written 38.6K -<br />
zpool snapdev hidden default<br />
</nowiki>}}<br />
<br />
=== Modify properties ===<br />
Disable the recording of access time in the zpool:<br />
# zfs set atime=off zpool<br />
<br />
Verify that the property has been set on the zpool:<br />
{{hc|# zfs get atime|<nowiki><br />
NAME PROPERTY VALUE SOURCE<br />
zpool atime off local<br />
</nowiki>}}<br />
<br />
{{Tip|This option, like many others, can also be set when creating the zpool by appending {{ic|1=-O atime=off}} to the creation step.}}<br />
<br />
== Add Content to the Zpool and Query Compression Performance==<br />
Fill the zpool with files. For this example, first enable compression. ZFS supports several compression algorithms, including lzjb, gzip, gzip-N, zle, and lz4. A setting of simply 'on' selects the default algorithm (lzjb), but lz4 is a nice alternative. See the zfs man page for more.<br />
<br />
# zfs set compression=lz4 zpool<br />
<br />
In this example, the linux source tarball is copied over and since lz4 compression has been enabled on the zpool, the corresponding compression ratio can be queried as well.<br />
<br />
$ wget https://www.kernel.org/pub/linux/kernel/v3.x/linux-3.11.tar.xz<br />
$ tar xJf linux-3.11.tar.xz -C /zpool <br />
<br />
To see the compression ratio achieved:<br />
{{hc|# zfs get compressratio|<nowiki><br />
NAME PROPERTY VALUE SOURCE<br />
zpool compressratio 2.32x -<br />
</nowiki>}}<br />
<br />
== Simulate a Disk Failure and Rebuild the Zpool ==<br />
To simulate catastrophic disk failure (i.e. one of the HDDs in the zpool stops functioning), zero out one of the VDEVs.<br />
$ dd if=/dev/zero of=/scratch/2.img bs=4M count=1 2>/dev/null<br />
<br />
Since we used a blocksize (bs) of 4M, the once 2G image file is now a mere 4M:<br />
{{hc|$ ls -lh /scratch |<nowiki><br />
total 317M<br />
-rw-r--r-- 1 facade users 2.0G Oct 20 09:13 1.img<br />
-rw-r--r-- 1 facade users 4.0M Oct 20 09:09 2.img<br />
-rw-r--r-- 1 facade users 2.0G Oct 20 09:13 3.img<br />
</nowiki>}}<br />
<br />
The zpool remains online despite the corruption. Note that if a physical disk does fail, dmesg and related logs would be full of errors. To detect when damage occurs, users must execute a scrub operation.<br />
<br />
# zpool scrub zpool<br />
<br />
Depending on the size and speed of the underlying media as well as the amount of data in the zpool, the scrub may take hours to complete.<br />
The status of the scrub can be queried:<br />
{{hc|# zpool status zpool|<nowiki><br />
pool: zpool<br />
state: DEGRADED<br />
status: One or more devices could not be used because the label is missing or<br />
invalid. Sufficient replicas exist for the pool to continue<br />
functioning in a degraded state.<br />
action: Replace the device using 'zpool replace'.<br />
see: http://zfsonlinux.org/msg/ZFS-8000-4J<br />
scan: scrub repaired 0 in 0h0m with 0 errors on Sun Oct 20 09:13:39 2013<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
zpool DEGRADED 0 0 0<br />
raidz1-0 DEGRADED 0 0 0<br />
/scratch/1.img ONLINE 0 0 0<br />
/scratch/2.img UNAVAIL 0 0 0 corrupted data<br />
/scratch/3.img ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
</nowiki>}}<br />
<br />
Since we zeroed out one of our VDEVs, let's simulate adding a new 2G HDD by creating a new image file and adding it to the zpool:<br />
$ truncate -s 2G /scratch/new.img<br />
# zpool replace zpool /scratch/2.img /scratch/new.img<br />
<br />
Upon replacing the VDEV with a new one, ZFS rebuilds the data from the data and parity information in the remaining two good VDEVs. Check the status of this process:<br />
{{hc|# zpool status zpool |<nowiki><br />
pool: zpool<br />
state: ONLINE<br />
scan: resilvered 117M in 0h0m with 0 errors on Sun Oct 20 09:21:22 2013<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
zpool ONLINE 0 0 0<br />
raidz1-0 ONLINE 0 0 0<br />
/scratch/1.img ONLINE 0 0 0<br />
/scratch/new.img ONLINE 0 0 0<br />
/scratch/3.img ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
</nowiki>}}<br />
<br />
== Snapshots and Recovering Deleted Files ==<br />
Since ZFS is a copy-on-write filesystem, modified data is written to new blocks rather than overwriting the old data in place; saving changes to a file actually creates another copy of its changed blocks. Snapshots take advantage of this fact and allow users access to older versions of files, provided a snapshot has been taken.<br />
<br />
{{Note|When using snapshots, many Linux programs that report on filesystem space such as '''df''' will report inaccurate results due to the unique way snapshots are used on ZFS. The output of {{ic|/usr/bin/zfs list}} will deliver an accurate report of the amount of available and free space on the zpool.}}<br />
<br />
<br />
To keep this simple, we will create a dataset within the zpool and snapshot it. Snapshots can be taken either of the entire zpool or of a dataset within the pool. They differ only in their naming conventions:<br />
<br />
{| class="wikitable" align="center"<br />
|-<br />
! Snapshot Target !! Snapshot Name<br />
|-<br />
| Entire zpool || zpool@snapshot-name<br />
|- <br />
| Dataset || zpool/dataset@snapshot-name<br />
|- <br />
|}<br />
<br />
Make a new dataset and take ownership of it.<br />
# zfs create zpool/docs<br />
# chown facade:users /zpool/docs<br />
<br />
{{Note|The lack of a leading / in the create command is intentional, not a typo!}}<br />
<br />
=== Listing Snapshots ===<br />
<br />
To list any snapshots on your system, run the following command:<br />
{{bc|<br />
$ zfs list -t snapshot<br />
NAME USED AVAIL REFER MOUNTPOINT<br />
san@initial_working 896K - 2.85M -<br />
san/vault@initial_working 80K - 152K -<br />
san/vault/falcon@initial_working 64K - 100K -<br />
san/vault/falcon/snapshots@initial_working 80K - 96K -<br />
san/vault/falcon/version@initial_working 64K - 96K -<br />
san/vault/gyrfalcon@initial_working 64K - 132K -<br />
san/vault/gyrfalcon/snapshots@initial_working 64K - 120K -<br />
san/vault/gyrfalcon/version@initial_working 64K - 120K -<br />
san/vault/osprey@initial_working 96K - 144K -<br />
san/vault/osprey/snapshots@initial_working 0 - 24.2M -<br />
san/vault/osprey/version@initial_working 0 - 120K -<br />
san/vault/redtail@initial_working 132K - 17.2M -<br />
san/vault/redtail/c@initial_working 67.3M - 72.9M -<br />
san/vault/redtail/c/AMD@initial_working 204K - 4.24M -<br />
san/vault/redtail/c/Users@initial_working 218M - 694M -<br />
san/vault/redtail/d@initial_working 76K - 132K -<br />
san/vault/redtail/d/UserFiles@initial_working 208M - 1.53T -<br />
san/vault/redtail/d/archive@initial_working 64K - 283M -<br />
san/vault/redtail/e@initial_working 76K - 132K -<br />
san/vault/redtail/e/PublicArchive@initial_working 31.9M - 1.34T -<br />
san/vault/redtail/e/archive@initial_working 64K - 283M -<br />
san/vault/redtail/snapshots@initial_working 64K - 120K -<br />
san/vault/redtail/version@initial_working 375M - 44.3G -<br />
}}<br />
<br />
=== Time 0 ===<br />
Add some files to the new dataset (/zpool/docs):<br />
$ wget -O /zpool/docs/Moby_Dick.txt http://www.gutenberg.org/ebooks/2701.txt.utf-8<br />
$ wget -O /zpool/docs/War_and_Peace.txt http://www.gutenberg.org/ebooks/2600.txt.utf-8<br />
$ wget -O /zpool/docs/Beowulf.txt http://www.gutenberg.org/ebooks/16328.txt.utf-8<br />
<br />
{{hc|# zfs list|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool 5.06M 3.91G 40.0K /zpool<br />
zpool/docs 4.92M 3.91G 4.92M /zpool/docs<br />
</nowiki>}}<br />
<br />
This is showing that we have 4.92M of data used by our books in /zpool/docs.<br />
<br />
=== Time +1 ===<br />
Now take a snapshot of the dataset:<br />
# zfs snapshot zpool/docs@001<br />
<br />
Again run the list command:<br />
{{hc|# zfs list|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool 5.07M 3.91G 40.0K /zpool<br />
zpool/docs 4.92M 3.91G 4.92M /zpool/docs<br />
</nowiki>}}<br />
<br />
Note that the size in the USED column did not change, showing that the snapshot takes up no space in the zpool since nothing has changed in these three files.<br />
<br />
We can list the snapshots like so and again confirm that the snapshot takes up no space, but instead '''refers to''' the original files, which take up 4.92M (their original size):<br />
{{hc|# zfs list -t snapshot|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool/docs@001 0 - 4.92M -<br />
</nowiki>}}<br />
<br />
=== Time +2 ===<br />
Now let's add some additional content and create a new snapshot:<br />
$ wget -O /zpool/docs/Les_Mis.txt http://www.gutenberg.org/ebooks/135.txt.utf-8<br />
# zfs snapshot zpool/docs@002<br />
<br />
Generate the new list to see how the space has changed:<br />
{{hc|# zfs list -t snapshot|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool/docs@001 25.3K - 4.92M -<br />
zpool/docs@002 0 - 8.17M -<br />
</nowiki>}}<br />
<br />
Here we can see that the 001 snapshot takes up 25.3K of metadata and still points to the original 4.92M of data, and the new snapshot takes up no space and refers to a total of 8.17M.<br />
<br />
=== Time +3 ===<br />
Now let's simulate an accidental overwrite of a file and subsequent data loss:<br />
$ echo "this book sucks" > /zpool/docs/War_and_Peace.txt<br />
<br />
Again, take another snapshot:<br />
# zfs snapshot zpool/docs@003<br />
<br />
Now list out the snapshots and notice that the amount of data referred to decreased by about 3.1M:<br />
{{hc|# zfs list -t snapshot|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool/docs@001 25.3K - 4.92M -<br />
zpool/docs@002 25.5K - 8.17M -<br />
zpool/docs@003 0 - 5.04M -<br />
</nowiki>}}<br />
<br />
We can easily recover from this situation by looking inside one of our older snapshots for a good copy of the file. ZFS exposes snapshots in a hidden directory under each dataset, in this case {{ic|/zpool/docs/.zfs/snapshot}}:<br />
{{hc|$ ls -l /zpool/docs/.zfs/snapshot|<nowiki><br />
total 0<br />
dr-xr-xr-x 1 root root 0 Oct 20 16:09 001<br />
dr-xr-xr-x 1 root root 0 Oct 20 16:09 002<br />
dr-xr-xr-x 1 root root 0 Oct 20 16:09 003<br />
</nowiki>}}<br />
<br />
We can copy a good version of the book back out from any of our snapshots to any location on or off the zpool:<br />
 $ cp /zpool/docs/.zfs/snapshot/002/War_and_Peace.txt /zpool/docs<br />
{{Note|Using <TAB> for autocompletion on the hidden {{ic|.zfs}} directory will not work by default, but can be enabled by setting the ''snapdir'' property on the pool or dataset to ''visible'':}}<br />
<br />
# zfs set snapdir=visible zpool/docs<br />
<br />
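The copy-out step above can be wrapped in a small helper. This is only a sketch: the function name is made up, and it relies solely on the documented {{ic|<mountpoint>/.zfs/snapshot/<name>/}} layout.<br />

```shell
#!/bin/bash
# zfs_restore: copy a file out of a dataset's snapshot directory back into
# the live dataset. Illustrative helper; the name is not a real ZFS command.
zfs_restore() {
    local mountpoint=$1 snapshot=$2 file=$3
    local src="$mountpoint/.zfs/snapshot/$snapshot/$file"
    if [ ! -f "$src" ]; then
        echo "no such file in snapshot: $src" >&2
        return 1
    fi
    cp -- "$src" "$mountpoint/$file"
}
```

For example, {{ic|zfs_restore /zpool/docs 002 War_and_Peace.txt}} would recover the book from snapshot 002.<br />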
Now enter a snapshot dir or two:<br />
$ cd /zpool/docs/.zfs/snapshot/001<br />
$ cd /zpool/docs/.zfs/snapshot/002<br />
<br />
Repeat the df command:<br />
$ df -h | grep zpool<br />
zpool 4.0G 0 4.0G 0% /zpool<br />
zpool/docs 4.0G 5.0M 4.0G 1% /zpool/docs<br />
zpool/docs@001 4.0G 4.9M 4.0G 1% /zpool/docs/.zfs/snapshot/001<br />
zpool/docs@002 4.0G 8.2M 4.0G 1% /zpool/docs/.zfs/snapshot/002<br />
<br />
{{Note|Each snapshot directory under {{ic|.zfs}} that the user enters is automatically mounted and shows up in the {{ic|df}} output; these mounts are cleared if the zpool is exported and reimported, or if the server is rebooted.}}<br />
<br />
For example:<br />
# zpool export zpool<br />
# zpool import -d /scratch/ zpool<br />
$ df -h | grep zpool<br />
zpool 4.0G 0 4.0G 0% /zpool<br />
zpool/docs 4.0G 5.0M 4.0G 1% /zpool/docs<br />
<br />
=== Time +4 ===<br />
Now that everything is back to normal, we can create another snapshot of this state:<br />
# zfs snapshot zpool/docs@004<br />
<br />
And the list now becomes:<br />
{{hc|# zfs list -t snapshot|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool/docs@001 25.3K - 4.92M -<br />
zpool/docs@002 25.5K - 8.17M -<br />
zpool/docs@003 155K - 5.04M -<br />
zpool/docs@004 0 - 8.17M -<br />
</nowiki>}}<br />
<br />
=== Deleting Snapshots ===<br />
The limit to the number of snapshots users can save is 2^64. Users can delete a snapshot like so:<br />
# zfs destroy zpool/docs@001<br />
<br />
{{hc|# zfs list -t snapshot|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool/docs@002 3.28M - 8.17M -<br />
zpool/docs@003 155K - 5.04M -<br />
zpool/docs@004 0 - 8.17M -<br />
</nowiki>}}<br />
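A simple retention policy can be sketched on top of {{ic|zfs destroy}}. The helper below is only a text filter (the function name is made up): it expects snapshot names on stdin, oldest first, as {{ic|zfs list -H -t snapshot -o name -s creation}} would print them, and prints the destroy commands rather than running them.<br />

```shell
#!/bin/bash
# prune_commands: read snapshot names on stdin, oldest first, and print the
# `zfs destroy` command for every snapshot except the newest $1.
# Nothing is destroyed here; review the output before piping it to sh.
# Note: `head -n -K` (drop the last K lines) requires GNU coreutils.
prune_commands() {
    local keep=$1
    head -n -"$keep" | while read -r snap; do
        echo "zfs destroy $snap"
    done
}
```

For example, zfs list -H -t snapshot -o name -s creation | prune_commands 2 would print destroy commands for everything except the two newest snapshots.<br />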
<br />
== Troubleshooting ==<br />
<br />
If your system is not configured to import the ZFS pool at boot, if you want to manually remove and re-add the pool for any reason, or if you have lost your pool completely, a convenient recovery method is the export/import pair of commands. <br />
<br />
# zpool import san<br />
<br />
If you have any problems accessing your pool at any time, try exporting and reimporting it. <br />
<br />
 # zpool export san<br />
 # zpool import san</div>Wolfdogghttps://wiki.archlinux.org/index.php?title=ZFS/Virtual_disks&diff=398155ZFS/Virtual disks2015-09-03T23:12:37Z<p>Wolfdogg: /* Linear Span */</p>
<hr />
<div>[[Category:File systems]]<br />
{{Related articles start}}<br />
{{Related|ZFS}}<br />
{{Related|Installing Arch Linux on ZFS}}<br />
{{Related|ZFS on FUSE}}<br />
{{Related articles end}}<br />
<br />
This article covers some basic tasks and usage of ZFS. It differs from the main article [[ZFS]] somewhat in that the examples herein are demonstrated on a zpool built from virtual disks. So long as users do not place any critical data on the resulting zpool, they are free to experiment without fear of actual data loss.<br />
<br />
The examples in this article are shown with a set of virtual disks, known in ZFS terms as VDEVs. Users may create their VDEVs either on an existing physical disk or in tmpfs (RAMdisk), depending on the amount of free memory on the system.<br />
<br />
{{Note|Using a file as a VDEV is a great way to experiment with ZFS, but it is not a viable strategy for storing "real" data.}}<br />
<br />
== Install the ZFS Family of Packages ==<br />
Due to differences in licensing, ZFS binaries and kernel modules are easily distributed in source form, but not so easily packaged as pre-compiled sets. The requisite packages are available in the AUR and in an unofficial repository. Details are provided in the [[ZFS#Installation]] article.<br />
<br />
== Creating and Destroying Zpools ==<br />
Management of ZFS is simple, with only two utilities needed:<br />
* {{ic|/usr/bin/zpool}}<br />
* {{ic|/usr/bin/zfs}}<br />
<br />
=== Mirror ===<br />
For zpools with just two drives, it is recommended to use ZFS in ''mirror'' mode, which functions like a RAID1, mirroring the data. While this configuration is fine, the RAIDZ levels below are recommended for larger sets of drives.<br />
<br />
=== RAIDZ1 ===<br />
The minimum number of drives for a RAIDZ1 is three. It is best to follow the "power of two plus parity" recommendation, both for storage space efficiency and for hitting the "sweet spot" in performance. For RAIDZ1, use three (2+1), five (4+1), or nine (8+1) disks. This example will use the simplest set, (2+1).<br />
<br />
Create three 2G files to serve as virtual hard drives:<br />
$ for i in {1..3}; do truncate -s 2G /scratch/$i.img; done<br />
<br />
Assemble the RAIDZ1:<br />
# zpool create zpool raidz1 /scratch/1.img /scratch/2.img /scratch/3.img<br />
<br />
Notice that a 3.91G zpool has been created and mounted for us:<br />
{{hc|# zfs list|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool 139K 3.91G 38.6K /zpool<br />
</nowiki>}}<br />
<br />
The status of the device can be queried:<br />
{{hc|<br />
# zpool status zpool|<nowiki><br />
pool: zpool<br />
state: ONLINE<br />
scan: none requested<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
zpool ONLINE 0 0 0<br />
raidz1-0 ONLINE 0 0 0<br />
/scratch/1.img ONLINE 0 0 0<br />
/scratch/2.img ONLINE 0 0 0<br />
/scratch/3.img ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
</nowiki>}}<br />
<br />
To destroy a zpool:<br />
# zpool destroy zpool<br />
<br />
===RAIDZ2 and RAIDZ3===<br />
Higher-level RAIDZs can be assembled in like fashion: adjust the ''for'' statement to create the additional image files, specify "raidz2" or "raidz3" in the creation step, and append the additional image files to the creation step.<br />
<br />
Summarizing Toponce's guidance:<br />
* RAIDZ2 should use four (2+2), six (4+2), ten (8+2), or eighteen (16+2) disks.<br />
* RAIDZ3 should use five (2+3), seven (4+3), eleven (8+3), or nineteen (16+3) disks.<br />
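The usable capacity implied by these layouts is roughly (disks − parity) × disk size, ignoring metadata overhead. A throwaway shell function (the name is made up) illustrates the arithmetic:<br />

```shell
#!/bin/bash
# raidz_usable: rough usable capacity of a RAIDZ vdev, in the same unit as
# the disk size: (disks - parity) * size. Ignores metadata/slop overhead.
raidz_usable() {
    local disks=$1 parity=$2 size=$3
    echo $(( (disks - parity) * size ))
}
```

For the 3 x 2G RAIDZ1 above, {{ic|raidz_usable 3 1 2}} gives 4, in line with the 3.91G the pool reports after overhead.<br />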
<br />
<br />
=== Linear Span ===<br />
<br />
This setup is a JBOD (linear span), normally good for three or fewer drives where space is still a concern and you are not yet ready to move to the full features of ZFS because of it. RAIDZ will be your better bet once you have enough drives, since this setup does NOT take advantage of the redundancy features of ZFS, but it is a serviceable beginning array that can suffice for years while you build up your hard drive collection. <br />
<br />
Assemble the Linear Span:<br />
# zpool create san /dev/sdd /dev/sde /dev/sdf<br />
<br />
{{hc|# zpool status san|<nowiki><br />
pool: san<br />
state: ONLINE<br />
scan: scrub repaired 0 in 4h22m with 0 errors on Fri Aug 28 23:52:55 2015<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
san ONLINE 0 0 0<br />
sde ONLINE 0 0 0<br />
sdd ONLINE 0 0 0<br />
sdf ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
</nowiki>}}<br />
<br />
== Creating and Destroying Datasets ==<br />
<br />
An example creating child datasets and using compression:<br />
* create the datasets<br />
# zfs create -p -o compression=on san/vault/falcon/snapshots<br />
# zfs create -o compression=on san/vault/falcon/version<br />
# zfs create -p -o compression=on san/vault/redtail/c/Users<br />
* now list the datasets (this was a linear span)<br />
{{hc|$ zfs list|<nowiki><br />
<br />
NAME USED AVAIL REFER MOUNTPOINT<br />
san 3.31T 2.04T 2.85M /san<br />
san/vault 3.31T 2.04T 136K /san/vault<br />
san/vault/falcon 171G 2.04T 100K /san/vault/falcon<br />
san/vault/falcon/snapshots 171G 2.04T 171G /san/vault/falcon/snapshots<br />
san/vault/falcon/version 160K 2.04T 96K /san/vault/falcon/version<br />
san/vault/gyrfalcon 564K 2.04T 132K /san/vault/gyrfalcon<br />
san/vault/gyrfalcon/snapshots 184K 2.04T 120K /san/vault/gyrfalcon/snapshots<br />
san/vault/gyrfalcon/version 184K 2.04T 120K /san/vault/gyrfalcon/version<br />
san/vault/osprey 170G 2.04T 170G /san/vault/osprey<br />
san/vault/osprey/snapshots 24.2M 2.04T 24.2M /san/vault/osprey/snapshots<br />
san/vault/osprey/version 120K 2.04T 120K /san/vault/osprey/version<br />
san/vault/redtail 2.98T 2.04T 17.1M /san/vault/redtail<br />
san/vault/redtail/c 779M 2.04T 6.37M /san/vault/redtail/c<br />
san/vault/redtail/c/AMD 4.44M 2.04T 4.24M /san/vault/redtail/c/AMD<br />
san/vault/redtail/c/Users 700M 2.04T 482M /san/vault/redtail/c/Users<br />
san/vault/redtail/d 1.59T 2.04T 124K /san/vault/redtail/d<br />
san/vault/redtail/d/UserFiles 1.59T 2.04T 1.59T /san/vault/redtail/d/UserFiles<br />
san/vault/redtail/d/archive 283M 2.04T 283M /san/vault/redtail/d/archive<br />
san/vault/redtail/e 1.34T 2.04T 124K /san/vault/redtail/e<br />
san/vault/redtail/e/PublicArchive 1.34T 2.04T 1.34T /san/vault/redtail/e/PublicArchive<br />
san/vault/redtail/e/archive 283M 2.04T 283M /san/vault/redtail/e/archive<br />
san/vault/redtail/snapshots 184K 2.04T 120K /san/vault/redtail/snapshots<br />
san/vault/redtail/version 44.3G 2.04T 43.9G /san/vault/redtail/version<br />
</nowiki>}}<br />
<br />
Note that there is a huge advantage (fast file deletion) to making a three-level dataset hierarchy. If you have large amounts of data, separating it into datasets makes cleanup easier: destroying a dataset is much faster than waiting for recursive file removal to complete.<br />
<br />
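Creating a whole tree like the one above can be scripted. The sketch below (function name made up) only prints the {{ic|zfs create}} commands for review; it runs nothing against the pool.<br />

```shell
#!/bin/bash
# dataset_create_commands: read dataset names on stdin and print one
# `zfs create` invocation per dataset, with compression enabled.
dataset_create_commands() {
    while read -r ds; do
        echo "zfs create -p -o compression=on $ds"
    done
}
```
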
== Displaying and Setting Properties ==<br />
If not specified in the creation step, properties of a zpool can be set at any time after its creation using {{ic|/usr/bin/zfs}}.<br />
<br />
=== Show Properties ===<br />
To see the current properties of a given zpool:<br />
{{hc|# zfs get all zpool|<nowiki><br />
NAME PROPERTY VALUE SOURCE<br />
zpool type filesystem -<br />
zpool creation Sun Oct 20 8:46 2013 -<br />
zpool used 139K -<br />
zpool available 3.91G -<br />
zpool referenced 38.6K -<br />
zpool compressratio 1.00x -<br />
zpool mounted yes -<br />
zpool quota none default<br />
zpool reservation none default<br />
zpool recordsize 128K default<br />
zpool mountpoint /zpool default<br />
zpool sharenfs off default<br />
zpool checksum on default<br />
zpool compression off default<br />
zpool atime on default<br />
zpool devices on default<br />
zpool exec on default<br />
zpool setuid on default<br />
zpool readonly off default<br />
zpool zoned off default<br />
zpool snapdir hidden default<br />
zpool aclinherit restricted default<br />
zpool canmount on default<br />
zpool xattr on default<br />
zpool copies 1 default<br />
zpool version 5 -<br />
zpool utf8only off -<br />
zpool normalization none -<br />
zpool casesensitivity sensitive -<br />
zpool vscan off default<br />
zpool nbmand off default<br />
zpool sharesmb off default<br />
zpool refquota none default<br />
zpool refreservation none default<br />
zpool primarycache all default<br />
zpool secondarycache all default<br />
zpool usedbysnapshots 0 -<br />
zpool usedbydataset 38.6K -<br />
zpool usedbychildren 99.9K -<br />
zpool usedbyrefreservation 0 -<br />
zpool logbias latency default<br />
zpool dedup off default<br />
zpool mlslabel none default<br />
zpool sync standard default<br />
zpool refcompressratio 1.00x -<br />
zpool written 38.6K -<br />
zpool snapdev hidden default<br />
</nowiki>}}<br />
<br />
=== Modify properties ===<br />
Disable the recording of access time in the zpool:<br />
# zfs set atime=off zpool<br />
<br />
Verify that the property has been set on the zpool:<br />
{{hc|# zfs get atime|<nowiki><br />
NAME PROPERTY VALUE SOURCE<br />
zpool atime off local<br />
</nowiki>}}<br />
<br />
{{Tip|This option, like many others, can also be set when creating the zpool by appending {{ic|-O atime=off}} to the creation step.}}<br />
<br />
== Add Content to the Zpool and Query Compression Performance==<br />
Fill the zpool with files. For this example, first enable compression. ZFS supports several compression algorithms, including lzjb, gzip, gzip-N, zle, and lz4. A setting of simply 'on' selects the default algorithm (lzjb), but lz4 is a nice alternative. See the zfs man page for more.<br />
<br />
# zfs set compression=lz4 zpool<br />
<br />
In this example, the linux source tarball is copied over and since lz4 compression has been enabled on the zpool, the corresponding compression ratio can be queried as well.<br />
<br />
$ wget https://www.kernel.org/pub/linux/kernel/v3.x/linux-3.11.tar.xz<br />
$ tar xJf linux-3.11.tar.xz -C /zpool <br />
<br />
To see the compression ratio achieved:<br />
{{hc|# zfs get compressratio|<nowiki><br />
NAME PROPERTY VALUE SOURCE<br />
zpool compressratio 2.32x -<br />
</nowiki>}}<br />
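The ratio can be translated into space saved as 100 × (1 − 1/ratio). A small helper (illustrative only) does the conversion on a value such as the "2.32x" above:<br />

```shell
#!/bin/bash
# ratio_to_saved: convert a zfs compressratio string like "2.32x" into the
# approximate percentage of space saved: 100 * (1 - 1/ratio).
ratio_to_saved() {
    awk -v r="${1%x}" 'BEGIN { printf "%.0f%%\n", 100 * (1 - 1 / r) }'
}
```
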
<br />
== Simulate a Disk Failure and Rebuild the Zpool ==<br />
To simulate catastrophic disk failure (i.e. one of the HDDs in the zpool stops functioning), zero out one of the VDEVs.<br />
$ dd if=/dev/zero of=/scratch/2.img bs=4M count=1 2>/dev/null<br />
<br />
Since we used a blocksize (bs) of 4M, the once 2G image file is now a mere 4M:<br />
{{hc|$ ls -lh /scratch |<nowiki><br />
total 317M<br />
-rw-r--r-- 1 facade users 2.0G Oct 20 09:13 1.img<br />
-rw-r--r-- 1 facade users 4.0M Oct 20 09:09 2.img<br />
-rw-r--r-- 1 facade users 2.0G Oct 20 09:13 3.img<br />
</nowiki>}}<br />
<br />
The zpool remains online despite the corruption. Note that if a physical disk had failed, dmesg and related logs would be full of errors. To detect when damage occurs, users must execute a scrub operation.<br />
<br />
# zpool scrub zpool<br />
<br />
Depending on the size and speed of the underlying media as well as the amount of data in the zpool, the scrub may take hours to complete.<br />
The status of the scrub can be queried:<br />
{{hc|# zpool status zpool|<nowiki><br />
pool: zpool<br />
state: DEGRADED<br />
status: One or more devices could not be used because the label is missing or<br />
invalid. Sufficient replicas exist for the pool to continue<br />
functioning in a degraded state.<br />
action: Replace the device using 'zpool replace'.<br />
see: http://zfsonlinux.org/msg/ZFS-8000-4J<br />
scan: scrub repaired 0 in 0h0m with 0 errors on Sun Oct 20 09:13:39 2013<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
zpool DEGRADED 0 0 0<br />
raidz1-0 DEGRADED 0 0 0<br />
/scratch/1.img ONLINE 0 0 0<br />
/scratch/2.img UNAVAIL 0 0 0 corrupted data<br />
/scratch/3.img ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
</nowiki>}}<br />
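Status output like this can also be checked from a script. The sketch below is only a text filter over {{ic|zpool status}} output fed to it on stdin; the state names are the standard ones, but the column layout it assumes is simply the one shown above.<br />

```shell
#!/bin/bash
# failed_vdevs: read `zpool status` output on stdin and print every row of
# the config table whose STATE column is not ONLINE. The third field must
# be numeric (the READ counter) so the "state:" summary line is skipped.
failed_vdevs() {
    awk '$2 ~ /^(DEGRADED|UNAVAIL|FAULTED|OFFLINE|REMOVED)$/ && $3 ~ /^[0-9]+$/ { print $1 }'
}
```

Run against the status above, it would report the pool, the raidz1-0 vdev, and /scratch/2.img.<br />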
<br />
Since we zeroed out one of our VDEVs, let's simulate adding a new 2G HDD by creating a new image file and adding it to the zpool:<br />
$ truncate -s 2G /scratch/new.img<br />
# zpool replace zpool /scratch/2.img /scratch/new.img<br />
<br />
Upon replacing the VDEV with a new one, zpool resilvers (rebuilds) the data from the data and parity information in the remaining two good VDEVs. Check the status of this process:<br />
{{hc|# zpool status zpool |<nowiki><br />
pool: zpool<br />
state: ONLINE<br />
scan: resilvered 117M in 0h0m with 0 errors on Sun Oct 20 09:21:22 2013<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
zpool ONLINE 0 0 0<br />
raidz1-0 ONLINE 0 0 0<br />
/scratch/1.img ONLINE 0 0 0<br />
/scratch/new.img ONLINE 0 0 0<br />
/scratch/3.img ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
</nowiki>}}<br />
<br />
== Snapshots and Recovering Deleted Files ==<br />
Since ZFS is a copy-on-write filesystem, modifying a file does not overwrite its existing blocks; the changed blocks are written to a new location. Snapshots take advantage of this fact and allow users access to older versions of files, provided a snapshot has been taken.<br />
<br />
{{Note|When using snapshots, many Linux programs that report on filesystem space such as '''df''' will report inaccurate results due to the unique way snapshots are used on ZFS. The output of {{ic|/usr/bin/zfs list}} will deliver an accurate report of the amount of available and free space on the zpool.}}<br />
<br />
<br />
To keep this simple, we will create a dataset within the zpool and snapshot it. Snapshots can be taken either of the entire zpool or of a dataset within the pool. They differ only in their naming conventions:<br />
<br />
{| class="wikitable" align="center"<br />
|-<br />
! Snapshot Target !! Snapshot Name<br />
|-<br />
| Entire zpool || zpool@snapshot-name<br />
|- <br />
| Dataset || zpool/dataset@snapshot-name<br />
|- <br />
|}<br />
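Following the naming convention above, a timestamped name can be generated for either target. A minimal sketch (the function name and date format are arbitrary choices):<br />

```shell
#!/bin/bash
# snapname: print a snapshot name for the given pool or dataset using a
# timestamp, e.g. zpool/docs@20131020-160900.
snapname() {
    echo "$1@$(date +%Y%m%d-%H%M%S)"
}
```

which could then be used as {{ic|zfs snapshot "$(snapname zpool/docs)"}}.<br />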
<br />
Make a new dataset and take ownership of it.<br />
# zfs create zpool/docs<br />
# chown facade:users /zpool/docs<br />
<br />
{{Note|The lack of a preceding / in the create command is intentional, not a typo!}}<br />
<br />
=== Listing Snapshots ===<br />
<br />
To list any snapshots on your system, run the following command:<br />
{{bc|<br />
$ zfs list -t snapshot<br />
NAME USED AVAIL REFER MOUNTPOINT<br />
san@initial_working 896K - 2.85M -<br />
san/vault@initial_working 80K - 152K -<br />
san/vault/falcon@initial_working 64K - 100K -<br />
san/vault/falcon/snapshots@initial_working 80K - 96K -<br />
san/vault/falcon/version@initial_working 64K - 96K -<br />
san/vault/gyrfalcon@initial_working 64K - 132K -<br />
san/vault/gyrfalcon/snapshots@initial_working 64K - 120K -<br />
san/vault/gyrfalcon/version@initial_working 64K - 120K -<br />
san/vault/osprey@initial_working 96K - 144K -<br />
san/vault/osprey/snapshots@initial_working 0 - 24.2M -<br />
san/vault/osprey/version@initial_working 0 - 120K -<br />
san/vault/redtail@initial_working 132K - 17.2M -<br />
san/vault/redtail/c@initial_working 67.3M - 72.9M -<br />
san/vault/redtail/c/AMD@initial_working 204K - 4.24M -<br />
san/vault/redtail/c/Users@initial_working 218M - 694M -<br />
san/vault/redtail/d@initial_working 76K - 132K -<br />
san/vault/redtail/d/UserFiles@initial_working 208M - 1.53T -<br />
san/vault/redtail/d/archive@initial_working 64K - 283M -<br />
san/vault/redtail/e@initial_working 76K - 132K -<br />
san/vault/redtail/e/PublicArchive@initial_working 31.9M - 1.34T -<br />
san/vault/redtail/e/archive@initial_working 64K - 283M -<br />
san/vault/redtail/snapshots@initial_working 64K - 120K -<br />
san/vault/redtail/version@initial_working 375M - 44.3G -<br />
}}<br />
<br />
=== Time 0 ===<br />
Add some files to the new dataset (/zpool/docs):<br />
$ wget -O /zpool/docs/Moby_Dick.txt http://www.gutenberg.org/ebooks/2701.txt.utf-8<br />
$ wget -O /zpool/docs/War_and_Peace.txt http://www.gutenberg.org/ebooks/2600.txt.utf-8<br />
$ wget -O /zpool/docs/Beowulf.txt http://www.gutenberg.org/ebooks/16328.txt.utf-8<br />
<br />
{{hc|# zfs list|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool 5.06M 3.91G 40.0K /zpool<br />
zpool/docs 4.92M 3.91G 4.92M /zpool/docs<br />
</nowiki>}}<br />
<br />
This shows that we have 4.92M of data used by our books in /zpool/docs.<br />
<br />
=== Time +1 ===<br />
Now take a snapshot of the dataset:<br />
# zfs snapshot zpool/docs@001<br />
<br />
Again run the list command:<br />
{{hc|# zfs list|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool 5.07M 3.91G 40.0K /zpool<br />
zpool/docs 4.92M 3.91G 4.92M /zpool/docs<br />
</nowiki>}}<br />
<br />
Note that the size in the USED column did not change, showing that the snapshot takes up no space in the zpool since nothing has changed in these three files.<br />
<br />
We can list the snapshots like so and again confirm that the snapshot takes up no space, but instead '''refers to''' the original files, which take up 4.92M (their original size):<br />
{{hc|# zfs list -t snapshot|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool/docs@001 0 - 4.92M -<br />
</nowiki>}}<br />
<br />
=== Time +2 ===<br />
Now let's add some additional content and create a new snapshot:<br />
$ wget -O /zpool/docs/Les_Mis.txt http://www.gutenberg.org/ebooks/135.txt.utf-8<br />
# zfs snapshot zpool/docs@002<br />
<br />
Generate the new list to see how the space has changed:<br />
{{hc|# zfs list -t snapshot|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool/docs@001 25.3K - 4.92M -<br />
zpool/docs@002 0 - 8.17M -<br />
</nowiki>}}<br />
<br />
Here we can see that the 001 snapshot takes up 25.3K of metadata and still points to the original 4.92M of data, while the new snapshot takes up no space and refers to a total of 8.17M.<br />
<br />
=== Time +3 ===<br />
Now let's simulate an accidental overwrite of a file and subsequent data loss:<br />
$ echo "this book sucks" > /zpool/docs/War_and_Peace.txt<br />
<br />
Again, take another snapshot:<br />
# zfs snapshot zpool/docs@003<br />
<br />
Now list the snapshots and notice that the amount of referred data decreased by about 3.1M:<br />
{{hc|# zfs list -t snapshot|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool/docs@001 25.3K - 4.92M -<br />
zpool/docs@002 25.5K - 8.17M -<br />
zpool/docs@003 0 - 5.04M -<br />
</nowiki>}}<br />
<br />
We can easily recover from this situation by looking inside one of our older snapshots for a good copy of the file. ZFS exposes snapshots in a hidden directory under each dataset, in this case {{ic|/zpool/docs/.zfs/snapshot}}:<br />
{{hc|$ ls -l /zpool/docs/.zfs/snapshot|<nowiki><br />
total 0<br />
dr-xr-xr-x 1 root root 0 Oct 20 16:09 001<br />
dr-xr-xr-x 1 root root 0 Oct 20 16:09 002<br />
dr-xr-xr-x 1 root root 0 Oct 20 16:09 003<br />
</nowiki>}}<br />
<br />
We can copy a good version of the book back out from any of our snapshots to any location on or off the zpool:<br />
 $ cp /zpool/docs/.zfs/snapshot/002/War_and_Peace.txt /zpool/docs<br />
{{Note|Using <TAB> for autocompletion on the hidden {{ic|.zfs}} directory will not work by default, but can be enabled by setting the ''snapdir'' property on the pool or dataset to ''visible'':}}<br />
<br />
# zfs set snapdir=visible zpool/docs<br />
<br />
Now enter a snapshot dir or two:<br />
$ cd /zpool/docs/.zfs/snapshot/001<br />
$ cd /zpool/docs/.zfs/snapshot/002<br />
<br />
Repeat the df command:<br />
$ df -h | grep zpool<br />
zpool 4.0G 0 4.0G 0% /zpool<br />
zpool/docs 4.0G 5.0M 4.0G 1% /zpool/docs<br />
zpool/docs@001 4.0G 4.9M 4.0G 1% /zpool/docs/.zfs/snapshot/001<br />
zpool/docs@002 4.0G 8.2M 4.0G 1% /zpool/docs/.zfs/snapshot/002<br />
<br />
{{Note|Each snapshot directory under {{ic|.zfs}} that the user enters is automatically mounted and shows up in the {{ic|df}} output; these mounts are cleared if the zpool is exported and reimported, or if the server is rebooted.}}<br />
<br />
For example:<br />
# zpool export zpool<br />
# zpool import -d /scratch/ zpool<br />
$ df -h | grep zpool<br />
zpool 4.0G 0 4.0G 0% /zpool<br />
zpool/docs 4.0G 5.0M 4.0G 1% /zpool/docs<br />
<br />
=== Time +4 ===<br />
Now that everything is back to normal, we can create another snapshot of this state:<br />
# zfs snapshot zpool/docs@004<br />
<br />
And the list now becomes:<br />
{{hc|# zfs list -t snapshot|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool/docs@001 25.3K - 4.92M -<br />
zpool/docs@002 25.5K - 8.17M -<br />
zpool/docs@003 155K - 5.04M -<br />
zpool/docs@004 0 - 8.17M -<br />
</nowiki>}}<br />
<br />
=== Deleting Snapshots ===<br />
The limit to the number of snapshots users can save is 2^64. Users can delete a snapshot like so:<br />
# zfs destroy zpool/docs@001<br />
<br />
{{hc|# zfs list -t snapshot|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool/docs@002 3.28M - 8.17M -<br />
zpool/docs@003 155K - 5.04M -<br />
zpool/docs@004 0 - 8.17M -<br />
</nowiki>}}<br />
<br />
== Troubleshooting ==</div>Wolfdogghttps://wiki.archlinux.org/index.php?title=ZFS/Virtual_disks&diff=398154ZFS/Virtual disks2015-09-03T23:12:14Z<p>Wolfdogg: /* Deleting Snapshots */</p>
<hr />
<div>[[Category:File systems]]<br />
{{Related articles start}}<br />
{{Related|ZFS}}<br />
{{Related|Installing Arch Linux on ZFS}}<br />
{{Related|ZFS on FUSE}}<br />
{{Related articles end}}<br />
<br />
This article covers some basic tasks and usage of ZFS. It differs from the main article [[ZFS]] somewhat in that the examples herein are demonstrated on a zpool built from virtual disks. So long as users do not place any critical data on the resulting zpool, they are free to experiment without fear of actual data loss.<br />
<br />
The examples in this article are shown with a set of virtual disks, known in ZFS terms as VDEVs. Users may create their VDEVs either on an existing physical disk or in tmpfs (RAMdisk), depending on the amount of free memory on the system.<br />
<br />
{{Note|Using a file as a VDEV is a great way to experiment with ZFS, but it is not a viable strategy for storing "real" data.}}<br />
<br />
== Install the ZFS Family of Packages ==<br />
Due to differences in licensing, ZFS binaries and kernel modules are easily distributed in source form, but not so easily packaged as pre-compiled sets. The requisite packages are available in the AUR and in an unofficial repository. Details are provided in the [[ZFS#Installation]] article.<br />
<br />
== Creating and Destroying Zpools ==<br />
Management of ZFS is simple, with only two utilities needed:<br />
* {{ic|/usr/bin/zpool}}<br />
* {{ic|/usr/bin/zfs}}<br />
<br />
=== Mirror ===<br />
For zpools with just two drives, it is recommended to use ZFS in ''mirror'' mode, which functions like a RAID1, mirroring the data. While this configuration is fine, the RAIDZ levels below are recommended for larger sets of drives.<br />
<br />
=== RAIDZ1 ===<br />
The minimum number of drives for a RAIDZ1 is three. It is best to follow the "power of two plus parity" recommendation, both for storage space efficiency and for hitting the "sweet spot" in performance. For RAIDZ1, use three (2+1), five (4+1), or nine (8+1) disks. This example will use the simplest set, (2+1).<br />
<br />
Create three 2G files to serve as virtual hard drives:<br />
$ for i in {1..3}; do truncate -s 2G /scratch/$i.img; done<br />
<br />
Assemble the RAIDZ1:<br />
# zpool create zpool raidz1 /scratch/1.img /scratch/2.img /scratch/3.img<br />
<br />
Notice that a 3.91G zpool has been created and mounted for us:<br />
{{hc|# zfs list|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool 139K 3.91G 38.6K /zpool<br />
</nowiki>}}<br />
<br />
The status of the device can be queried:<br />
{{hc|<br />
# zpool status zpool|<nowiki><br />
pool: zpool<br />
state: ONLINE<br />
scan: none requested<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
zpool ONLINE 0 0 0<br />
raidz1-0 ONLINE 0 0 0<br />
/scratch/1.img ONLINE 0 0 0<br />
/scratch/2.img ONLINE 0 0 0<br />
/scratch/3.img ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
</nowiki>}}<br />
<br />
To destroy a zpool:<br />
# zpool destroy zpool<br />
<br />
===RAIDZ2 and RAIDZ3===<br />
Higher-level RAIDZs can be assembled in like fashion: adjust the ''for'' statement to create the additional image files, specify "raidz2" or "raidz3" in the creation step, and append the additional image files to the creation step.<br />
<br />
Summarizing Toponce's guidance:<br />
* RAIDZ2 should use four (2+2), six (4+2), ten (8+2), or eighteen (16+2) disks.<br />
* RAIDZ3 should use five (2+3), seven (4+3), eleven (8+3), or nineteen (16+3) disks.<br />
<br />
<br />
=== Linear Span ===<br />
<br />
This setup is a JBOD (linear span), normally good for three or fewer drives where space is still a concern and you are not yet ready to move to the full features of ZFS because of it. RAIDZ will be your better bet once you have enough drives, since this setup does NOT take advantage of the redundancy features of ZFS, but it is a serviceable beginning array that can suffice for years while you build up your hard drive collection. <br />
<br />
Assemble the Linear Span:<br />
# zpool create san /dev/sdd /dev/sde /dev/sdf<br />
<br />
{{hc|# zpool status san|<nowiki><br />
pool: san<br />
state: ONLINE<br />
scan: scrub repaired 0 in 4h22m with 0 errors on Fri Aug 28 23:52:55 2015<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
san ONLINE 0 0 0<br />
sde ONLINE 0 0 0<br />
sdd ONLINE 0 0 0<br />
sdf ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
</nowiki>}}<br />
<br />
If your system is not configured to import the ZFS pool at boot, if you want to manually remove and re-add the pool for any reason, or if you have lost your pool completely, a convenient recovery method is the export/import pair of commands. <br />
<br />
# zpool import san<br />
<br />
If you have any problems accessing your pool at any time, try exporting and reimporting it. <br />
<br />
 # zpool export san<br />
 # zpool import san<br />
<br />
<br />
== Creating and Destroying Datasets ==<br />
<br />
An example creating child datasets and using compression:<br />
* create the datasets<br />
# zfs create -p -o compression=on san/vault/falcon/snapshots<br />
# zfs create -o compression=on san/vault/falcon/version<br />
# zfs create -p -o compression=on san/vault/redtail/c/Users<br />
* now list the datasets (this was a linear span)<br />
{{hc|$ zfs list|<nowiki><br />
<br />
NAME USED AVAIL REFER MOUNTPOINT<br />
san 3.31T 2.04T 2.85M /san<br />
san/vault 3.31T 2.04T 136K /san/vault<br />
san/vault/falcon 171G 2.04T 100K /san/vault/falcon<br />
san/vault/falcon/snapshots 171G 2.04T 171G /san/vault/falcon/snapshots<br />
san/vault/falcon/version 160K 2.04T 96K /san/vault/falcon/version<br />
san/vault/gyrfalcon 564K 2.04T 132K /san/vault/gyrfalcon<br />
san/vault/gyrfalcon/snapshots 184K 2.04T 120K /san/vault/gyrfalcon/snapshots<br />
san/vault/gyrfalcon/version 184K 2.04T 120K /san/vault/gyrfalcon/version<br />
san/vault/osprey 170G 2.04T 170G /san/vault/osprey<br />
san/vault/osprey/snapshots 24.2M 2.04T 24.2M /san/vault/osprey/snapshots<br />
san/vault/osprey/version 120K 2.04T 120K /san/vault/osprey/version<br />
san/vault/redtail 2.98T 2.04T 17.1M /san/vault/redtail<br />
san/vault/redtail/c 779M 2.04T 6.37M /san/vault/redtail/c<br />
san/vault/redtail/c/AMD 4.44M 2.04T 4.24M /san/vault/redtail/c/AMD<br />
san/vault/redtail/c/Users 700M 2.04T 482M /san/vault/redtail/c/Users<br />
san/vault/redtail/d 1.59T 2.04T 124K /san/vault/redtail/d<br />
san/vault/redtail/d/UserFiles 1.59T 2.04T 1.59T /san/vault/redtail/d/UserFiles<br />
san/vault/redtail/d/archive 283M 2.04T 283M /san/vault/redtail/d/archive<br />
san/vault/redtail/e 1.34T 2.04T 124K /san/vault/redtail/e<br />
san/vault/redtail/e/PublicArchive 1.34T 2.04T 1.34T /san/vault/redtail/e/PublicArchive<br />
san/vault/redtail/e/archive 283M 2.04T 283M /san/vault/redtail/e/archive<br />
san/vault/redtail/snapshots 184K 2.04T 120K /san/vault/redtail/snapshots<br />
san/vault/redtail/version 44.3G 2.04T 43.9G /san/vault/redtail/version<br />
</nowiki>}}<br />
<br />
Note that there is a huge advantage (fast file deletion) to making a three-level dataset hierarchy. If you have large amounts of data, separating it into datasets makes cleanup easier: destroying a dataset is much faster than waiting for recursive file removal to complete.<br />
<br />
== Displaying and Setting Properties ==<br />
If not specified in the creation step, properties of a zpool can be set at any time after its creation using {{ic|/usr/bin/zfs}}.<br />
<br />
=== Show Properties ===<br />
To see the current properties of a given zpool:<br />
{{hc|# zfs get all zpool|<nowiki><br />
NAME PROPERTY VALUE SOURCE<br />
zpool type filesystem -<br />
zpool creation Sun Oct 20 8:46 2013 -<br />
zpool used 139K -<br />
zpool available 3.91G -<br />
zpool referenced 38.6K -<br />
zpool compressratio 1.00x -<br />
zpool mounted yes -<br />
zpool quota none default<br />
zpool reservation none default<br />
zpool recordsize 128K default<br />
zpool mountpoint /zpool default<br />
zpool sharenfs off default<br />
zpool checksum on default<br />
zpool compression off default<br />
zpool atime on default<br />
zpool devices on default<br />
zpool exec on default<br />
zpool setuid on default<br />
zpool readonly off default<br />
zpool zoned off default<br />
zpool snapdir hidden default<br />
zpool aclinherit restricted default<br />
zpool canmount on default<br />
zpool xattr on default<br />
zpool copies 1 default<br />
zpool version 5 -<br />
zpool utf8only off -<br />
zpool normalization none -<br />
zpool casesensitivity sensitive -<br />
zpool vscan off default<br />
zpool nbmand off default<br />
zpool sharesmb off default<br />
zpool refquota none default<br />
zpool refreservation none default<br />
zpool primarycache all default<br />
zpool secondarycache all default<br />
zpool usedbysnapshots 0 -<br />
zpool usedbydataset 38.6K -<br />
zpool usedbychildren 99.9K -<br />
zpool usedbyrefreservation 0 -<br />
zpool logbias latency default<br />
zpool dedup off default<br />
zpool mlslabel none default<br />
zpool sync standard default<br />
zpool refcompressratio 1.00x -<br />
zpool written 38.6K -<br />
zpool snapdev hidden default<br />
</nowiki>}}<br />
<br />
=== Modify properties ===<br />
Disable the recording of access time in the zpool:<br />
# zfs set atime=off zpool<br />
<br />
Verify that the property has been set on the zpool:<br />
{{hc|# zfs get atime|<nowiki><br />
NAME PROPERTY VALUE SOURCE<br />
zpool atime off local<br />
</nowiki>}}<br />
<br />
{{Tip|This option, like many others, can also be set when creating the zpool by appending the following to the creation step: -O atime=off}}<br />
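As a sketch of that approach, the following only prints a creation command that sets two filesystem properties up front with -O (capital O applies to the pool's root dataset); the property choices and VDEV paths are illustrative:

```shell
#!/usr/bin/env bash
# Dry-run sketch: build a 'zpool create' invocation with properties set at
# creation time rather than with 'zfs set' afterwards.
create_cmd() {
  local pool=$1; shift
  # -O sets filesystem properties on the root dataset; -o would set pool
  # properties instead.
  printf 'zpool create -O atime=off -O compression=lz4 %s %s\n' "$pool" "$*"
}

create_cmd zpool raidz1 /scratch/1.img /scratch/2.img /scratch/3.img
```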
<br />
== Add Content to the Zpool and Query Compression Performance==<br />
Fill the zpool with files. For this example, first enable compression. ZFS supports several compression algorithms, including lzjb, gzip, gzip-N, zle, and lz4. Setting the property simply to 'on' selects the default algorithm (lzjb), but lz4 is a nice alternative. See the zfs man page for more.<br />
<br />
# zfs set compression=lz4 zpool<br />
<br />
In this example, the linux source tarball is copied over and since lz4 compression has been enabled on the zpool, the corresponding compression ratio can be queried as well.<br />
<br />
$ wget https://www.kernel.org/pub/linux/kernel/v3.x/linux-3.11.tar.xz<br />
$ tar xJf linux-3.11.tar.xz -C /zpool <br />
<br />
To see the compression ratio achieved:<br />
{{hc|# zfs get compressratio|<nowiki><br />
NAME PROPERTY VALUE SOURCE<br />
zpool compressratio 2.32x -<br />
</nowiki>}}<br />
<br />
== Simulate a Disk Failure and Rebuild the Zpool ==<br />
To simulate catastrophic disk failure (i.e. one of the HDDs in the zpool stops functioning), zero out one of the VDEVs.<br />
$ dd if=/dev/zero of=/scratch/2.img bs=4M count=1 2>/dev/null<br />
<br />
Since we used a blocksize (bs) of 4M, the once 2G image file is now a mere 4M:<br />
{{hc|$ ls -lh /scratch |<nowiki><br />
total 317M<br />
-rw-r--r-- 1 facade users 2.0G Oct 20 09:13 1.img<br />
-rw-r--r-- 1 facade users 4.0M Oct 20 09:09 2.img<br />
-rw-r--r-- 1 facade users 2.0G Oct 20 09:13 3.img<br />
</nowiki>}}<br />
<br />
The zpool remains online despite the corruption. Note that if a physical disk had failed, dmesg and related logs would be full of errors; silent damage like this is not reported there. To detect it, users must execute a scrub operation.<br />
<br />
# zpool scrub zpool<br />
<br />
Depending on the size and speed of the underlying media as well as the amount of data in the zpool, the scrub may take hours to complete.<br />
The status of the scrub can be queried:<br />
{{hc|# zpool status zpool|<nowiki><br />
pool: zpool<br />
state: DEGRADED<br />
status: One or more devices could not be used because the label is missing or<br />
invalid. Sufficient replicas exist for the pool to continue<br />
functioning in a degraded state.<br />
action: Replace the device using 'zpool replace'.<br />
see: http://zfsonlinux.org/msg/ZFS-8000-4J<br />
scan: scrub repaired 0 in 0h0m with 0 errors on Sun Oct 20 09:13:39 2013<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
zpool DEGRADED 0 0 0<br />
raidz1-0 DEGRADED 0 0 0<br />
/scratch/1.img ONLINE 0 0 0<br />
/scratch/2.img UNAVAIL 0 0 0 corrupted data<br />
/scratch/3.img ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
</nowiki>}}<br />
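Since the scrub may take hours, a small helper can poll until it completes. This is only a sketch and has not been run against a live pool; the pool name, the 60-second interval, and the 'scrub in progress' status text are assumptions:

```shell
#!/usr/bin/env bash
# Sketch: block until a scrub on the given pool finishes, then print the
# final scan line from 'zpool status'.
wait_for_scrub() {
  local pool=$1
  # Loop while zpool status still reports an active scrub
  while zpool status "$pool" | grep -q 'scrub in progress'; do
    sleep 60
  done
  zpool status "$pool" | grep 'scan:'
}

# Usage (not executed here): wait_for_scrub zpool
```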
<br />
Since we zeroed out one of our VDEVs, let's simulate adding a new 2G HDD by creating a new image file and adding it to the zpool:<br />
$ truncate -s 2G /scratch/new.img<br />
# zpool replace zpool /scratch/2.img /scratch/new.img<br />
<br />
Upon replacing the VDEV with a new one, ZFS rebuilds the data from the data and parity information stored on the remaining two good VDEVs. Check the status of this process:<br />
{{hc|# zpool status zpool |<nowiki><br />
pool: zpool<br />
state: ONLINE<br />
scan: resilvered 117M in 0h0m with 0 errors on Sun Oct 20 09:21:22 2013<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
zpool ONLINE 0 0 0<br />
raidz1-0 ONLINE 0 0 0<br />
/scratch/1.img ONLINE 0 0 0<br />
/scratch/new.img ONLINE 0 0 0<br />
/scratch/3.img ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
</nowiki>}}<br />
<br />
== Snapshots and Recovering Deleted Files ==<br />
Since ZFS is a copy-on-write filesystem, files are never overwritten in place: saving changes to a file actually writes the modified blocks to a new location, leaving the old blocks intact until nothing references them. Snapshots take advantage of this fact and allow users access to older versions of files, provided a snapshot has been taken.<br />
<br />
{{Note|When using snapshots, many Linux programs that report on filesystem space such as '''df''' will report inaccurate results due to the unique way snapshots are used on ZFS. The output of {{ic|/usr/bin/zfs list}} will deliver an accurate report of the amount of available and free space on the zpool.}}<br />
<br />
<br />
To keep this simple, we will create a dataset within the zpool and snapshot it. Snapshots can be taken either of the entire zpool or of a dataset within the pool. They differ only in their naming conventions:<br />
<br />
{| class="wikitable" align="center"<br />
|-<br />
! Snapshot Target !! Snapshot Name<br />
|-<br />
| Entire zpool || zpool@snapshot-name<br />
|- <br />
| Dataset || zpool/dataset@snapshot-name<br />
|- <br />
|}<br />
<br />
Make a new dataset and take ownership of it:<br />
# zfs create zpool/docs<br />
# chown facade:users /zpool/docs<br />
<br />
{{Note|The lack of a leading / in the create command is intentional, not a typo!}}<br />
<br />
=== Listing Snapshots ===<br />
<br />
To list any snapshots on your system, run the following command:<br />
{{bc|<br />
$ zfs list -t snapshot<br />
NAME USED AVAIL REFER MOUNTPOINT<br />
san@initial_working 896K - 2.85M -<br />
san/vault@initial_working 80K - 152K -<br />
san/vault/falcon@initial_working 64K - 100K -<br />
san/vault/falcon/snapshots@initial_working 80K - 96K -<br />
san/vault/falcon/version@initial_working 64K - 96K -<br />
san/vault/gyrfalcon@initial_working 64K - 132K -<br />
san/vault/gyrfalcon/snapshots@initial_working 64K - 120K -<br />
san/vault/gyrfalcon/version@initial_working 64K - 120K -<br />
san/vault/osprey@initial_working 96K - 144K -<br />
san/vault/osprey/snapshots@initial_working 0 - 24.2M -<br />
san/vault/osprey/version@initial_working 0 - 120K -<br />
san/vault/redtail@initial_working 132K - 17.2M -<br />
san/vault/redtail/c@initial_working 67.3M - 72.9M -<br />
san/vault/redtail/c/AMD@initial_working 204K - 4.24M -<br />
san/vault/redtail/c/Users@initial_working 218M - 694M -<br />
san/vault/redtail/d@initial_working 76K - 132K -<br />
san/vault/redtail/d/UserFiles@initial_working 208M - 1.53T -<br />
san/vault/redtail/d/archive@initial_working 64K - 283M -<br />
san/vault/redtail/e@initial_working 76K - 132K -<br />
san/vault/redtail/e/PublicArchive@initial_working 31.9M - 1.34T -<br />
san/vault/redtail/e/archive@initial_working 64K - 283M -<br />
san/vault/redtail/snapshots@initial_working 64K - 120K -<br />
san/vault/redtail/version@initial_working 375M - 44.3G -<br />
}}<br />
<br />
=== Time 0 ===<br />
Add some files to the new dataset (/zpool/docs):<br />
$ wget -O /zpool/docs/Moby_Dick.txt http://www.gutenberg.org/ebooks/2701.txt.utf-8<br />
$ wget -O /zpool/docs/War_and_Peace.txt http://www.gutenberg.org/ebooks/2600.txt.utf-8<br />
$ wget -O /zpool/docs/Beowulf.txt http://www.gutenberg.org/ebooks/16328.txt.utf-8<br />
<br />
{{hc|# zfs list|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool 5.06M 3.91G 40.0K /zpool<br />
zpool/docs 4.92M 3.91G 4.92M /zpool/docs<br />
</nowiki>}}<br />
<br />
This shows that we have 4.92M of data used by our books in /zpool/docs.<br />
<br />
=== Time +1 ===<br />
Now take a snapshot of the dataset:<br />
# zfs snapshot zpool/docs@001<br />
<br />
Again run the list command:<br />
{{hc|# zfs list|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool 5.07M 3.91G 40.0K /zpool<br />
zpool/docs 4.92M 3.91G 4.92M /zpool/docs<br />
</nowiki>}}<br />
<br />
Note that the size in the USED column did not change, showing that the snapshot takes up no space in the zpool since nothing has changed in these three files.<br />
<br />
We can list out the snapshots like so and again confirm that the snapshot takes up no space, but instead '''refers to''' the original files, which take up 4.92M (their original size):<br />
{{hc|# zfs list -t snapshot|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool/docs@001 0 - 4.92M -<br />
</nowiki>}}<br />
<br />
=== Time +2 ===<br />
Now let's add some additional content and create a new snapshot:<br />
$ wget -O /zpool/docs/Les_Mis.txt http://www.gutenberg.org/ebooks/135.txt.utf-8<br />
# zfs snapshot zpool/docs@002<br />
<br />
Generate the new list to see how the space has changed:<br />
{{hc|# zfs list -t snapshot|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool/docs@001 25.3K - 4.92M -<br />
zpool/docs@002 0 - 8.17M -<br />
</nowiki>}}<br />
<br />
Here we can see that the 001 snapshot now takes up 25.3K of metadata and still points to the original 4.92M of data, while the new snapshot takes up no space and refers to a total of 8.17M.<br />
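The USED/REFER accounting above generalizes: a snapshot's USED column counts only the blocks unique to it, while REFER counts everything it points at. For a fuller per-dataset breakdown, zfs list accepts -o space; a minimal wrapper, shown as a sketch and not run against a real pool here:

```shell
#!/usr/bin/env bash
# Sketch: per-dataset space breakdown. 'zfs list -o space' expands to the
# columns NAME, AVAIL, USED, USEDSNAP, USEDDS, USEDREFRESERV, USEDCHILD,
# separating space held by snapshots from space held by the live dataset.
space_report() {
  zfs list -o space "$1"
}

# Usage (not executed here): space_report zpool/docs
```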
<br />
=== Time +3 ===<br />
Now let's simulate an accidental overwrite of a file and subsequent data loss:<br />
$ echo "this book sucks" > /zpool/docs/War_and_Peace.txt<br />
<br />
Again, take another snapshot:<br />
# zfs snapshot zpool/docs@003<br />
<br />
Now list out the snapshots and notice that the amount referred to decreased by about 3.1M:<br />
{{hc|# zfs list -t snapshot|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool/docs@001 25.3K - 4.92M -<br />
zpool/docs@002 25.5K - 8.17M -<br />
zpool/docs@003 0 - 5.04M -<br />
</nowiki>}}<br />
<br />
We can easily recover from this situation by looking inside one of our older snapshots for a good copy of the file. ZFS stores its snapshots in a hidden directory under the dataset: {{ic|/zpool/docs/.zfs/snapshot}}:<br />
{{hc|$ ls -l /zpool/docs/.zfs/snapshot|<nowiki><br />
total 0<br />
dr-xr-xr-x 1 root root 0 Oct 20 16:09 001<br />
dr-xr-xr-x 1 root root 0 Oct 20 16:09 002<br />
dr-xr-xr-x 1 root root 0 Oct 20 16:09 003<br />
</nowiki>}}<br />
<br />
We can copy a good version of the book back out from any of our snapshots to any location on or off the zpool:<br />
$ cp /zpool/docs/.zfs/snapshot/002/War_and_Peace.txt /zpool/docs<br />
{{Note|Using <TAB> for autocompletion will not work by default but can be changed by modifying the ''snapdir'' property on the pool or dataset.}}<br />
<br />
# zfs set snapdir=visible zpool/docs<br />
<br />
Now enter a snapshot dir or two:<br />
$ cd /zpool/docs/.zfs/snapshot/001<br />
$ cd /zpool/docs/.zfs/snapshot/002<br />
<br />
Now run the df command:<br />
$ df -h | grep zpool<br />
zpool 4.0G 0 4.0G 0% /zpool<br />
zpool/docs 4.0G 5.0M 4.0G 1% /zpool/docs<br />
zpool/docs@001 4.0G 4.9M 4.0G 1% /zpool/docs/.zfs/snapshot/001<br />
zpool/docs@002 4.0G 8.2M 4.0G 1% /zpool/docs/.zfs/snapshot/002<br />
<br />
{{Note|The snapshot directories appear in df only after the user enters them; these mounts go away again if the zpool is exported and reimported or if the server is rebooted.}}<br />
<br />
For example:<br />
# zpool export zpool<br />
# zpool import -d /scratch/ zpool<br />
$ df -h | grep zpool<br />
zpool 4.0G 0 4.0G 0% /zpool<br />
zpool/docs 4.0G 5.0M 4.0G 1% /zpool/docs<br />
<br />
=== Time +4 ===<br />
Now that everything is back to normal, we can create another snapshot of this state:<br />
# zfs snapshot zpool/docs@004<br />
<br />
And the list now becomes:<br />
{{hc|# zfs list -t snapshot|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool/docs@001 25.3K - 4.92M -<br />
zpool/docs@002 25.5K - 8.17M -<br />
zpool/docs@003 155K - 5.04M -<br />
zpool/docs@004 0 - 8.17M -<br />
</nowiki>}}<br />
<br />
=== Deleting Snapshots ===<br />
The limit on the number of snapshots users can save is 2^64. Users can delete a snapshot like so:<br />
# zfs destroy zpool/docs@001<br />
<br />
{{hc|# zfs list -t snapshot|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool/docs@002 3.28M - 8.17M -<br />
zpool/docs@003 155K - 5.04M -<br />
zpool/docs@004 0 - 8.17M -<br />
</nowiki>}}<br />
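With that many snapshots possible, cleanup is often scripted. A dry-run sketch that only prints the destroy commands for a list of snapshot names; the names are illustrative, and the output could be piped to sh to actually run them:

```shell
#!/usr/bin/env bash
# Sketch: print 'zfs destroy' commands for several snapshots of one dataset.
prune_snaps() {
  local ds=$1; shift
  local s
  for s in "$@"; do
    printf 'zfs destroy %s@%s\n' "$ds" "$s"
  done
}

prune_snaps zpool/docs 002 003
```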
<br />
== Troubleshooting ==</div>Wolfdogghttps://wiki.archlinux.org/index.php?title=ZFS/Virtual_disks&diff=398153ZFS/Virtual disks2015-09-03T23:11:13Z<p>Wolfdogg: /* Creating and Destroying Zpools */</p>
<hr />
<div>[[Category:File systems]]<br />
{{Related articles start}}<br />
{{Related|ZFS}}<br />
{{Related|Installing Arch Linux on ZFS}}<br />
{{Related|ZFS on FUSE}}<br />
{{Related articles end}}<br />
<br />
This article covers some basic tasks and usage of ZFS. It differs from the main article [[ZFS]] somewhat in that the examples herein are demonstrated on a zpool built from virtual disks. So long as users do not place any critical data on the resulting zpool, they are free to experiment without fear of actual data loss.<br />
<br />
The examples in this article are shown with a set of virtual disks known in ZFS terms as VDEVs. Users may create their VDEVs either on an existing physical disk or in tmpfs (RAMdisk), depending on the amount of free memory on the system.<br />
<br />
{{Note|Using a file as a VDEV is a great method to play with ZFS but isn't a viable strategy for storing "real" data.}}<br />
<br />
== Install the ZFS Family of Packages ==<br />
Due to differences in licensing, ZFS binaries and kernel modules are easily distributed from source, but not so easily packaged as pre-compiled sets. The requisite packages are available in the AUR and in an unofficial repo. Details are provided in the [[ZFS#Installation]] article.<br />
<br />
== Creating and Destroying Zpools ==<br />
Management of ZFS is quite simple, with only two utilities needed:<br />
* {{ic|/usr/bin/zpool}}<br />
* {{ic|/usr/bin/zfs}}<br />
<br />
=== Mirror ===<br />
For zpools with just two drives, it is recommended to use ZFS in ''mirror'' mode, which functions like RAID1, mirroring the data. While this configuration is fine, higher RAIDZ levels are recommended when more drives are available.<br />
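As a sketch, assembling such a mirror follows the same pattern as the RAIDZ1 example below; the image-file paths are illustrative and the function only prints the command rather than executing it:

```shell
#!/usr/bin/env bash
# Dry-run sketch: print the command that creates a two-VDEV mirror pool.
mirror_cmd() {
  printf 'zpool create %s mirror %s %s\n' "$1" "$2" "$3"
}

mirror_cmd zpool /scratch/a.img /scratch/b.img
```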
<br />
=== RAIDZ1 ===<br />
The minimum number of drives for a RAIDZ1 is three. It's best to follow the "power of two plus parity" recommendation. This is for storage space efficiency and hitting the "sweet spot" in performance. For RAIDZ-1, use three (2+1), five (4+1), or nine (8+1) disks. This example will use the simplest set of three (2+1).<br />
<br />
Create three 2G files to serve as virtual hard drives:<br />
$ for i in {1..3}; do truncate -s 2G /scratch/$i.img; done<br />
<br />
Assemble the RAIDZ1:<br />
# zpool create zpool raidz1 /scratch/1.img /scratch/2.img /scratch/3.img<br />
<br />
Notice that a 3.91G zpool has been created and mounted for us:<br />
{{hc|# zfs list|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool 139K 3.91G 38.6K /zpool<br />
</nowiki>}}<br />
<br />
The status of the device can be queried:<br />
{{hc|<br />
# zpool status zpool|<nowiki><br />
pool: zpool<br />
state: ONLINE<br />
scan: none requested<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
zpool ONLINE 0 0 0<br />
raidz1-0 ONLINE 0 0 0<br />
/scratch/1.img ONLINE 0 0 0<br />
/scratch/2.img ONLINE 0 0 0<br />
/scratch/3.img ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
</nowiki>}}<br />
<br />
To destroy a zpool:<br />
# zpool destroy zpool<br />
<br />
===RAIDZ2 and RAIDZ3===<br />
Higher-level RAIDZ arrays can be assembled in like fashion by adjusting the for statement to create the additional image files, by specifying "raidz2" or "raidz3" in the creation step, and by appending the additional image files to the creation step.<br />
<br />
Summarizing Toponce's guidance:<br />
* RAIDZ2 should use four (2+2), six (4+2), ten (8+2), or eighteen (16+2) disks.<br />
* RAIDZ3 should use five (2+3), seven (4+3), eleven (8+3), or nineteen (16+3) disks.<br />
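For example, a six-disk (4+2) RAIDZ2 can be assembled from file-backed VDEVs just like the RAIDZ1 above. A dry-run sketch that only prints the creation command; the /scratch paths are assumptions:

```shell
#!/usr/bin/env bash
# Sketch: build the VDEV path list for an n-disk pool and print the
# RAIDZ2 creation command (nothing is created here).
vdev_list() {
  local n=$1 out="" i
  for i in $(seq 1 "$n"); do
    out="$out /scratch/z2-$i.img"
  done
  printf '%s\n' "${out# }"   # strip the leading space
}

# Real steps would be:
#   for f in $(vdev_list 6); do truncate -s 2G "$f"; done
#   zpool create zpool raidz2 $(vdev_list 6)
echo "zpool create zpool raidz2 $(vdev_list 6)"
```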
<br />
<br />
=== Linear Span ===<br />
<br />
This setup is for a JBOD, normally good for three or fewer drives where capacity is still the main concern and you are not yet ready to give up space for the full features of ZFS. RAIDZ will be the better bet once you have enough drives to satisfy your space needs, since this setup does NOT take advantage of the full self-healing features of ZFS; still, it is a safe beginning array that can suffice for years while you build up your hard drive collection. <br />
<br />
Assemble the Linear Span:<br />
# zpool create san /dev/sdd /dev/sde /dev/sdf<br />
<br />
{{hc|# zpool status san|<nowiki><br />
pool: san<br />
state: ONLINE<br />
scan: scrub repaired 0 in 4h22m with 0 errors on Fri Aug 28 23:52:55 2015<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
san ONLINE 0 0 0<br />
sde ONLINE 0 0 0<br />
sdd ONLINE 0 0 0<br />
sdf ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
</nowiki>}}<br />
<br />
If your system is not configured to load the zfs pool upon boot, or if for whatever reason you want to manually remove and re-add the pool, or if you have lost your pool completely, a convenient way is to use export/import. <br />
<br />
# zpool import san<br />
<br />
If you have any problems accessing your pool at any time, try exporting and reimporting it: <br />
<br />
# zpool export san<br />
# zpool import san<br />
<br />
<br />
== Creating and Destroying Datasets ==<br />
<br />
An example creating child datasets and using compression:<br />
* create the datasets<br />
# zfs create -p -o compression=on san/vault/falcon/snapshots<br />
# zfs create -o compression=on san/vault/falcon/version<br />
# zfs create -p -o compression=on san/vault/redtail/c/Users<br />
* now list the datasets (this was a linear span)<br />
{{hc|$ zfs list|<nowiki><br />
<br />
NAME USED AVAIL REFER MOUNTPOINT<br />
san 3.31T 2.04T 2.85M /san<br />
san/vault 3.31T 2.04T 136K /san/vault<br />
san/vault/falcon 171G 2.04T 100K /san/vault/falcon<br />
san/vault/falcon/snapshots 171G 2.04T 171G /san/vault/falcon/snapshots<br />
san/vault/falcon/version 160K 2.04T 96K /san/vault/falcon/version<br />
san/vault/gyrfalcon 564K 2.04T 132K /san/vault/gyrfalcon<br />
san/vault/gyrfalcon/snapshots 184K 2.04T 120K /san/vault/gyrfalcon/snapshots<br />
san/vault/gyrfalcon/version 184K 2.04T 120K /san/vault/gyrfalcon/version<br />
san/vault/osprey 170G 2.04T 170G /san/vault/osprey<br />
san/vault/osprey/snapshots 24.2M 2.04T 24.2M /san/vault/osprey/snapshots<br />
san/vault/osprey/version 120K 2.04T 120K /san/vault/osprey/version<br />
san/vault/redtail 2.98T 2.04T 17.1M /san/vault/redtail<br />
san/vault/redtail/c 779M 2.04T 6.37M /san/vault/redtail/c<br />
san/vault/redtail/c/AMD 4.44M 2.04T 4.24M /san/vault/redtail/c/AMD<br />
san/vault/redtail/c/Users 700M 2.04T 482M /san/vault/redtail/c/Users<br />
san/vault/redtail/d 1.59T 2.04T 124K /san/vault/redtail/d<br />
san/vault/redtail/d/UserFiles 1.59T 2.04T 1.59T /san/vault/redtail/d/UserFiles<br />
san/vault/redtail/d/archive 283M 2.04T 283M /san/vault/redtail/d/archive<br />
san/vault/redtail/e 1.34T 2.04T 124K /san/vault/redtail/e<br />
san/vault/redtail/e/PublicArchive 1.34T 2.04T 1.34T /san/vault/redtail/e/PublicArchive<br />
san/vault/redtail/e/archive 283M 2.04T 283M /san/vault/redtail/e/archive<br />
san/vault/redtail/snapshots 184K 2.04T 120K /san/vault/redtail/snapshots<br />
san/vault/redtail/version 44.3G 2.04T 43.9G /san/vault/redtail/version<br />
</nowiki>}}<br />
<br />
Note, there is a huge advantage(file deletion) for making a 3 level dataset. If you have large amounts of data, by separating by datasets, its easier to destroy a dataset than to try and wait for recursive file removal to complete.<br />
<br />
== Displaying and Setting Properties ==<br />
Without specifying them in the creation step, users can set properties of their zpools at any time after its creation using {{ic|/usr/bin/zfs}}.<br />
<br />
=== Show Properties ===<br />
To see the current properties of a given zpool:<br />
{{hc|# zfs get all zpool|<nowiki><br />
NAME PROPERTY VALUE SOURCE<br />
zpool type filesystem -<br />
zpool creation Sun Oct 20 8:46 2013 -<br />
zpool used 139K -<br />
zpool available 3.91G -<br />
zpool referenced 38.6K -<br />
zpool compressratio 1.00x -<br />
zpool mounted yes -<br />
zpool quota none default<br />
zpool reservation none default<br />
zpool recordsize 128K default<br />
zpool mountpoint /zpool default<br />
zpool sharenfs off default<br />
zpool checksum on default<br />
zpool compression off default<br />
zpool atime on default<br />
zpool devices on default<br />
zpool exec on default<br />
zpool setuid on default<br />
zpool readonly off default<br />
zpool zoned off default<br />
zpool snapdir hidden default<br />
zpool aclinherit restricted default<br />
zpool canmount on default<br />
zpool xattr on default<br />
zpool copies 1 default<br />
zpool version 5 -<br />
zpool utf8only off -<br />
zpool normalization none -<br />
zpool casesensitivity sensitive -<br />
zpool vscan off default<br />
zpool nbmand off default<br />
zpool sharesmb off default<br />
zpool refquota none default<br />
zpool refreservation none default<br />
zpool primarycache all default<br />
zpool secondarycache all default<br />
zpool usedbysnapshots 0 -<br />
zpool usedbydataset 38.6K -<br />
zpool usedbychildren 99.9K -<br />
zpool usedbyrefreservation 0 -<br />
zpool logbias latency default<br />
zpool dedup off default<br />
zpool mlslabel none default<br />
zpool sync standard default<br />
zpool refcompressratio 1.00x -<br />
zpool written 38.6K -<br />
zpool snapdev hidden default<br />
</nowiki>}}<br />
<br />
=== Modify properties ===<br />
Disable the recording of access time in the zpool:<br />
# zfs set atime=off zpool<br />
<br />
Verify that the property has been set on the zpool:<br />
{{hc|# zfs get atime|<nowiki><br />
NAME PROPERTY VALUE SOURCE<br />
zpool atime off local<br />
</nowiki>}}<br />
<br />
{{Tip|This option like many others can be toggled off when creating the zpool as well by appending the following to the creation step: -O atime-off}}<br />
<br />
== Add Content to the Zpool and Query Compression Performance==<br />
Fill the zpool with files. For this example, first enable compression. ZFS uses many compression types, including, lzjb, gzip, gzip-N, zle, and lz4. Using a setting of simply 'on' will call the default algorithm (lzjb) but lz4 is a nice alternative. See the zfs man page for more.<br />
<br />
# zfs set compression=lz4 zpool<br />
<br />
In this example, the linux source tarball is copied over and since lz4 compression has been enabled on the zpool, the corresponding compression ratio can be queried as well.<br />
<br />
$ wget https://www.kernel.org/pub/linux/kernel/v3.x/linux-3.11.tar.xz<br />
$ tar xJf linux-3.11.tar.xz -C /zpool <br />
<br />
To see the compression ratio achieved:<br />
{{hc|# zfs get compressratio|<nowiki><br />
NAME PROPERTY VALUE SOURCE<br />
zpool compressratio 2.32x -<br />
</nowiki>}}<br />
<br />
== Simulate a Disk Failure and Rebuild the Zpool ==<br />
To simulate catastrophic disk failure (i.e. one of the HDDs in the zpool stops functioning), zero out one of the VDEVs.<br />
$ dd if=/dev/zero of=/scratch/2.img bs=4M count=1 2>/dev/null<br />
<br />
Since we used a blocksize (bs) of 4M, the once 2G image file is now a mere 4M:<br />
{{hc|$ ls -lh /scratch |<nowiki><br />
total 317M<br />
-rw-r--r-- 1 facade users 2.0G Oct 20 09:13 1.img<br />
-rw-r--r-- 1 facade users 4.0M Oct 20 09:09 2.img<br />
-rw-r--r-- 1 facade users 2.0G Oct 20 09:13 3.img<br />
</nowiki>}}<br />
<br />
The zpool remains online despite the corruption. Note that if a physical disc does fail, dmesg and related logs would be full of errors. To detect when damage occurs, users must execute a scrub operation.<br />
<br />
# zpool scrub zpool<br />
<br />
Depending on the size and speed of the underlying media as well as the amount of data in the zpool, the scrub may take hours to complete.<br />
The status of the scrub can be queried:<br />
{{hc|# zpool status zpool|<nowiki><br />
pool: zpool<br />
state: DEGRADED<br />
status: One or more devices could not be used because the label is missing or<br />
invalid. Sufficient replicas exist for the pool to continue<br />
functioning in a degraded state.<br />
action: Replace the device using 'zpool replace'.<br />
see: http://zfsonlinux.org/msg/ZFS-8000-4J<br />
scan: scrub repaired 0 in 0h0m with 0 errors on Sun Oct 20 09:13:39 2013<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
zpool DEGRADED 0 0 0<br />
raidz1-0 DEGRADED 0 0 0<br />
/scratch/1.img ONLINE 0 0 0<br />
/scratch/2.img UNAVAIL 0 0 0 corrupted data<br />
/scratch/3.img ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
</nowiki>}}<br />
<br />
Since we zeroed out one of our VDEVs, let's simulate adding a new 2G HDD by creating a new image file and adding it to the zpool:<br />
$ truncate -s 2G /scratch/new.img<br />
# zpool replace zpool /scratch/2.img /scratch/new.img<br />
<br />
Upon replacing the VDEV with a new one, zpool rebuilds the data from the data and parity info in the remaining two good VDEVs. Check the status of this process:<br />
{{hc|# zpool status zpool |<nowiki><br />
pool: zpool<br />
state: ONLINE<br />
scan: resilvered 117M in 0h0m with 0 errors on Sun Oct 20 09:21:22 2013<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
zpool ONLINE 0 0 0<br />
raidz1-0 ONLINE 0 0 0<br />
/scratch/1.img ONLINE 0 0 0<br />
/scratch/new.img ONLINE 0 0 0<br />
/scratch/3.img ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
</nowiki>}}<br />
<br />
== Snapshots and Recovering Deleted Files ==<br />
Since ZFS is a copy-on-write filesystem, every file exists the second it is written. Saving changes to the very same file actually creates another copy of that file (plus the changes made). Snapshots can take advantage of this fact and allow users access to older versions of files provided a snapshot has been taken.<br />
<br />
{{Note|When using snapshots, many Linux programs that report on filesystem space such as '''df''' will report inaccurate results due to the unique way snapshots are used on ZFS. The output of {{ic|/usr/bin/zfs list}} will deliver an accurate report of the amount of available and free space on the zpool.}}<br />
<br />
<br />
To keep this simple, we will create a dataset within the zpool and snapshot it. Snapshots can be taken either of the entire zpool or of a dataset within the pool. They differ only in their naming conventions:<br />
<br />
{| class="wikitable" align="center"<br />
|-<br />
! Snapshot Target !! Snapshot Name<br />
|-<br />
| Entire zpool || zpool@snapshot-name<br />
|- <br />
| Dataset || zpool/dataset@snapshot-name<br />
|- <br />
|}<br />
<br />
Make a new data set and take ownership of it.<br />
# zfs create zpool/docs<br />
# chown facade:users /zpool/docs<br />
<br />
{{Note|The lack of a proceeding / in the create command is intentional, not a typo!}}<br />
<br />
=== Listing Snapshots ===<br />
<br />
To list any snapshots on your system, run the following command<br />
{{bc|<br />
$ zfs list -t snapshot<br />
NAME USED AVAIL REFER MOUNTPOINT<br />
san@initial_working 896K - 2.85M -<br />
san/vault@initial_working 80K - 152K -<br />
san/vault/falcon@initial_working 64K - 100K -<br />
san/vault/falcon/snapshots@initial_working 80K - 96K -<br />
san/vault/falcon/version@initial_working 64K - 96K -<br />
san/vault/gyrfalcon@initial_working 64K - 132K -<br />
san/vault/gyrfalcon/snapshots@initial_working 64K - 120K -<br />
san/vault/gyrfalcon/version@initial_working 64K - 120K -<br />
san/vault/osprey@initial_working 96K - 144K -<br />
san/vault/osprey/snapshots@initial_working 0 - 24.2M -<br />
san/vault/osprey/version@initial_working 0 - 120K -<br />
san/vault/redtail@initial_working 132K - 17.2M -<br />
san/vault/redtail/c@initial_working 67.3M - 72.9M -<br />
san/vault/redtail/c/AMD@initial_working 204K - 4.24M -<br />
san/vault/redtail/c/Users@initial_working 218M - 694M -<br />
san/vault/redtail/d@initial_working 76K - 132K -<br />
san/vault/redtail/d/UserFiles@initial_working 208M - 1.53T -<br />
san/vault/redtail/d/archive@initial_working 64K - 283M -<br />
san/vault/redtail/e@initial_working 76K - 132K -<br />
san/vault/redtail/e/PublicArchive@initial_working 31.9M - 1.34T -<br />
san/vault/redtail/e/archive@initial_working 64K - 283M -<br />
san/vault/redtail/snapshots@initial_working 64K - 120K -<br />
san/vault/redtail/version@initial_working 375M - 44.3G -<br />
}}<br />
<br />
=== Time 0 ===<br />
Add some files to the new dataset (/zpool/docs):<br />
$ wget -O /zpool/docs/Moby_Dick.txt http://www.gutenberg.org/ebooks/2701.txt.utf-8<br />
$ wget -O /zpool/docs/War_and_Peace.txt http://www.gutenberg.org/ebooks/2600.txt.utf-8<br />
$ wget -O /zpool/docs/Beowulf.txt http://www.gutenberg.org/ebooks/16328.txt.utf-8<br />
<br />
{{hc|# zfs list|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool 5.06M 3.91G 40.0K /zpool<br />
zpool/docs 4.92M 3.91G 4.92M /zpool/docs<br />
</nowiki>}}<br />
<br />
This is showing that we have 4.92M of data used by our books in /zpool/docs.<br />
<br />
=== Time +1 ===<br />
Now take a snapshot of the dataset:<br />
# zfs snapshot zpool/docs@001<br />
<br />
Again run the list command:<br />
{{hc|# zfs list|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool 5.07M 3.91G 40.0K /zpool<br />
zpool/docs 4.92M 3.91G 4.92M /zpool/docs<br />
</nowiki>}}<br />
<br />
Note that the size in the USED col did not change showing that the snapshot take up no space in the zpool since nothing has changed in these three files.<br />
<br />
We can list out the snapshots like so and again confirm the snapshot is taking up no space, but instead '''refers to''' files from the originals that take up, 4.92M (their original size):<br />
{{hc|# zfs list -t snapshot|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool/docs@001 0 - 4.92M -<br />
</nowiki>}}<br />
<br />
=== Time +2 ===<br />
Now let's add some additional content and create a new snapshot:<br />
$ wget -O /zpool/docs/Les_Mis.txt http://www.gutenberg.org/ebooks/135.txt.utf-8<br />
# zfs snapshot zpool/docs@002<br />
<br />
Generate the new list to see how the space has changed:<br />
{{hc|# zfs list -t snapshot|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool/docs@001 25.3K - 4.92M -<br />
zpool/docs@002 0 - 8.17M -<br />
</nowiki>}}<br />
<br />
Here we can see that the 001 snapshot takes up 25.3K of metadata and still points to the original 4.92M of data, and the new snapshot takes-up no space and refers to a total of 8.17M.<br />
<br />
=== Time +3 ===<br />
Now let's simulate an accidental overwrite of a file and subsequent data loss:<br />
$ echo "this book sucks" > /zpool/docs/War_and_Peace.txt<br />
<br />
Again, take another snapshot:<br />
# zfs snapshot zpool/docs@003<br />
<br />
Now list the snapshots and notice that the amount of data referred to decreased by about 3.1M:<br />
{{hc|# zfs list -t snapshot|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool/docs@001 25.3K - 4.92M -<br />
zpool/docs@002 25.5K - 8.17M -<br />
zpool/docs@003 0 - 5.04M -<br />
</nowiki>}}<br />
<br />
We can easily recover from this situation by looking inside one or both of our older snapshots for a good copy of the file. ZFS stores its snapshots in a hidden directory under the dataset: {{ic|/zpool/docs/.zfs/snapshot}}:<br />
{{hc|$ ls -l /zpool/docs/.zfs/snapshot|<nowiki><br />
total 0<br />
dr-xr-xr-x 1 root root 0 Oct 20 16:09 001<br />
dr-xr-xr-x 1 root root 0 Oct 20 16:09 002<br />
dr-xr-xr-x 1 root root 0 Oct 20 16:09 003<br />
</nowiki>}}<br />
<br />
We can copy a good version of the book back out from any of our snapshots to any location on or off the zpool:<br />
$ cp /zpool/docs/.zfs/snapshot/002/War_and_Peace.txt /zpool/docs<br />
{{Note|Using <TAB> for autocompletion will not work by default but can be changed by modifying the ''snapdir'' property on the pool or dataset.}}<br />
<br />
# zfs set snapdir=visible zpool/docs<br />
<br />
Now enter a snapshot dir or two:<br />
$ cd /zpool/docs/.zfs/snapshot/001<br />
$ cd /zpool/docs/.zfs/snapshot/002<br />
<br />
Now run df and note the mounted snapshot directories:<br />
$ df -h | grep zpool<br />
zpool 4.0G 0 4.0G 0% /zpool<br />
zpool/docs 4.0G 5.0M 4.0G 1% /zpool/docs<br />
zpool/docs@001 4.0G 4.9M 4.0G 1% /zpool/docs/.zfs/snapshot/001<br />
zpool/docs@002 4.0G 8.2M 4.0G 1% /zpool/docs/.zfs/snapshot/002<br />
<br />
{{Note|Snapshot directories under .zfs appear mounted only after the user enters them; this is reversed when the zpool is exported and reimported or when the server is rebooted.}}<br />
<br />
For example:<br />
# zpool export zpool<br />
# zpool import -d /scratch/ zpool<br />
$ df -h | grep zpool<br />
zpool 4.0G 0 4.0G 0% /zpool<br />
zpool/docs 4.0G 5.0M 4.0G 1% /zpool/docs<br />
<br />
=== Time +4 ===<br />
Now that everything is back to normal, we can create another snapshot of this state:<br />
# zfs snapshot zpool/docs@004<br />
<br />
And the list now becomes:<br />
{{hc|# zfs list -t snapshot|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool/docs@001 25.3K - 4.92M -<br />
zpool/docs@002 25.5K - 8.17M -<br />
zpool/docs@003 155K - 5.04M -<br />
zpool/docs@004 0 - 8.17M -<br />
</nowiki>}}<br />
<br />
=== Deleting Snapshots ===<br />
The limit on the number of snapshots users can save is 2^64. Users can delete a snapshot like so:<br />
# zfs destroy zpool/docs@001<br />
<br />
{{hc|# zfs list -t snapshot|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool/docs@002 3.28M - 8.17M -<br />
zpool/docs@003 155K - 5.04M -<br />
zpool/docs@004 0 - 8.17M -<br />
</nowiki>}}</div>Wolfdogghttps://wiki.archlinux.org/index.php?title=ZFS/Virtual_disks&diff=398151ZFS/Virtual disks2015-09-03T22:22:48Z<p>Wolfdogg: /* Linear Span */</p>
<hr />
<div>[[Category:File systems]]<br />
{{Related articles start}}<br />
{{Related|ZFS}}<br />
{{Related|Installing Arch Linux on ZFS}}<br />
{{Related|ZFS on FUSE}}<br />
{{Related articles end}}<br />
<br />
This article covers some basic tasks and usage of ZFS. It differs from the main article [[ZFS]] somewhat in that the examples herein are demonstrated on a zpool built from virtual disks. So long as users do not place any critical data on the resulting zpool, they are free to experiment without fear of actual data loss.<br />
<br />
The examples in this article are shown with a set of virtual disks, known in ZFS terms as VDEVs. Users may create their VDEVs either on an existing physical disk or in tmpfs (RAMdisk), depending on the amount of free memory on the system.<br />
<br />
{{Note|Using a file as a VDEV is a great way to experiment with ZFS, but it is not a viable strategy for storing "real" data.}}<br />
<br />
== Install the ZFS Family of Packages ==<br />
Due to differences in licensing, ZFS binaries and kernel modules are easily distributed as source, but not so easily packaged as pre-compiled sets. The requisite packages are available in the AUR and in an unofficial repository. Details are provided in the [[ZFS#Installation]] article.<br />
<br />
== Creating and Destroying Zpools ==<br />
Management of ZFS is quite simple; only two utilities are needed:<br />
* {{ic|/usr/bin/zpool}}<br />
* {{ic|/usr/bin/zfs}}<br />
<br />
=== Mirror ===<br />
For zpools with just two drives, it is recommended to run ZFS in ''mirror'' mode, which functions like RAID1 by mirroring the data. While this configuration is fine, higher RAIDZ levels are recommended when more drives are available.<br />
<br />
=== RAIDZ1 ===<br />
The minimum number of drives for a RAIDZ1 is three. It is best to follow the "power of two plus parity" recommendation, both for storage space efficiency and for hitting the "sweet spot" in performance. For RAIDZ1, use three (2+1), five (4+1), or nine (8+1) disks. This example will use the simplest set of (2+1).<br />
<br />
Create three 2G files to serve as virtual hard drives:<br />
$ for i in {1..3}; do truncate -s 2G /scratch/$i.img; done<br />
<br />
Assemble the RAIDZ1:<br />
# zpool create zpool raidz1 /scratch/1.img /scratch/2.img /scratch/3.img<br />
<br />
Notice that a 3.91G zpool has been created and mounted for us:<br />
{{hc|# zfs list|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool 139K 3.91G 38.6K /zpool<br />
</nowiki>}}<br />
<br />
The status of the device can be queried:<br />
{{hc|# zpool status zpool|<nowiki><br />
pool: zpool<br />
state: ONLINE<br />
scan: none requested<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
zpool ONLINE 0 0 0<br />
raidz1-0 ONLINE 0 0 0<br />
/scratch/1.img ONLINE 0 0 0<br />
/scratch/2.img ONLINE 0 0 0<br />
/scratch/3.img ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
</nowiki>}}<br />
<br />
To destroy a zpool:<br />
# zpool destroy zpool<br />
<br />
===RAIDZ2 and RAIDZ3===<br />
Higher level ZRAIDs can be assembled in a like fashion by adjusting the for statement to create the image files, by specifying "raidz2" or "raidz3" in the creation step, and by appending the additional image files to the creation step.<br />
<br />
Summarizing Toponce's guidance:<br />
* RAIDZ2 should use four (2+2), six (4+2), ten (8+2), or eighteen (16+2) disks.<br />
* RAIDZ3 should use five (2+3), seven (4+3), eleven (8+3), or nineteen (16+3) disks.<br />
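The adjustments described above can be sketched with a small helper that creates the sparse image files and prints the matching creation command; the directory, pool name, and disk count below are illustrative only, not from the article:<br />
<br />
```shell
# Sketch: create N sparse 2G image files and print the matching
# "zpool create" line for a given RAIDZ level (GNU coreutils assumed).
dir=${TMPDIR:-/tmp}/zfs-demo
mkdir -p "$dir"
level=2        # raidz2: two parity disks
ndisks=4       # 2 data + 2 parity, per the guidance above
imgs=""
i=1
while [ "$i" -le "$ndisks" ]; do
    truncate -s 2G "$dir/$i.img"   # sparse file: allocates no real space
    imgs="$imgs $dir/$i.img"
    i=$((i + 1))
done
echo "zpool create zpool raidz$level$imgs"
```
The printed command is what would be run as root on a system with ZFS installed; adjust {{ic|level}} and {{ic|ndisks}} for raidz3.<br />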
<br />
<br />
=== Linear Span ===<br />
<br />
This setup is a JBOD (linear span), normally suitable for three or fewer drives when capacity is still the main concern and you are not ready to move to the full feature set of ZFS because of it. Once you have built up enough drives, RAIDZ is the better choice, since a linear span does NOT take advantage of ZFS redundancy; it is, however, a safe beginning array that can serve for years while the drive collection grows. <br />
<br />
Assemble the Linear Span:<br />
# zpool create san /dev/sdd /dev/sde /dev/sdf<br />
<br />
{{hc|# zpool status san|<nowiki><br />
pool: san<br />
state: ONLINE<br />
scan: scrub repaired 0 in 4h22m with 0 errors on Fri Aug 28 23:52:55 2015<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
san ONLINE 0 0 0<br />
sde ONLINE 0 0 0<br />
sdd ONLINE 0 0 0<br />
sdf ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
</nowiki>}}<br />
<br />
<br />
Create any child datasets:<br />
<br />
# zfs create -p -o compression=on san/vault/falcon/snapshots<br />
# zfs create -o compression=on san/vault/falcon/version<br />
# zfs create -p -o compression=on san/vault/redtail/c/Users<br />
<br />
Note that there is a huge advantage (fast file deletion) to a multi-level dataset layout. With large amounts of data, separating it into datasets makes cleanup easier: destroying a dataset is much faster than waiting for a recursive file removal to complete. <br />
<br />
<br />
If the system is not configured to import the zfs pool at boot, if you want to manually remove and re-add the pool for whatever reason, or if the pool has been lost completely, the import/export commands are a convenient recovery path. <br />
<br />
# zpool import san<br />
<br />
If you have any problems accessing your pool at any time, try export and reimport. <br />
<br />
# zpool export san<br />
# zpool import san<br />
<br />
== Displaying and Setting Properties ==<br />
Without specifying them in the creation step, users can set properties of their zpools at any time after its creation using {{ic|/usr/bin/zfs}}.<br />
<br />
=== Show Properties ===<br />
To see the current properties of a given zpool:<br />
{{hc|# zfs get all zpool|<nowiki><br />
NAME PROPERTY VALUE SOURCE<br />
zpool type filesystem -<br />
zpool creation Sun Oct 20 8:46 2013 -<br />
zpool used 139K -<br />
zpool available 3.91G -<br />
zpool referenced 38.6K -<br />
zpool compressratio 1.00x -<br />
zpool mounted yes -<br />
zpool quota none default<br />
zpool reservation none default<br />
zpool recordsize 128K default<br />
zpool mountpoint /zpool default<br />
zpool sharenfs off default<br />
zpool checksum on default<br />
zpool compression off default<br />
zpool atime on default<br />
zpool devices on default<br />
zpool exec on default<br />
zpool setuid on default<br />
zpool readonly off default<br />
zpool zoned off default<br />
zpool snapdir hidden default<br />
zpool aclinherit restricted default<br />
zpool canmount on default<br />
zpool xattr on default<br />
zpool copies 1 default<br />
zpool version 5 -<br />
zpool utf8only off -<br />
zpool normalization none -<br />
zpool casesensitivity sensitive -<br />
zpool vscan off default<br />
zpool nbmand off default<br />
zpool sharesmb off default<br />
zpool refquota none default<br />
zpool refreservation none default<br />
zpool primarycache all default<br />
zpool secondarycache all default<br />
zpool usedbysnapshots 0 -<br />
zpool usedbydataset 38.6K -<br />
zpool usedbychildren 99.9K -<br />
zpool usedbyrefreservation 0 -<br />
zpool logbias latency default<br />
zpool dedup off default<br />
zpool mlslabel none default<br />
zpool sync standard default<br />
zpool refcompressratio 1.00x -<br />
zpool written 38.6K -<br />
zpool snapdev hidden default<br />
</nowiki>}}<br />
<br />
=== Modify properties ===<br />
Disable the recording of access time in the zpool:<br />
# zfs set atime=off zpool<br />
<br />
Verify that the property has been set on the zpool:<br />
{{hc|# zfs get atime|<nowiki><br />
NAME PROPERTY VALUE SOURCE<br />
zpool atime off local<br />
</nowiki>}}<br />
<br />
{{Tip|This option, like many others, can also be set when creating the zpool by appending {{ic|1=-O atime=off}} to the creation step.}}<br />
<br />
== Add Content to the Zpool and Query Compression Performance==<br />
Fill the zpool with files. For this example, first enable compression. ZFS supports several compression algorithms, including lzjb, gzip, gzip-N, zle, and lz4. A setting of simply 'on' selects the default algorithm (lzjb), but lz4 is a nice alternative. See the zfs man page for more.<br />
<br />
# zfs set compression=lz4 zpool<br />
<br />
In this example, the linux source tarball is copied over and since lz4 compression has been enabled on the zpool, the corresponding compression ratio can be queried as well.<br />
<br />
$ wget https://www.kernel.org/pub/linux/kernel/v3.x/linux-3.11.tar.xz<br />
$ tar xJf linux-3.11.tar.xz -C /zpool <br />
<br />
To see the compression ratio achieved:<br />
{{hc|# zfs get compressratio|<nowiki><br />
NAME PROPERTY VALUE SOURCE<br />
zpool compressratio 2.32x -<br />
</nowiki>}}<br />
<br />
== Simulate a Disk Failure and Rebuild the Zpool ==<br />
To simulate catastrophic disk failure (i.e. one of the HDDs in the zpool stops functioning), zero out one of the VDEVs.<br />
$ dd if=/dev/zero of=/scratch/2.img bs=4M count=1 2>/dev/null<br />
<br />
Since we used a blocksize (bs) of 4M, the once 2G image file is now a mere 4M:<br />
{{hc|$ ls -lh /scratch |<nowiki><br />
total 317M<br />
-rw-r--r-- 1 facade users 2.0G Oct 20 09:13 1.img<br />
-rw-r--r-- 1 facade users 4.0M Oct 20 09:09 2.img<br />
-rw-r--r-- 1 facade users 2.0G Oct 20 09:13 3.img<br />
</nowiki>}}<br />
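The truncation can be reproduced on any file, ZFS or not, since dd truncates its output file on open unless {{ic|1=conv=notrunc}} is given; this sketch (illustrative path, GNU coreutils assumed) shrinks a sparse 2G file to 4M with a single 4M write:<br />
<br />
```shell
# dd opens its output with truncation by default, so writing one 4M
# block leaves a 4M file regardless of the previous size.
f=${TMPDIR:-/tmp}/zfs-demo-disk.img
truncate -s 2G "$f"                    # sparse 2G "disk"
dd if=/dev/zero of="$f" bs=4M count=1 2>/dev/null
stat -c %s "$f"                        # prints 4194304, i.e. 4M
```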
<br />
The zpool remains online despite the corruption. Note that if a physical disk does fail, dmesg and related logs would be full of errors. To detect when damage occurs, users must execute a scrub operation.<br />
<br />
# zpool scrub zpool<br />
<br />
Depending on the size and speed of the underlying media as well as the amount of data in the zpool, the scrub may take hours to complete.<br />
The status of the scrub can be queried:<br />
{{hc|# zpool status zpool|<nowiki><br />
pool: zpool<br />
state: DEGRADED<br />
status: One or more devices could not be used because the label is missing or<br />
invalid. Sufficient replicas exist for the pool to continue<br />
functioning in a degraded state.<br />
action: Replace the device using 'zpool replace'.<br />
see: http://zfsonlinux.org/msg/ZFS-8000-4J<br />
scan: scrub repaired 0 in 0h0m with 0 errors on Sun Oct 20 09:13:39 2013<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
zpool DEGRADED 0 0 0<br />
raidz1-0 DEGRADED 0 0 0<br />
/scratch/1.img ONLINE 0 0 0<br />
/scratch/2.img UNAVAIL 0 0 0 corrupted data<br />
/scratch/3.img ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
</nowiki>}}<br />
<br />
Since we zeroed out one of our VDEVs, let's simulate adding a new 2G HDD by creating a new image file and adding it to the zpool:<br />
$ truncate -s 2G /scratch/new.img<br />
# zpool replace zpool /scratch/2.img /scratch/new.img<br />
<br />
Upon replacing the VDEV with a new one, zpool rebuilds the data from the data and parity info in the remaining two good VDEVs. Check the status of this process:<br />
{{hc|# zpool status zpool |<nowiki><br />
pool: zpool<br />
state: ONLINE<br />
scan: resilvered 117M in 0h0m with 0 errors on Sun Oct 20 09:21:22 2013<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
zpool ONLINE 0 0 0<br />
raidz1-0 ONLINE 0 0 0<br />
/scratch/1.img ONLINE 0 0 0<br />
/scratch/new.img ONLINE 0 0 0<br />
/scratch/3.img ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
</nowiki>}}<br />
<br />
== Snapshots and Recovering Deleted Files ==<br />
Since ZFS is a copy-on-write filesystem, data blocks are never overwritten in place: saving changes to a file writes the modified blocks to new locations on disk. Snapshots take advantage of this fact and allow users access to older versions of files, provided a snapshot has been taken.<br />
<br />
{{Note|When using snapshots, many Linux programs that report on filesystem space such as '''df''' will report inaccurate results due to the unique way snapshots are used on ZFS. The output of {{ic|/usr/bin/zfs list}} will deliver an accurate report of the amount of available and free space on the zpool.}}<br />
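As a sketch of such an accurate accounting, the USED column of the sample {{ic|zfs list -t snapshot}} output shown later in this article can be totaled with awk (binary suffixes assumed; on a real system {{ic|zfs list -Hp}} prints exact byte counts and needs no conversion):<br />
<br />
```shell
# Total the USED column of canned `zfs list -t snapshot` output,
# converting the K/M/G suffixes zfs prints (powers of 1024).
total=$(awk 'NR > 1 {
    v = $2 + 0
    if ($2 ~ /K$/)      v *= 1024
    else if ($2 ~ /M$/) v *= 1024 * 1024
    else if ($2 ~ /G$/) v *= 1024 * 1024 * 1024
    sum += v
}
END { printf "%.0f bytes used by snapshots", sum }' <<'EOF'
NAME            USED   AVAIL  REFER  MOUNTPOINT
zpool/docs@001  25.3K  -      4.92M  -
zpool/docs@002  25.5K  -      8.17M  -
zpool/docs@003  0      -      5.04M  -
EOF
)
echo "$total"
```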
<br />
<br />
To keep this simple, we will create a dataset within the zpool and snapshot it. Snapshots can be taken either of the entire zpool or of a dataset within the pool. They differ only in their naming conventions:<br />
<br />
{| class="wikitable" align="center"<br />
|-<br />
! Snapshot Target !! Snapshot Name<br />
|-<br />
| Entire zpool || zpool@snapshot-name<br />
|- <br />
| Dataset || zpool/dataset@snapshot-name<br />
|- <br />
|}<br />
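The naming convention means a snapshot name can be split into its dataset and snapshot parts with plain shell parameter expansion, as this small sketch shows:<br />
<br />
```shell
# Split a "dataset@snapshot-name" string using POSIX parameter expansion.
snap="zpool/docs@001"
dataset=${snap%@*}   # strip from the last '@' on: zpool/docs
name=${snap#*@}      # strip through the first '@': 001
echo "$dataset $name"
```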
<br />
Make a new dataset and take ownership of it.<br />
# zfs create zpool/docs<br />
# chown facade:users /zpool/docs<br />
<br />
{{Note|The lack of a leading / in the create command is intentional, not a typo!}}<br />
<br />
=== Listing Snapshots ===<br />
<br />
To list any snapshots on your system, run the following command<br />
{{bc|<br />
$ zfs list -t snapshot<br />
NAME USED AVAIL REFER MOUNTPOINT<br />
san@initial_working 896K - 2.85M -<br />
san/vault@initial_working 80K - 152K -<br />
san/vault/falcon@initial_working 64K - 100K -<br />
san/vault/falcon/snapshots@initial_working 80K - 96K -<br />
san/vault/falcon/version@initial_working 64K - 96K -<br />
san/vault/gyrfalcon@initial_working 64K - 132K -<br />
san/vault/gyrfalcon/snapshots@initial_working 64K - 120K -<br />
san/vault/gyrfalcon/version@initial_working 64K - 120K -<br />
san/vault/osprey@initial_working 96K - 144K -<br />
san/vault/osprey/snapshots@initial_working 0 - 24.2M -<br />
san/vault/osprey/version@initial_working 0 - 120K -<br />
san/vault/redtail@initial_working 132K - 17.2M -<br />
san/vault/redtail/c@initial_working 67.3M - 72.9M -<br />
san/vault/redtail/c/AMD@initial_working 204K - 4.24M -<br />
san/vault/redtail/c/Users@initial_working 218M - 694M -<br />
san/vault/redtail/d@initial_working 76K - 132K -<br />
san/vault/redtail/d/UserFiles@initial_working 208M - 1.53T -<br />
san/vault/redtail/d/archive@initial_working 64K - 283M -<br />
san/vault/redtail/e@initial_working 76K - 132K -<br />
san/vault/redtail/e/PublicArchive@initial_working 31.9M - 1.34T -<br />
san/vault/redtail/e/archive@initial_working 64K - 283M -<br />
san/vault/redtail/snapshots@initial_working 64K - 120K -<br />
san/vault/redtail/version@initial_working 375M - 44.3G -<br />
}}<br />
<br />
=== Time 0 ===<br />
Add some files to the new dataset (/zpool/docs):<br />
$ wget -O /zpool/docs/Moby_Dick.txt http://www.gutenberg.org/ebooks/2701.txt.utf-8<br />
$ wget -O /zpool/docs/War_and_Peace.txt http://www.gutenberg.org/ebooks/2600.txt.utf-8<br />
$ wget -O /zpool/docs/Beowulf.txt http://www.gutenberg.org/ebooks/16328.txt.utf-8<br />
<br />
{{hc|# zfs list|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool 5.06M 3.91G 40.0K /zpool<br />
zpool/docs 4.92M 3.91G 4.92M /zpool/docs<br />
</nowiki>}}<br />
<br />
This shows that we have 4.92M of data used by our books in /zpool/docs.<br />
<br />
=== Time +1 ===<br />
Now take a snapshot of the dataset:<br />
# zfs snapshot zpool/docs@001<br />
<br />
Again run the list command:<br />
{{hc|# zfs list|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool 5.07M 3.91G 40.0K /zpool<br />
zpool/docs 4.92M 3.91G 4.92M /zpool/docs<br />
</nowiki>}}<br />
<br />
Note that the size in the USED column did not change, showing that the snapshot takes up no space in the zpool since nothing has changed in these three files.<br />
<br />
We can list the snapshots like so and again confirm that the snapshot takes up no space, but instead '''refers to''' the original files, which take up 4.92M (their original size):<br />
{{hc|# zfs list -t snapshot|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool/docs@001 0 - 4.92M -<br />
</nowiki>}}<br />
<br />
=== Time +2 ===<br />
Now let's add some additional content and create a new snapshot:<br />
$ wget -O /zpool/docs/Les_Mis.txt http://www.gutenberg.org/ebooks/135.txt.utf-8<br />
# zfs snapshot zpool/docs@002<br />
<br />
Generate the new list to see how the space has changed:<br />
{{hc|# zfs list -t snapshot|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool/docs@001 25.3K - 4.92M -<br />
zpool/docs@002 0 - 8.17M -<br />
</nowiki>}}<br />
<br />
Here we can see that the 001 snapshot takes up 25.3K of metadata and still points to the original 4.92M of data, while the new snapshot takes up no space and refers to a total of 8.17M.<br />
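A quick arithmetic check of the listing above (sizes taken from the sample output) shows the space the new book accounts for:<br />
<br />
```shell
# 8.17M referred now minus 4.92M for the three original books.
delta=$(awk 'BEGIN { printf "%.2fM", 8.17 - 4.92 }')
echo "$delta"    # the approximate size of Les_Mis.txt
```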
<br />
=== Time +3 ===<br />
Now let's simulate an accidental overwrite of a file and subsequent data loss:<br />
$ echo "this book sucks" > /zpool/docs/War_and_Peace.txt<br />
<br />
Again, take another snapshot:<br />
# zfs snapshot zpool/docs@003<br />
<br />
Now list the snapshots and notice that the amount of data referred to decreased by about 3.1M:<br />
{{hc|# zfs list -t snapshot|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool/docs@001 25.3K - 4.92M -<br />
zpool/docs@002 25.5K - 8.17M -<br />
zpool/docs@003 0 - 5.04M -<br />
</nowiki>}}<br />
<br />
We can easily recover from this situation by looking inside one or both of our older snapshots for a good copy of the file. ZFS stores its snapshots in a hidden directory under the dataset: {{ic|/zpool/docs/.zfs/snapshot}}:<br />
{{hc|$ ls -l /zpool/docs/.zfs/snapshot|<nowiki><br />
total 0<br />
dr-xr-xr-x 1 root root 0 Oct 20 16:09 001<br />
dr-xr-xr-x 1 root root 0 Oct 20 16:09 002<br />
dr-xr-xr-x 1 root root 0 Oct 20 16:09 003<br />
</nowiki>}}<br />
<br />
We can copy a good version of the book back out from any of our snapshots to any location on or off the zpool:<br />
$ cp /zpool/docs/.zfs/snapshot/002/War_and_Peace.txt /zpool/docs<br />
{{Note|Using <TAB> for autocompletion will not work by default but can be changed by modifying the ''snapdir'' property on the pool or dataset.}}<br />
<br />
# zfs set snapdir=visible zpool/docs<br />
<br />
Now enter a snapshot dir or two:<br />
$ cd /zpool/docs/.zfs/snapshot/001<br />
$ cd /zpool/docs/.zfs/snapshot/002<br />
<br />
Now run df and note the mounted snapshot directories:<br />
$ df -h | grep zpool<br />
zpool 4.0G 0 4.0G 0% /zpool<br />
zpool/docs 4.0G 5.0M 4.0G 1% /zpool/docs<br />
zpool/docs@001 4.0G 4.9M 4.0G 1% /zpool/docs/.zfs/snapshot/001<br />
zpool/docs@002 4.0G 8.2M 4.0G 1% /zpool/docs/.zfs/snapshot/002<br />
<br />
{{Note|Snapshot directories under .zfs appear mounted only after the user enters them; this is reversed when the zpool is exported and reimported or when the server is rebooted.}}<br />
<br />
For example:<br />
# zpool export zpool<br />
# zpool import -d /scratch/ zpool<br />
$ df -h | grep zpool<br />
zpool 4.0G 0 4.0G 0% /zpool<br />
zpool/docs 4.0G 5.0M 4.0G 1% /zpool/docs<br />
<br />
=== Time +4 ===<br />
Now that everything is back to normal, we can create another snapshot of this state:<br />
# zfs snapshot zpool/docs@004<br />
<br />
And the list now becomes:<br />
{{hc|# zfs list -t snapshot|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool/docs@001 25.3K - 4.92M -<br />
zpool/docs@002 25.5K - 8.17M -<br />
zpool/docs@003 155K - 5.04M -<br />
zpool/docs@004 0 - 8.17M -<br />
</nowiki>}}<br />
<br />
=== Deleting Snapshots ===<br />
The limit on the number of snapshots users can save is 2^64. Users can delete a snapshot like so:<br />
# zfs destroy zpool/docs@001<br />
<br />
{{hc|# zfs list -t snapshot|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool/docs@002 3.28M - 8.17M -<br />
zpool/docs@003 155K - 5.04M -<br />
zpool/docs@004 0 - 8.17M -<br />
</nowiki>}}</div>Wolfdogghttps://wiki.archlinux.org/index.php?title=ZFS/Virtual_disks&diff=398147ZFS/Virtual disks2015-09-03T21:13:44Z<p>Wolfdogg: /* Creating and Destroying Zpools */</p>
<hr />
<div>[[Category:File systems]]<br />
{{Related articles start}}<br />
{{Related|ZFS}}<br />
{{Related|Installing Arch Linux on ZFS}}<br />
{{Related|ZFS on FUSE}}<br />
{{Related articles end}}<br />
<br />
This article covers some basic tasks and usage of ZFS. It differs from the main article [[ZFS]] somewhat in that the examples herein are demonstrated on a zpool built from virtual disks. So long as users do not place any critical data on the resulting zpool, they are free to experiment without fear of actual data loss.<br />
<br />
The examples in this article are shown with a set of virtual disks, known in ZFS terms as VDEVs. Users may create their VDEVs either on an existing physical disk or in tmpfs (RAMdisk), depending on the amount of free memory on the system.<br />
<br />
{{Note|Using a file as a VDEV is a great way to experiment with ZFS, but it is not a viable strategy for storing "real" data.}}<br />
<br />
== Install the ZFS Family of Packages ==<br />
Due to differences in licensing, ZFS binaries and kernel modules are easily distributed as source, but not so easily packaged as pre-compiled sets. The requisite packages are available in the AUR and in an unofficial repository. Details are provided in the [[ZFS#Installation]] article.<br />
<br />
== Creating and Destroying Zpools ==<br />
Management of ZFS is quite simple; only two utilities are needed:<br />
* {{ic|/usr/bin/zpool}}<br />
* {{ic|/usr/bin/zfs}}<br />
<br />
=== Mirror ===<br />
For zpools with just two drives, it is recommended to run ZFS in ''mirror'' mode, which functions like RAID1 by mirroring the data. While this configuration is fine, higher RAIDZ levels are recommended when more drives are available.<br />
<br />
=== RAIDZ1 ===<br />
The minimum number of drives for a RAIDZ1 is three. It is best to follow the "power of two plus parity" recommendation, both for storage space efficiency and for hitting the "sweet spot" in performance. For RAIDZ1, use three (2+1), five (4+1), or nine (8+1) disks. This example will use the simplest set of (2+1).<br />
<br />
Create three 2G files to serve as virtual hard drives:<br />
$ for i in {1..3}; do truncate -s 2G /scratch/$i.img; done<br />
<br />
Assemble the RAIDZ1:<br />
# zpool create zpool raidz1 /scratch/1.img /scratch/2.img /scratch/3.img<br />
<br />
Notice that a 3.91G zpool has been created and mounted for us:<br />
{{hc|# zfs list|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool 139K 3.91G 38.6K /zpool<br />
</nowiki>}}<br />
<br />
The status of the device can be queried:<br />
{{hc|# zpool status zpool|<nowiki><br />
pool: zpool<br />
state: ONLINE<br />
scan: none requested<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
zpool ONLINE 0 0 0<br />
raidz1-0 ONLINE 0 0 0<br />
/scratch/1.img ONLINE 0 0 0<br />
/scratch/2.img ONLINE 0 0 0<br />
/scratch/3.img ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
</nowiki>}}<br />
<br />
To destroy a zpool:<br />
# zpool destroy zpool<br />
<br />
===RAIDZ2 and RAIDZ3===<br />
Higher level ZRAIDs can be assembled in a like fashion by adjusting the for statement to create the image files, by specifying "raidz2" or "raidz3" in the creation step, and by appending the additional image files to the creation step.<br />
<br />
Summarizing Toponce's guidance:<br />
* RAIDZ2 should use four (2+2), six (4+2), ten (8+2), or eighteen (16+2) disks.<br />
* RAIDZ3 should use five (2+3), seven (4+3), eleven (8+3), or nineteen (16+3) disks.<br />
<br />
<br />
=== Linear Span ===<br />
<br />
This setup is a JBOD (linear span), normally suitable for three or fewer drives when capacity is still the main concern and you are not ready to move to the full feature set of ZFS because of it. Once you have built up enough drives, RAIDZ is the better choice, since a linear span does NOT take advantage of ZFS redundancy; it is, however, a safe beginning array that can serve for years while the drive collection grows. <br />
<br />
Assemble the Linear Span:<br />
# zpool create san /dev/sdd /dev/sde /dev/sdf<br />
<br />
{{hc|# zpool status san|<nowiki><br />
pool: san<br />
state: ONLINE<br />
scan: scrub repaired 0 in 4h22m with 0 errors on Fri Aug 28 23:52:55 2015<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
san ONLINE 0 0 0<br />
sde ONLINE 0 0 0<br />
sdd ONLINE 0 0 0<br />
sdf ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
</nowiki>}}<br />
<br />
<br />
Create child datasets:<br />
<br />
# zfs create -p -o compression=on san/vault/falcon/snapshots<br />
# zfs create -o compression=on san/vault/falcon/version<br />
# zfs create -p -o compression=on san/vault/redtail/c/Users<br />
<br />
The system is now ready for use.<br />
<br />
<br />
If the pool is not configured to be imported automatically at boot, import it manually:<br />
<br />
# zpool import san<br />
<br />
If you have any problems accessing your pool at any time, try export and reimport. <br />
<br />
# zpool export san<br />
# zpool import san<br />
<br />
== Displaying and Setting Properties ==<br />
Without specifying them in the creation step, users can set properties of their zpools at any time after its creation using {{ic|/usr/bin/zfs}}.<br />
<br />
=== Show Properties ===<br />
To see the current properties of a given zpool:<br />
{{hc|# zfs get all zpool|<nowiki><br />
NAME PROPERTY VALUE SOURCE<br />
zpool type filesystem -<br />
zpool creation Sun Oct 20 8:46 2013 -<br />
zpool used 139K -<br />
zpool available 3.91G -<br />
zpool referenced 38.6K -<br />
zpool compressratio 1.00x -<br />
zpool mounted yes -<br />
zpool quota none default<br />
zpool reservation none default<br />
zpool recordsize 128K default<br />
zpool mountpoint /zpool default<br />
zpool sharenfs off default<br />
zpool checksum on default<br />
zpool compression off default<br />
zpool atime on default<br />
zpool devices on default<br />
zpool exec on default<br />
zpool setuid on default<br />
zpool readonly off default<br />
zpool zoned off default<br />
zpool snapdir hidden default<br />
zpool aclinherit restricted default<br />
zpool canmount on default<br />
zpool xattr on default<br />
zpool copies 1 default<br />
zpool version 5 -<br />
zpool utf8only off -<br />
zpool normalization none -<br />
zpool casesensitivity sensitive -<br />
zpool vscan off default<br />
zpool nbmand off default<br />
zpool sharesmb off default<br />
zpool refquota none default<br />
zpool refreservation none default<br />
zpool primarycache all default<br />
zpool secondarycache all default<br />
zpool usedbysnapshots 0 -<br />
zpool usedbydataset 38.6K -<br />
zpool usedbychildren 99.9K -<br />
zpool usedbyrefreservation 0 -<br />
zpool logbias latency default<br />
zpool dedup off default<br />
zpool mlslabel none default<br />
zpool sync standard default<br />
zpool refcompressratio 1.00x -<br />
zpool written 38.6K -<br />
zpool snapdev hidden default<br />
</nowiki>}}<br />
<br />
=== Modify properties ===<br />
Disable the recording of access time in the zpool:<br />
# zfs set atime=off zpool<br />
<br />
Verify that the property has been set on the zpool:<br />
{{hc|# zfs get atime|<nowiki><br />
NAME PROPERTY VALUE SOURCE<br />
zpool atime off local<br />
</nowiki>}}<br />
<br />
{{Tip|This option, like many others, can also be set when creating the zpool by appending {{ic|1=-O atime=off}} to the creation step.}}<br />
<br />
== Add Content to the Zpool and Query Compression Performance==<br />
Fill the zpool with files. For this example, first enable compression. ZFS supports several compression algorithms, including lzjb, gzip, gzip-N, zle, and lz4. A setting of simply 'on' selects the default algorithm (lzjb), but lz4 is a nice alternative. See the zfs man page for more.<br />
<br />
# zfs set compression=lz4 zpool<br />
<br />
In this example, the linux source tarball is copied over and since lz4 compression has been enabled on the zpool, the corresponding compression ratio can be queried as well.<br />
<br />
$ wget https://www.kernel.org/pub/linux/kernel/v3.x/linux-3.11.tar.xz<br />
$ tar xJf linux-3.11.tar.xz -C /zpool <br />
<br />
To see the compression ratio achieved:<br />
{{hc|# zfs get compressratio|<nowiki><br />
NAME PROPERTY VALUE SOURCE<br />
zpool compressratio 2.32x -<br />
</nowiki>}}<br />
<br />
== Simulate a Disk Failure and Rebuild the Zpool ==<br />
To simulate catastrophic disk failure (i.e. one of the HDDs in the zpool stops functioning), zero out one of the VDEVs.<br />
$ dd if=/dev/zero of=/scratch/2.img bs=4M count=1 2>/dev/null<br />
<br />
Since we used a blocksize (bs) of 4M, the once 2G image file is now a mere 4M:<br />
{{hc|$ ls -lh /scratch |<nowiki><br />
total 317M<br />
-rw-r--r-- 1 facade users 2.0G Oct 20 09:13 1.img<br />
-rw-r--r-- 1 facade users 4.0M Oct 20 09:09 2.img<br />
-rw-r--r-- 1 facade users 2.0G Oct 20 09:13 3.img<br />
</nowiki>}}<br />
<br />
The zpool remains online despite the corruption. Note that if a physical disk does fail, dmesg and related logs would be full of errors. To detect when damage occurs, users must execute a scrub operation.<br />
<br />
# zpool scrub zpool<br />
<br />
Depending on the size and speed of the underlying media as well as the amount of data in the zpool, the scrub may take hours to complete.<br />
The status of the scrub can be queried:<br />
{{hc|# zpool status zpool|<nowiki><br />
pool: zpool<br />
state: DEGRADED<br />
status: One or more devices could not be used because the label is missing or<br />
invalid. Sufficient replicas exist for the pool to continue<br />
functioning in a degraded state.<br />
action: Replace the device using 'zpool replace'.<br />
see: http://zfsonlinux.org/msg/ZFS-8000-4J<br />
scan: scrub repaired 0 in 0h0m with 0 errors on Sun Oct 20 09:13:39 2013<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
zpool DEGRADED 0 0 0<br />
raidz1-0 DEGRADED 0 0 0<br />
/scratch/1.img ONLINE 0 0 0<br />
/scratch/2.img UNAVAIL 0 0 0 corrupted data<br />
/scratch/3.img ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
</nowiki>}}<br />
<br />
Since we zeroed out one of our VDEVs, let's simulate adding a new 2G HDD by creating a new image file and adding it to the zpool:<br />
$ truncate -s 2G /scratch/new.img<br />
# zpool replace zpool /scratch/2.img /scratch/new.img<br />
<br />
Upon replacing the VDEV with a new one, zpool rebuilds the data from the data and parity info in the remaining two good VDEVs. Check the status of this process:<br />
{{hc|# zpool status zpool |<nowiki><br />
pool: zpool<br />
state: ONLINE<br />
scan: resilvered 117M in 0h0m with 0 errors on Sun Oct 20 09:21:22 2013<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
zpool ONLINE 0 0 0<br />
raidz1-0 ONLINE 0 0 0<br />
/scratch/1.img ONLINE 0 0 0<br />
/scratch/new.img ONLINE 0 0 0<br />
/scratch/3.img ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
</nowiki>}}<br />
<br />
== Snapshots and Recovering Deleted Files ==<br />
Since ZFS is a copy-on-write filesystem, every file exists the second it is written. Saving changes to the very same file actually creates another copy of that file (plus the changes made). Snapshots can take advantage of this fact and allow users access to older versions of files provided a snapshot has been taken.<br />
<br />
{{Note|When using snapshots, many Linux programs that report on filesystem space such as '''df''' will report inaccurate results due to the unique way snapshots are used on ZFS. The output of {{ic|/usr/bin/zfs list}} will deliver an accurate report of the amount of available and free space on the zpool.}}<br />
<br />
<br />
To keep this simple, we will create a dataset within the zpool and snapshot it. Snapshots can be taken either of the entire zpool or of a dataset within the pool. They differ only in their naming conventions:<br />
<br />
{| class="wikitable" align="center"<br />
|-<br />
! Snapshot Target !! Snapshot Name<br />
|-<br />
| Entire zpool || zpool@snapshot-name<br />
|- <br />
| Dataset || zpool/dataset@snapshot-name<br />
|- <br />
|}<br />
<br />
Make a new data set and take ownership of it.<br />
# zfs create zpool/docs<br />
# chown facade:users /zpool/docs<br />
<br />
{{Note|The lack of a proceeding / in the create command is intentional, not a typo!}}<br />
<br />
=== Listing Snapshots ===<br />
<br />
To list any snapshots on your system, run the following command<br />
{{bc|<br />
$ zfs list -t snapshot<br />
NAME USED AVAIL REFER MOUNTPOINT<br />
san@initial_working 896K - 2.85M -<br />
san/vault@initial_working 80K - 152K -<br />
san/vault/falcon@initial_working 64K - 100K -<br />
san/vault/falcon/snapshots@initial_working 80K - 96K -<br />
san/vault/falcon/version@initial_working 64K - 96K -<br />
san/vault/gyrfalcon@initial_working 64K - 132K -<br />
san/vault/gyrfalcon/snapshots@initial_working 64K - 120K -<br />
san/vault/gyrfalcon/version@initial_working 64K - 120K -<br />
san/vault/osprey@initial_working 96K - 144K -<br />
san/vault/osprey/snapshots@initial_working 0 - 24.2M -<br />
san/vault/osprey/version@initial_working 0 - 120K -<br />
san/vault/redtail@initial_working 132K - 17.2M -<br />
san/vault/redtail/c@initial_working 67.3M - 72.9M -<br />
san/vault/redtail/c/AMD@initial_working 204K - 4.24M -<br />
san/vault/redtail/c/Users@initial_working 218M - 694M -<br />
san/vault/redtail/d@initial_working 76K - 132K -<br />
san/vault/redtail/d/UserFiles@initial_working 208M - 1.53T -<br />
san/vault/redtail/d/archive@initial_working 64K - 283M -<br />
san/vault/redtail/e@initial_working 76K - 132K -<br />
san/vault/redtail/e/PublicArchive@initial_working 31.9M - 1.34T -<br />
san/vault/redtail/e/archive@initial_working 64K - 283M -<br />
san/vault/redtail/snapshots@initial_working 64K - 120K -<br />
san/vault/redtail/version@initial_working 375M - 44.3G -<br />
}}<br />
<br />
=== Time 0 ===<br />
Add some files to the new dataset (/zpool/docs):<br />
$ wget -O /zpool/docs/Moby_Dick.txt http://www.gutenberg.org/ebooks/2701.txt.utf-8<br />
$ wget -O /zpool/docs/War_and_Peace.txt http://www.gutenberg.org/ebooks/2600.txt.utf-8<br />
$ wget -O /zpool/docs/Beowulf.txt http://www.gutenberg.org/ebooks/16328.txt.utf-8<br />
<br />
{{hc|# zfs list|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool 5.06M 3.91G 40.0K /zpool<br />
zpool/docs 4.92M 3.91G 4.92M /zpool/docs<br />
</nowiki>}}<br />
<br />
This is showing that we have 4.92M of data used by our books in /zpool/docs.<br />
<br />
=== Time +1 ===<br />
Now take a snapshot of the dataset:<br />
# zfs snapshot zpool/docs@001<br />
<br />
Again run the list command:<br />
{{hc|# zfs list|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool 5.07M 3.91G 40.0K /zpool<br />
zpool/docs 4.92M 3.91G 4.92M /zpool/docs<br />
</nowiki>}}<br />
<br />
Note that the size in the USED col did not change showing that the snapshot take up no space in the zpool since nothing has changed in these three files.<br />
<br />
We can list out the snapshots like so and again confirm the snapshot is taking up no space, but instead '''refers to''' files from the originals that take up, 4.92M (their original size):<br />
{{hc|# zfs list -t snapshot|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool/docs@001 0 - 4.92M -<br />
</nowiki>}}<br />
<br />
=== Time +2 ===<br />
Now let's add some additional content and create a new snapshot:<br />
$ wget -O /zpool/docs/Les_Mis.txt http://www.gutenberg.org/ebooks/135.txt.utf-8<br />
# zfs snapshot zpool/docs@002<br />
<br />
Generate the new list to see how the space has changed:<br />
{{hc|# zfs list -t snapshot|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool/docs@001 25.3K - 4.92M -<br />
zpool/docs@002 0 - 8.17M -<br />
</nowiki>}}<br />
<br />
Here we can see that the 001 snapshot takes up 25.3K of metadata and still points to the original 4.92M of data, and the new snapshot takes-up no space and refers to a total of 8.17M.<br />
<br />
=== Time +3 ===<br />
Now let's simulate an accidental overwrite of a file and subsequent data loss:<br />
$ echo "this book sucks" > /zpool/docs/War_and_Peace.txt<br />
<br />
Again, take another snapshot:<br />
# zfs snapshot zpool/docs@003<br />
<br />
Now list out the snapshots and notice the amount of referred to decreased by about 3.1M:<br />
{{hc|# zfs list -t snapshot|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool/docs@001 25.3K - 4.92M -<br />
zpool/docs@002 25.5K - 8.17M -<br />
zpool/docs@003 0 - 5.04M -<br />
</nowiki>}}<br />
<br />
We can easily recover from this situation by looking inside one or both of our older snapshots for good copy of the file. ZFS stores its snapshots in a hidden directory under the zpool: {{ic|/zpool/files/.zfs/snapshot}}:<br />
{{hc|$ ls -l /zpool/docs/.zfs/snapshot|<nowiki><br />
total 0<br />
dr-xr-xr-x 1 root root 0 Oct 20 16:09 001<br />
dr-xr-xr-x 1 root root 0 Oct 20 16:09 002<br />
dr-xr-xr-x 1 root root 0 Oct 20 16:09 003<br />
</nowiki>}}<br />
<br />
We can copy a good version of the book back out from any of our snapshots to any location on or off the zpool:<br />
% cp /zpool/docs/.zfs/snapshot/002/War_and_Peace.txt /zpool/docs<br />
{{Note|Using <TAB> for autocompletion will not work by default but can be changed by modifying the ''snapdir'' property on the pool or dataset.}}<br />
<br />
# zfs set snapdir=visible zpool/docs<br />
<br />
Now enter a snapshot dir or two:<br />
$ cd /zpool/docs/.zfs/snapshot/001<br />
$ cd /zpool/docs/.zfs/snapshot/002<br />
<br />
Repeat the df command:<br />
$ df -h | grep zpool<br />
zpool 4.0G 0 4.0G 0% /zpool<br />
zpool/docs 4.0G 5.0M 4.0G 1% /zpool/docs<br />
zpool/docs@001 4.0G 4.9M 4.0G 1% /zpool/docs/.zfs/snapshot/001<br />
zpool/docs@002 4.0G 8.2M 4.0G 1% /zpool/docs/.zfs/snapshot/002<br />
<br />
{{Note|Seeing each dir under .zfs the user enters is reversible if the zpool is taken offline and then remounted or if the server is rebooted.}}<br />
<br />
For example:<br />
# zpool export zpool<br />
# zpool import -d /scratch/ zpool<br />
$ df -h | grep zpool<br />
zpool 4.0G 0 4.0G 0% /zpool<br />
zpool/docs 4.0G 5.0M 4.0G 1% /zpool/docs<br />
<br />
=== Time +4 ===<br />
Now that everything is back to normal, we can create another snapshot of this state:<br />
# zfs snapshot zpool/docs@004<br />
<br />
And the list now becomes:<br />
{{hc|# zfs list -t snapshot|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool/docs@001 25.3K - 4.92M -<br />
zpool/docs@002 25.5K - 8.17M -<br />
zpool/docs@003 155K - 5.04M -<br />
zpool/docs@004 0 - 8.17M -<br />
</nowiki>}}<br />
<br />
=== Deleting Snapshots ===<br />
The limit to the number of snapshots users can save is 2^64. User can delete a snapshot like so:<br />
# zfs destroy zpool/docs@001<br />
<br />
{{hc|# zfs list -t snapshot|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool/docs@002 3.28M - 8.17M -<br />
zpool/docs@003 155K - 5.04M -<br />
zpool/docs@004 0 - 8.17M -<br />
</nowiki>}}</div>Wolfdogghttps://wiki.archlinux.org/index.php?title=ZFS/Virtual_disks&diff=398146ZFS/Virtual disks2015-09-03T20:54:06Z<p>Wolfdogg: /* Listing Snapshots */</p>
<hr />
<div>[[Category:File systems]]<br />
{{Related articles start}}<br />
{{Related|ZFS}}<br />
{{Related|Installing Arch Linux on ZFS}}<br />
{{Related|ZFS on FUSE}}<br />
{{Related articles end}}<br />
<br />
This article covers some basic tasks and usage of ZFS. It differs from the main article [[ZFS]] somewhat in that the examples herein are demonstrated on a zpool built from virtual disks. So long as users do not place any critical data on the resulting zpool, they are free to experiment without fear of actual data loss.<br />
<br />
The examples in this article use a set of virtual disks, known in ZFS terms as VDEVs. Users may create their VDEVs either on an existing physical disk or in tmpfs (a RAM disk), depending on the amount of free memory on the system.<br />
<br />
{{Note|Using a file as a VDEV is a great way to experiment with ZFS, but it is not a viable strategy for storing "real" data.}}<br />
<br />
== Install the ZFS Family of Packages ==<br />
Due to licensing differences, ZFS binaries and kernel modules are easily distributed as source, but not so easily packaged as pre-compiled sets. The requisite packages are available in the AUR and in an unofficial repository. Details are provided in the [[ZFS#Installation]] article.<br />
<br />
== Creating and Destroying Zpools ==<br />
Management of ZFS is quite simple, requiring only two utilities:<br />
* {{ic|/usr/bin/zpool}}<br />
* {{ic|/usr/bin/zfs}}<br />
<br />
=== Mirror ===<br />
For zpools with just two drives, it is recommended to use ZFS in ''mirror'' mode, which functions like RAID1 by mirroring the data. While this configuration is fine, higher RAIDZ levels are recommended when more disks are available.<br />
<br />
=== RAIDZ1 ===<br />
The minimum number of drives for a RAIDZ1 is three. It is best to follow the "power of two plus parity" recommendation, both for storage space efficiency and for hitting the "sweet spot" in performance. For RAIDZ1, use three (2+1), five (4+1), or nine (8+1) disks. This example uses the simplest set of (2+1).<br />
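The "power of two plus parity" counts are easy to generate. Below is a small sketch (the raidz_sizes function is our illustrative helper, not a ZFS utility) that prints the recommended disk totals for a given parity level:<br />

```shell
# Recommended RAIDZ disk counts: a power-of-two number of data disks
# (2, 4, 8, 16) plus the parity disks for the chosen RAIDZ level.
raidz_sizes() {
    parity=$1
    out=""
    for data in 2 4 8 16; do
        out="$out $((data + parity))"
    done
    echo "${out# }"    # strip the leading space
}

raidz_sizes 1    # prints: 3 5 9 17
```

For RAIDZ1 this yields 3, 5, and 9 disks (plus 17 for a 16+1 layout), matching the recommendation above.<br />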
<br />
Create three 2G files to serve as virtual hard drives:<br />
$ for i in {1..3}; do truncate -s 2G /scratch/$i.img; done<br />
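truncate creates sparse files: the 2G size is only apparent, and almost no disk space is allocated until data is written. The following sketch demonstrates this on a throwaway temporary file rather than the /scratch images:<br />

```shell
# A sparse file reports its full apparent size but allocates
# (almost) no blocks until it is actually written to.
f=$(mktemp)
truncate -s 2G "$f"
apparent=$(wc -c < "$f")            # apparent size in bytes
allocated=$(du -k "$f" | cut -f1)   # blocks actually allocated, in KiB
echo "$apparent bytes apparent, $allocated KiB allocated"
rm "$f"
```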
<br />
Assemble the RAIDZ1:<br />
# zpool create zpool raidz1 /scratch/1.img /scratch/2.img /scratch/3.img<br />
<br />
Notice that a 3.91G zpool has been created and mounted for us:<br />
{{hc|# zfs list|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool           139K  3.91G  38.6K  /zpool<br />
</nowiki>}}<br />
<br />
The status of the device can be queried:<br />
{{hc|# zpool status zpool|<nowiki><br />
pool: zpool<br />
state: ONLINE<br />
scan: none requested<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
zpool ONLINE 0 0 0<br />
raidz1-0 ONLINE 0 0 0<br />
/scratch/1.img ONLINE 0 0 0<br />
/scratch/2.img ONLINE 0 0 0<br />
/scratch/3.img ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
</nowiki>}}<br />
<br />
To destroy a zpool:<br />
# zpool destroy zpool<br />
<br />
===RAIDZ2 and RAIDZ3===<br />
Higher-level RAIDZ pools can be assembled in like fashion: adjust the for statement to create the additional image files, specify "raidz2" or "raidz3" in the creation step, and append the additional image files to the creation command.<br />
<br />
Summarizing Toponce's guidance:<br />
* RAIDZ2 should use four (2+2), six (4+2), ten (8+2), or eighteen (16+2) disks.<br />
* RAIDZ3 should use five (2+3), seven (4+3), eleven (8+3), or nineteen (16+3) disks.<br />
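Putting that together for the four-disk RAIDZ2 case, here is a dry-run sketch of the adjusted creation steps. It only prints the commands; set DRYRUN=0 on a system with ZFS installed (and adjust the hypothetical /scratch paths) to actually run them:<br />

```shell
# Build (and by default merely print) the commands for a four-disk
# RAIDZ2 pool assembled from sparse image files.
DRYRUN=1
run() { if [ "$DRYRUN" = 1 ]; then echo "$@"; else "$@"; fi; }

imgs=""
for i in 1 2 3 4; do
    run truncate -s 2G "/scratch/$i.img"
    imgs="$imgs /scratch/$i.img"
done
# Four disks = 2 data + 2 parity for RAIDZ2
CREATE=$(run zpool create zpool raidz2 $imgs)
echo "$CREATE"
```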
<br />
== Displaying and Setting Properties ==<br />
Properties need not be specified in the creation step; users can set them on their zpools at any time after creation using {{ic|/usr/bin/zfs}}.<br />
<br />
=== Show Properties ===<br />
To see the current properties of a given zpool:<br />
{{hc|# zfs get all zpool|<nowiki><br />
NAME PROPERTY VALUE SOURCE<br />
zpool type filesystem -<br />
zpool creation Sun Oct 20 8:46 2013 -<br />
zpool used 139K -<br />
zpool available 3.91G -<br />
zpool referenced 38.6K -<br />
zpool compressratio 1.00x -<br />
zpool mounted yes -<br />
zpool quota none default<br />
zpool reservation none default<br />
zpool recordsize 128K default<br />
zpool mountpoint /zpool default<br />
zpool sharenfs off default<br />
zpool checksum on default<br />
zpool compression off default<br />
zpool atime on default<br />
zpool devices on default<br />
zpool exec on default<br />
zpool setuid on default<br />
zpool readonly off default<br />
zpool zoned off default<br />
zpool snapdir hidden default<br />
zpool aclinherit restricted default<br />
zpool canmount on default<br />
zpool xattr on default<br />
zpool copies 1 default<br />
zpool version 5 -<br />
zpool utf8only off -<br />
zpool normalization none -<br />
zpool casesensitivity sensitive -<br />
zpool vscan off default<br />
zpool nbmand off default<br />
zpool sharesmb off default<br />
zpool refquota none default<br />
zpool refreservation none default<br />
zpool primarycache all default<br />
zpool secondarycache all default<br />
zpool usedbysnapshots 0 -<br />
zpool usedbydataset 38.6K -<br />
zpool usedbychildren 99.9K -<br />
zpool usedbyrefreservation 0 -<br />
zpool logbias latency default<br />
zpool dedup off default<br />
zpool mlslabel none default<br />
zpool sync standard default<br />
zpool refcompressratio 1.00x -<br />
zpool written 38.6K -<br />
zpool snapdev hidden default<br />
</nowiki>}}<br />
<br />
=== Modify properties ===<br />
Disable the recording of access time in the zpool:<br />
# zfs set atime=off zpool<br />
<br />
Verify that the property has been set on the zpool:<br />
{{hc|# zfs get atime|<nowiki><br />
NAME PROPERTY VALUE SOURCE<br />
zpool atime off local<br />
</nowiki>}}<br />
<br />
{{Tip|This option, like many others, can also be set when creating the zpool by appending the following to the creation step: {{ic|1=-O atime=off}}}}<br />
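Several properties can be stacked in a single creation command by repeating -O. This sketch only assembles and prints the command line; the image files are the hypothetical ones from the RAIDZ1 example:<br />

```shell
# Repeated -O flags set dataset properties at pool creation time,
# avoiding separate `zfs set` calls afterwards.
CREATE="zpool create -O atime=off -O compression=lz4 zpool raidz1 /scratch/1.img /scratch/2.img /scratch/3.img"
echo "$CREATE"
```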
<br />
== Add Content to the Zpool and Query Compression Performance==<br />
Fill the zpool with files. For this example, first enable compression. ZFS supports several compression algorithms, including lzjb, gzip, gzip-N, zle, and lz4. A setting of simply 'on' selects the default algorithm (lzjb), but lz4 is a nice alternative. See the zfs man page for more.<br />
<br />
# zfs set compression=lz4 zpool<br />
<br />
In this example, the Linux source tarball is copied over, and since lz4 compression has been enabled on the zpool, the corresponding compression ratio can be queried as well.<br />
<br />
$ wget https://www.kernel.org/pub/linux/kernel/v3.x/linux-3.11.tar.xz<br />
$ tar xJf linux-3.11.tar.xz -C /zpool <br />
<br />
To see the compression ratio achieved:<br />
{{hc|# zfs get compressratio|<nowiki><br />
NAME PROPERTY VALUE SOURCE<br />
zpool compressratio 2.32x -<br />
</nowiki>}}<br />
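The compressratio property is essentially the uncompressed size divided by the on-disk compressed size. The arithmetic can be reproduced without ZFS; the sketch below uses gzip on repetitive text purely to show the idea (the 2.32x above came from lz4 on the kernel sources, so numbers will differ):<br />

```shell
# Compute a compression ratio by hand: original bytes / compressed bytes.
tmp=$(mktemp -d)
yes "the quick brown fox jumps over the lazy dog" | head -n 5000 > "$tmp/sample.txt"
orig=$(wc -c < "$tmp/sample.txt")
gzip "$tmp/sample.txt"                  # replaces sample.txt with sample.txt.gz
comp=$(wc -c < "$tmp/sample.txt.gz")
ratio=$(awk -v o="$orig" -v c="$comp" 'BEGIN { printf "%.2fx\n", o / c }')
echo "$ratio"
rm -r "$tmp"
```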
<br />
== Simulate a Disk Failure and Rebuild the Zpool ==<br />
To simulate catastrophic disk failure (i.e. one of the HDDs in the zpool stops functioning), zero out one of the VDEVs.<br />
$ dd if=/dev/zero of=/scratch/2.img bs=4M count=1 2>/dev/null<br />
<br />
Since we used a blocksize (bs) of 4M with a count of 1, dd wrote just 4M and truncated the rest, so the once 2G image file is now a mere 4M:<br />
{{hc|$ ls -lh /scratch |<nowiki><br />
total 317M<br />
-rw-r--r-- 1 facade users 2.0G Oct 20 09:13 1.img<br />
-rw-r--r-- 1 facade users 4.0M Oct 20 09:09 2.img<br />
-rw-r--r-- 1 facade users 2.0G Oct 20 09:13 3.img<br />
</nowiki>}}<br />
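The image shrinks because dd, without conv=notrunc, truncates its output file and leaves only the bs×count bytes it wrote. A safe demonstration on a throwaway file:<br />

```shell
# dd truncates the output file by default, so after writing one 4M
# block only 4M remain -- exactly what happened to 2.img above.
f=$(mktemp)
truncate -s 16M "$f"                      # stand-in for the 2G vdev image
dd if=/dev/zero of="$f" bs=4M count=1 2>/dev/null
size=$(wc -c < "$f")
echo "$size"                              # 4194304 bytes = 4M
rm "$f"
```

To corrupt a vdev's contents while keeping its size, conv=notrunc would be added to the dd invocation instead.<br />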
<br />
The zpool remains online despite the corruption. Note that had a physical disk actually failed, dmesg and related logs would be full of errors. To detect the damage, users must execute a scrub operation.<br />
<br />
# zpool scrub zpool<br />
<br />
Depending on the size and speed of the underlying media as well as the amount of data in the zpool, the scrub may take hours to complete.<br />
The status of the scrub can be queried:<br />
{{hc|# zpool status zpool|<nowiki><br />
pool: zpool<br />
state: DEGRADED<br />
status: One or more devices could not be used because the label is missing or<br />
invalid. Sufficient replicas exist for the pool to continue<br />
functioning in a degraded state.<br />
action: Replace the device using 'zpool replace'.<br />
see: http://zfsonlinux.org/msg/ZFS-8000-4J<br />
scan: scrub repaired 0 in 0h0m with 0 errors on Sun Oct 20 09:13:39 2013<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
zpool DEGRADED 0 0 0<br />
raidz1-0 DEGRADED 0 0 0<br />
/scratch/1.img ONLINE 0 0 0<br />
/scratch/2.img UNAVAIL 0 0 0 corrupted data<br />
/scratch/3.img ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
</nowiki>}}<br />
<br />
Since we zeroed out one of our VDEVs, let's simulate adding a new 2G HDD by creating a new image file and adding it to the zpool:<br />
$ truncate -s 2G /scratch/new.img<br />
# zpool replace zpool /scratch/2.img /scratch/new.img<br />
<br />
Upon replacing the VDEV with a new one, zpool rebuilds the data from the data and parity info in the remaining two good VDEVs. Check the status of this process:<br />
{{hc|# zpool status zpool |<nowiki><br />
pool: zpool<br />
state: ONLINE<br />
scan: resilvered 117M in 0h0m with 0 errors on Sun Oct 20 09:21:22 2013<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
zpool ONLINE 0 0 0<br />
raidz1-0 ONLINE 0 0 0<br />
/scratch/1.img ONLINE 0 0 0<br />
/scratch/new.img ONLINE 0 0 0<br />
/scratch/3.img ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
</nowiki>}}<br />
<br />
== Snapshots and Recovering Deleted Files ==<br />
Since ZFS is a copy-on-write filesystem, modified blocks are written to new locations on disk rather than overwriting the originals in place. Snapshots take advantage of this fact and allow users access to older versions of files, provided a snapshot has been taken.<br />
<br />
{{Note|When using snapshots, many Linux programs that report on filesystem space such as '''df''' will report inaccurate results due to the unique way snapshots are used on ZFS. The output of {{ic|/usr/bin/zfs list}} will deliver an accurate report of the amount of available and free space on the zpool.}}<br />
<br />
To keep this simple, we will create a dataset within the zpool and snapshot it. Snapshots can be taken either of the entire zpool or of a dataset within the pool. They differ only in their naming conventions:<br />
<br />
{| class="wikitable" align="center"<br />
|-<br />
! Snapshot Target !! Snapshot Name<br />
|-<br />
| Entire zpool || zpool@snapshot-name<br />
|- <br />
| Dataset || zpool/dataset@snapshot-name<br />
|- <br />
|}<br />
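Both conventions are simply target@label. A tiny helper sketch (snapname is our illustrative name, not a ZFS command) that composes such names:<br />

```shell
# Compose a snapshot name from a target (pool or pool/dataset) and a label.
snapname() {
    printf '%s@%s\n' "$1" "$2"
}

snapname zpool nightly       # whole pool:  zpool@nightly
snapname zpool/docs 001      # dataset:     zpool/docs@001
```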
<br />
Make a new dataset and take ownership of it.<br />
# zfs create zpool/docs<br />
# chown facade:users /zpool/docs<br />
<br />
{{Note|The lack of a leading / in the create command is intentional, not a typo!}}<br />
<br />
=== Listing Snapshots ===<br />
<br />
To list all snapshots on the system, run the following command:<br />
{{bc|<br />
$ zfs list -t snapshot<br />
NAME USED AVAIL REFER MOUNTPOINT<br />
san@initial_working 896K - 2.85M -<br />
san/vault@initial_working 80K - 152K -<br />
san/vault/falcon@initial_working 64K - 100K -<br />
san/vault/falcon/snapshots@initial_working 80K - 96K -<br />
san/vault/falcon/version@initial_working 64K - 96K -<br />
san/vault/gyrfalcon@initial_working 64K - 132K -<br />
san/vault/gyrfalcon/snapshots@initial_working 64K - 120K -<br />
san/vault/gyrfalcon/version@initial_working 64K - 120K -<br />
san/vault/osprey@initial_working 96K - 144K -<br />
san/vault/osprey/snapshots@initial_working 0 - 24.2M -<br />
san/vault/osprey/version@initial_working 0 - 120K -<br />
san/vault/redtail@initial_working 132K - 17.2M -<br />
san/vault/redtail/c@initial_working 67.3M - 72.9M -<br />
san/vault/redtail/c/AMD@initial_working 204K - 4.24M -<br />
san/vault/redtail/c/Users@initial_working 218M - 694M -<br />
san/vault/redtail/d@initial_working 76K - 132K -<br />
san/vault/redtail/d/UserFiles@initial_working 208M - 1.53T -<br />
san/vault/redtail/d/archive@initial_working 64K - 283M -<br />
san/vault/redtail/e@initial_working 76K - 132K -<br />
san/vault/redtail/e/PublicArchive@initial_working 31.9M - 1.34T -<br />
san/vault/redtail/e/archive@initial_working 64K - 283M -<br />
san/vault/redtail/snapshots@initial_working 64K - 120K -<br />
san/vault/redtail/version@initial_working 375M - 44.3G -<br />
}}<br />
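For scripting, zfs list also accepts -H (no headers, tab-separated) and -o name to emit one bare snapshot name per line, which can then be split on the @ sign. The sketch below mocks that output with a fixed string so it runs without ZFS:<br />

```shell
# Split snapshot names (dataset@label) into their label part.
# The variable stands in for: zfs list -t snapshot -H -o name
names='san@initial_working
san/vault@initial_working
san/vault/falcon@initial_working'
labels=$(printf '%s\n' "$names" | awk -F@ '{ print $2 }' | sort -u)
echo "$labels"    # every snapshot here shares one label: initial_working
```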
<br />
=== Time 0 ===<br />
Add some files to the new dataset (/zpool/docs):<br />
$ wget -O /zpool/docs/Moby_Dick.txt http://www.gutenberg.org/ebooks/2701.txt.utf-8<br />
$ wget -O /zpool/docs/War_and_Peace.txt http://www.gutenberg.org/ebooks/2600.txt.utf-8<br />
$ wget -O /zpool/docs/Beowulf.txt http://www.gutenberg.org/ebooks/16328.txt.utf-8<br />
<br />
{{hc|# zfs list|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool 5.06M 3.91G 40.0K /zpool<br />
zpool/docs 4.92M 3.91G 4.92M /zpool/docs<br />
</nowiki>}}<br />
<br />
This shows that our books use 4.92M of data in /zpool/docs.<br />
<br />
=== Time +1 ===<br />
Now take a snapshot of the dataset:<br />
# zfs snapshot zpool/docs@001<br />
<br />
Again run the list command:<br />
{{hc|# zfs list|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool 5.07M 3.91G 40.0K /zpool<br />
zpool/docs 4.92M 3.91G 4.92M /zpool/docs<br />
</nowiki>}}<br />
<br />
Note that the size in the USED column did not change, showing that the snapshot takes up no space in the zpool, since nothing has changed in these three files.<br />
<br />
We can list the snapshots like so, and again confirm that the snapshot takes up no space, but instead '''refers to''' the original files, which take up 4.92M (their original size):<br />
{{hc|# zfs list -t snapshot|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool/docs@001 0 - 4.92M -<br />
</nowiki>}}<br />
<br />
=== Time +2 ===<br />
Now let's add some additional content and create a new snapshot:<br />
$ wget -O /zpool/docs/Les_Mis.txt http://www.gutenberg.org/ebooks/135.txt.utf-8<br />
# zfs snapshot zpool/docs@002<br />
<br />
Generate the new list to see how the space has changed:<br />
{{hc|# zfs list -t snapshot|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool/docs@001 25.3K - 4.92M -<br />
zpool/docs@002 0 - 8.17M -<br />
</nowiki>}}<br />
<br />
Here we can see that the 001 snapshot now takes up 25.3K of metadata and still points to the original 4.92M of data, while the new snapshot takes up no space and refers to a total of 8.17M.<br />
<br />
=== Time +3 ===<br />
Now let's simulate an accidental overwrite of a file and subsequent data loss:<br />
$ echo "this book sucks" > /zpool/docs/War_and_Peace.txt<br />
<br />
Now take another snapshot:<br />
# zfs snapshot zpool/docs@003<br />
<br />
Now list the snapshots and notice that the amount of data referred to decreased by about 3.1M:<br />
{{hc|# zfs list -t snapshot|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool/docs@001 25.3K - 4.92M -<br />
zpool/docs@002 25.5K - 8.17M -<br />
zpool/docs@003 0 - 5.04M -<br />
</nowiki>}}<br />
<br />
We can easily recover from this situation by looking inside one of our older snapshots for a good copy of the file. ZFS stores its snapshots in a hidden directory under the dataset: {{ic|/zpool/docs/.zfs/snapshot}}:<br />
{{hc|$ ls -l /zpool/docs/.zfs/snapshot|<nowiki><br />
total 0<br />
dr-xr-xr-x 1 root root 0 Oct 20 16:09 001<br />
dr-xr-xr-x 1 root root 0 Oct 20 16:09 002<br />
dr-xr-xr-x 1 root root 0 Oct 20 16:09 003<br />
</nowiki>}}<br />
<br />
We can copy a good version of the book back out from any of our snapshots to any location on or off the zpool:<br />
 $ cp /zpool/docs/.zfs/snapshot/002/War_and_Peace.txt /zpool/docs<br />
{{Note|Using <TAB> for autocompletion will not work by default but can be changed by modifying the ''snapdir'' property on the pool or dataset.}}<br />
<br />
# zfs set snapdir=visible zpool/docs<br />
<br />
Now enter a snapshot dir or two:<br />
$ cd /zpool/docs/.zfs/snapshot/001<br />
$ cd /zpool/docs/.zfs/snapshot/002<br />
<br />
Now examine the df output:<br />
$ df -h | grep zpool<br />
zpool 4.0G 0 4.0G 0% /zpool<br />
zpool/docs 4.0G 5.0M 4.0G 1% /zpool/docs<br />
zpool/docs@001 4.0G 4.9M 4.0G 1% /zpool/docs/.zfs/snapshot/001<br />
zpool/docs@002 4.0G 8.2M 4.0G 1% /zpool/docs/.zfs/snapshot/002<br />
<br />
{{Note|A snapshot directory under .zfs appears mounted (as seen above) only after the user enters it; these mounts are cleared if the zpool is exported and re-imported, or if the server is rebooted.}}<br />
<br />
For example:<br />
# zpool export zpool<br />
# zpool import -d /scratch/ zpool<br />
$ df -h | grep zpool<br />
zpool 4.0G 0 4.0G 0% /zpool<br />
zpool/docs 4.0G 5.0M 4.0G 1% /zpool/docs<br />
<br />
=== Time +4 ===<br />
Now that everything is back to normal, we can create another snapshot of this state:<br />
# zfs snapshot zpool/docs@004<br />
<br />
And the list now becomes:<br />
{{hc|# zfs list -t snapshot|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool/docs@001 25.3K - 4.92M -<br />
zpool/docs@002 25.5K - 8.17M -<br />
zpool/docs@003 155K - 5.04M -<br />
zpool/docs@004 0 - 8.17M -<br />
</nowiki>}}<br />
<br />
=== Deleting Snapshots ===<br />
The limit on the number of snapshots users can save is 2^64. Users can delete a snapshot like so:<br />
# zfs destroy zpool/docs@001<br />
<br />
{{hc|# zfs list -t snapshot|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool/docs@002 3.28M - 8.17M -<br />
zpool/docs@003 155K - 5.04M -<br />
zpool/docs@004 0 - 8.17M -<br />
</nowiki>}}</div>
<hr />
<div>[[Category:File systems]]<br />
{{Related articles start}}<br />
{{Related|ZFS}}<br />
{{Related|Installing Arch Linux on ZFS}}<br />
{{Related|ZFS on FUSE}}<br />
{{Related articles end}}<br />
<br />
This article covers some basic tasks and usage of ZFS. It differs from the main article [[ZFS]] somewhat in that the examples herein are demonstrated on a zpool built from virtual disks. So long as users do not place any critical data on the resulting zpool, they are free to experiment without fear of actual data loss.<br />
<br />
The examples in this article are shown with a set of virtual discs known in ZFS terms as VDEVs. Users may create their VDEVs either on an existing physical disk or in tmpfs (RAMdisk) depending on the amount of free memory on the system.<br />
<br />
{{Note|Using a file as a VDEV is a great method to play with ZFS but isn't viable strategy for storing "real" data.}}<br />
<br />
== Install the ZFS Family of Packages ==<br />
Due to differences in licencing, ZFS bins and kernel modules are easily distributed from source, but no-so-easily packaged as pre-compiled sets. The requisite packages are available in the AUR and in an unofficial repo. Details are provided on the [[ZFS#Installation]] article.<br />
<br />
== Creating and Destroying Zpools ==<br />
Management of ZFS is pretty simplistic with only two utils needed:<br />
* {{ic|/usr/bin/zpool}}<br />
* {{ic|/usr/bin/zfs}}<br />
<br />
=== Mirror ===<br />
For zpools with just two drives, it is recommended to use ZFS in ''mirror'' mode which functions like a RAID1 mirroring the data. While this configuration is fine, higher RAIDZ levels are recommended.<br />
<br />
=== RAIDZ1 ===<br />
The minimum number of drives for a RAIDZ1 is three. It's best to follow the "power of two plus parity" recommendation. This is for storage space efficiency and hitting the "sweet spot" in performance. For RAIDZ-1, use three (2+1), five (4+1), or nine (8+1) disks. This example will use the most simplistic set of (2+1).<br />
<br />
Create three x 2G files to serve as virtual hardrives:<br />
$ for i in {1..3}; do truncate -s 2G /scratch/$i.img; done<br />
<br />
Assemble the RAIDZ1:<br />
# zpool create zpool raidz1 /scratch/1.img /scratch/2.img /scratch/3.img<br />
<br />
Notice that a 3.91G zpool has been created and mounted for us:<br />
{{hc|# zfs list|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
test 139K 3.91G 38.6K /zpool<br />
</nowiki>}}<br />
<br />
The status of the device can be queried:<br />
{{hc|# zpool status zpool|<nowiki><br />
pool: zpool<br />
state: ONLINE<br />
scan: none requested<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
zpool ONLINE 0 0 0<br />
raidz1-0 ONLINE 0 0 0<br />
/scratch/1.img ONLINE 0 0 0<br />
/scratch/2.img ONLINE 0 0 0<br />
/scratch/3.img ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
</nowiki>}}<br />
<br />
To destroy a zpool:<br />
# zpool destroy zpool<br />
<br />
===RAIDZ2 and RAIDZ3===<br />
Higher level ZRAIDs can be assembled in a like fashion by adjusting the for statement to create the image files, by specifying "raidz2" or "raidz3" in the creation step, and by appending the additional image files to the creation step.<br />
<br />
Summarizing Toponce's guidance:<br />
* RAIDZ2 should use four (2+2), six (4+2), ten (8+2), or eighteen (16+2) disks.<br />
* RAIDZ3 should use five (2+3), seven (4+3), eleven (8+3), or nineteen (16+3) disks.<br />
<br />
== Displaying and Setting Properties ==<br />
Without specifying them in the creation step, users can set properties of their zpools at any time after its creation using {{ic|/usr/bin/zfs}}.<br />
<br />
=== Show Properties ===<br />
To see the current properties of a given zpool:<br />
{{hc|# zfs get all zpool|<nowiki><br />
NAME PROPERTY VALUE SOURCE<br />
zpool type filesystem -<br />
zpool creation Sun Oct 20 8:46 2013 -<br />
zpool used 139K -<br />
zpool available 3.91G -<br />
zpool referenced 38.6K -<br />
zpool compressratio 1.00x -<br />
zpool mounted yes -<br />
zpool quota none default<br />
zpool reservation none default<br />
zpool recordsize 128K default<br />
zpool mountpoint /zpool default<br />
zpool sharenfs off default<br />
zpool checksum on default<br />
zpool compression off default<br />
zpool atime on default<br />
zpool devices on default<br />
zpool exec on default<br />
zpool setuid on default<br />
zpool readonly off default<br />
zpool zoned off default<br />
zpool snapdir hidden default<br />
zpool aclinherit restricted default<br />
zpool canmount on default<br />
zpool xattr on default<br />
zpool copies 1 default<br />
zpool version 5 -<br />
zpool utf8only off -<br />
zpool normalization none -<br />
zpool casesensitivity sensitive -<br />
zpool vscan off default<br />
zpool nbmand off default<br />
zpool sharesmb off default<br />
zpool refquota none default<br />
zpool refreservation none default<br />
zpool primarycache all default<br />
zpool secondarycache all default<br />
zpool usedbysnapshots 0 -<br />
zpool usedbydataset 38.6K -<br />
zpool usedbychildren 99.9K -<br />
zpool usedbyrefreservation 0 -<br />
zpool logbias latency default<br />
zpool dedup off default<br />
zpool mlslabel none default<br />
zpool sync standard default<br />
zpool refcompressratio 1.00x -<br />
zpool written 38.6K -<br />
zpool snapdev hidden default<br />
</nowiki>}}<br />
<br />
=== Modify properties ===<br />
Disable the recording of access time in the zpool:<br />
# zfs set atime=off zpool<br />
<br />
Verify that the property has been set on the zpool:<br />
{{hc|# zfs get atime|<nowiki><br />
NAME PROPERTY VALUE SOURCE<br />
zpool atime off local<br />
</nowiki>}}<br />
<br />
{{Tip|This option, like many others, can also be toggled off when creating the zpool by appending the following to the creation step: -O atime=off}}<br />
<br />
== Add Content to the Zpool and Query Compression Performance==<br />
Fill the zpool with files. For this example, first enable compression. ZFS supports several compression algorithms, including lzjb, gzip, gzip-N, zle, and lz4. A setting of simply 'on' selects the default algorithm (lzjb), but lz4 is a nice alternative. See the zfs man page for more.<br />
<br />
# zfs set compression=lz4 zpool<br />
<br />
In this example, the linux source tarball is copied over and since lz4 compression has been enabled on the zpool, the corresponding compression ratio can be queried as well.<br />
<br />
$ wget https://www.kernel.org/pub/linux/kernel/v3.x/linux-3.11.tar.xz<br />
$ tar xJf linux-3.11.tar.xz -C /zpool <br />
<br />
To see the compression ratio achieved:<br />
{{hc|# zfs get compressratio|<nowiki><br />
NAME PROPERTY VALUE SOURCE<br />
zpool compressratio 2.32x -<br />
</nowiki>}}<br />
<br />
== Simulate a Disk Failure and Rebuild the Zpool ==<br />
To simulate catastrophic disk failure (i.e. one of the HDDs in the zpool stops functioning), zero out one of the VDEVs.<br />
$ dd if=/dev/zero of=/scratch/2.img bs=4M count=1 2>/dev/null<br />
<br />
Since we used a blocksize (bs) of 4M, the once 2G image file is now a mere 4M:<br />
{{hc|$ ls -lh /scratch |<nowiki><br />
total 317M<br />
-rw-r--r-- 1 facade users 2.0G Oct 20 09:13 1.img<br />
-rw-r--r-- 1 facade users 4.0M Oct 20 09:09 2.img<br />
-rw-r--r-- 1 facade users 2.0G Oct 20 09:13 3.img<br />
</nowiki>}}<br />
<br />
The zpool remains online despite the corruption. Note that if a physical disk had failed, dmesg and related logs would be full of errors. To detect when damage occurs, users must execute a scrub operation.<br />
<br />
# zpool scrub zpool<br />
<br />
Depending on the size and speed of the underlying media as well as the amount of data in the zpool, the scrub may take hours to complete.<br />
The status of the scrub can be queried:<br />
{{hc|# zpool status zpool|<nowiki><br />
pool: zpool<br />
state: DEGRADED<br />
status: One or more devices could not be used because the label is missing or<br />
invalid. Sufficient replicas exist for the pool to continue<br />
functioning in a degraded state.<br />
action: Replace the device using 'zpool replace'.<br />
see: http://zfsonlinux.org/msg/ZFS-8000-4J<br />
scan: scrub repaired 0 in 0h0m with 0 errors on Sun Oct 20 09:13:39 2013<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
zpool DEGRADED 0 0 0<br />
raidz1-0 DEGRADED 0 0 0<br />
/scratch/1.img ONLINE 0 0 0<br />
/scratch/2.img UNAVAIL 0 0 0 corrupted data<br />
/scratch/3.img ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
</nowiki>}}<br />
<br />
Since we zeroed out one of our VDEVs, let's simulate adding a new 2G HDD by creating a new image file and adding it to the zpool:<br />
$ truncate -s 2G /scratch/new.img<br />
# zpool replace zpool /scratch/2.img /scratch/new.img<br />
<br />
Upon replacing the VDEV with a new one, ZFS rebuilds (resilvers) the data from the data and parity information stored on the remaining two good VDEVs. Check the status of this process:<br />
{{hc|# zpool status zpool |<nowiki><br />
pool: zpool<br />
state: ONLINE<br />
scan: resilvered 117M in 0h0m with 0 errors on Sun Oct 20 09:21:22 2013<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
zpool ONLINE 0 0 0<br />
raidz1-0 ONLINE 0 0 0<br />
/scratch/1.img ONLINE 0 0 0<br />
/scratch/new.img ONLINE 0 0 0<br />
/scratch/3.img ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
</nowiki>}}<br />
<br />
== Snapshots and Recovering Deleted Files ==<br />
Since ZFS is a copy-on-write filesystem, files are never modified in place: saving changes to a file actually writes a new copy of the changed blocks while the old ones remain on disk. Snapshots take advantage of this fact and allow users access to older versions of files, provided a snapshot has been taken.<br />
<br />
{{Note|When using snapshots, many Linux programs that report on filesystem space such as '''df''' will report inaccurate results due to the unique way snapshots are used on ZFS. The output of {{ic|/usr/bin/zfs list}} will deliver an accurate report of the amount of used and free space on the zpool.}}<br />
<br />
<br />
To keep this simple, we will create a dataset within the zpool and snapshot it. Snapshots can be taken either of the entire zpool or of a dataset within the pool. They differ only in their naming conventions:<br />
<br />
{| class="wikitable" align="center"<br />
|-<br />
! Snapshot Target !! Snapshot Name<br />
|-<br />
| Entire zpool || zpool@snapshot-name<br />
|- <br />
| Dataset || zpool/dataset@snapshot-name<br />
|- <br />
|}<br />
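Date-stamped names are a popular choice for ''snapshot-name''; a small helper (hypothetical, not part of ZFS) can build names matching the conventions above:<br />
<br />
```shell
# build a snapshot name of the form target@YYYY-MM-DD
# (the date-based naming scheme is just one possible convention)
snapname() {
    printf '%s@%s\n' "$1" "$(date +%F)"
}

snapname zpool            # entire zpool, e.g. zpool@2013-10-20
snapname zpool/docs       # dataset, e.g. zpool/docs@2013-10-20
```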
<br />
Make a new dataset and take ownership of it.<br />
# zfs create zpool/docs<br />
# chown facade:users /zpool/docs<br />
<br />
{{Note|The lack of a preceding / in the create command is intentional, not a typo!}}<br />
<br />
=== Listing Snapshots ===<br />
<br />
To list any snapshots, run the following command:<br />
{{bc|<br />
$ zfs list -t snapshot<br />
NAME USED AVAIL REFER MOUNTPOINT<br />
san@initial_working 896K - 2.85M -<br />
san/vault@initial_working 80K - 152K -<br />
san/vault/falcon@initial_working 64K - 100K -<br />
san/vault/falcon/snapshots@initial_working 80K - 96K -<br />
san/vault/falcon/version@initial_working 64K - 96K -<br />
san/vault/gyrfalcon@initial_working 64K - 132K -<br />
san/vault/gyrfalcon/snapshots@initial_working 64K - 120K -<br />
san/vault/gyrfalcon/version@initial_working 64K - 120K -<br />
san/vault/osprey@initial_working 96K - 144K -<br />
san/vault/osprey/snapshots@initial_working 0 - 24.2M -<br />
san/vault/osprey/version@initial_working 0 - 120K -<br />
san/vault/redtail@initial_working 132K - 17.2M -<br />
san/vault/redtail/c@initial_working 67.3M - 72.9M -<br />
san/vault/redtail/c/AMD@initial_working 204K - 4.24M -<br />
san/vault/redtail/c/Users@initial_working 218M - 694M -<br />
san/vault/redtail/d@initial_working 76K - 132K -<br />
san/vault/redtail/d/UserFiles@initial_working 208M - 1.53T -<br />
san/vault/redtail/d/archive@initial_working 64K - 283M -<br />
san/vault/redtail/e@initial_working 76K - 132K -<br />
san/vault/redtail/e/PublicArchive@initial_working 31.9M - 1.34T -<br />
san/vault/redtail/e/archive@initial_working 64K - 283M -<br />
san/vault/redtail/snapshots@initial_working 64K - 120K -<br />
san/vault/redtail/version@initial_working 375M - 44.3G -<br />
}} <br />
<br />
=== Time 0 ===<br />
Add some files to the new dataset (/zpool/docs):<br />
$ wget -O /zpool/docs/Moby_Dick.txt http://www.gutenberg.org/ebooks/2701.txt.utf-8<br />
$ wget -O /zpool/docs/War_and_Peace.txt http://www.gutenberg.org/ebooks/2600.txt.utf-8<br />
$ wget -O /zpool/docs/Beowulf.txt http://www.gutenberg.org/ebooks/16328.txt.utf-8<br />
<br />
{{hc|# zfs list|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool 5.06M 3.91G 40.0K /zpool<br />
zpool/docs 4.92M 3.91G 4.92M /zpool/docs<br />
</nowiki>}}<br />
<br />
This shows that the books use 4.92M of data in /zpool/docs.<br />
<br />
=== Time +1 ===<br />
Now take a snapshot of the dataset:<br />
# zfs snapshot zpool/docs@001<br />
<br />
Again run the list command:<br />
{{hc|# zfs list|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool 5.07M 3.91G 40.0K /zpool<br />
zpool/docs 4.92M 3.91G 4.92M /zpool/docs<br />
</nowiki>}}<br />
<br />
Note that the size in the USED column did not change, showing that the snapshot takes up no space in the zpool since nothing has changed in these three files.<br />
<br />
We can list the snapshots like so and again confirm that the snapshot takes up no space, but instead '''refers to''' the original files, which take up 4.92M (their original size):<br />
{{hc|# zfs list -t snapshot|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool/docs@001 0 - 4.92M -<br />
</nowiki>}}<br />
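For larger listings like the one shown earlier, the USED column can be totalled with a quick awk pass over saved output. The sample rows below reuse the values above; handling only the K and M binary suffixes is an assumption of this sketch:<br />
<br />
```shell
# sum the USED column of a saved `zfs list -t snapshot` listing,
# scaling the K/M binary suffixes to bytes
awk 'NR > 1 {
    v = $2 + 0                       # numeric part of the USED field
    if ($2 ~ /K$/) v *= 1024
    if ($2 ~ /M$/) v *= 1024 * 1024
    total += v
}
END { printf "%.0f bytes used by snapshots\n", total }' <<'EOF'
NAME            USED  AVAIL  REFER  MOUNTPOINT
zpool/docs@001  25.3K     -  4.92M  -
zpool/docs@002      0     -  8.17M  -
EOF
```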
<br />
=== Time +2 ===<br />
Now let's add some additional content and create a new snapshot:<br />
$ wget -O /zpool/docs/Les_Mis.txt http://www.gutenberg.org/ebooks/135.txt.utf-8<br />
# zfs snapshot zpool/docs@002<br />
<br />
Generate the new list to see how the space has changed:<br />
{{hc|# zfs list -t snapshot|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool/docs@001 25.3K - 4.92M -<br />
zpool/docs@002 0 - 8.17M -<br />
</nowiki>}}<br />
<br />
Here we can see that the 001 snapshot takes up 25.3K of metadata and still points to the original 4.92M of data, while the new snapshot takes up no space and refers to a total of 8.17M.<br />
<br />
=== Time +3 ===<br />
Now let's simulate an accidental overwrite of a file and subsequent data loss:<br />
$ echo "this book sucks" > /zpool/docs/War_and_Peace.txt<br />
<br />
Again, take another snapshot:<br />
# zfs snapshot zpool/docs@003<br />
<br />
Now list the snapshots and notice that the amount of data referred to decreased by about 3.1M:<br />
{{hc|# zfs list -t snapshot|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool/docs@001 25.3K - 4.92M -<br />
zpool/docs@002 25.5K - 8.17M -<br />
zpool/docs@003 0 - 5.04M -<br />
</nowiki>}}<br />
<br />
We can easily recover from this situation by looking inside one or both of our older snapshots for a good copy of the file. ZFS stores its snapshots in a hidden directory under the dataset: {{ic|/zpool/docs/.zfs/snapshot}}:<br />
{{hc|$ ls -l /zpool/docs/.zfs/snapshot|<nowiki><br />
total 0<br />
dr-xr-xr-x 1 root root 0 Oct 20 16:09 001<br />
dr-xr-xr-x 1 root root 0 Oct 20 16:09 002<br />
dr-xr-xr-x 1 root root 0 Oct 20 16:09 003<br />
</nowiki>}}<br />
<br />
We can copy a good version of the book back out from any of our snapshots to any location on or off the zpool:<br />
$ cp /zpool/docs/.zfs/snapshot/002/War_and_Peace.txt /zpool/docs<br />
{{Note|Using <TAB> for autocompletion will not work by default but can be changed by modifying the ''snapdir'' property on the pool or dataset.}}<br />
<br />
# zfs set snapdir=visible zpool/docs<br />
<br />
Now enter a snapshot dir or two:<br />
$ cd /zpool/docs/.zfs/snapshot/001<br />
$ cd /zpool/docs/.zfs/snapshot/002<br />
<br />
Now run df to see the mounted snapshots:<br />
$ df -h | grep zpool<br />
zpool 4.0G 0 4.0G 0% /zpool<br />
zpool/docs 4.0G 5.0M 4.0G 1% /zpool/docs<br />
zpool/docs@001 4.0G 4.9M 4.0G 1% /zpool/docs/.zfs/snapshot/001<br />
zpool/docs@002 4.0G 8.2M 4.0G 1% /zpool/docs/.zfs/snapshot/002<br />
<br />
{{Note|The snapshot directories under .zfs are mounted only as the user enters them; these mounts are reversed if the zpool is exported and re-imported or if the server is rebooted.}}<br />
<br />
For example:<br />
# zpool export zpool<br />
# zpool import -d /scratch/ zpool<br />
$ df -h | grep zpool<br />
zpool 4.0G 0 4.0G 0% /zpool<br />
zpool/docs 4.0G 5.0M 4.0G 1% /zpool/docs<br />
<br />
=== Time +4 ===<br />
Now that everything is back to normal, we can create another snapshot of this state:<br />
# zfs snapshot zpool/docs@004<br />
<br />
And the list now becomes:<br />
{{hc|# zfs list -t snapshot|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool/docs@001 25.3K - 4.92M -<br />
zpool/docs@002 25.5K - 8.17M -<br />
zpool/docs@003 155K - 5.04M -<br />
zpool/docs@004 0 - 8.17M -<br />
</nowiki>}}<br />
<br />
=== Deleting Snapshots ===<br />
The limit to the number of snapshots users can save is 2^64. Users can delete a snapshot like so:<br />
# zfs destroy zpool/docs@001<br />
<br />
{{hc|# zfs list -t snapshot|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool/docs@002 3.28M - 8.17M -<br />
zpool/docs@003 155K - 5.04M -<br />
zpool/docs@004 0 - 8.17M -<br />
</nowiki>}}</div>Wolfdogghttps://wiki.archlinux.org/index.php?title=Talk:Jenkins&diff=371891Talk:Jenkins2015-04-30T06:16:36Z<p>Wolfdogg: </p>
<hr />
<div>1) can you give me an example of what you mean by shell alias? <br />
2) I want to mention the value of this wiki, its as rare as ZFS help around here, and a great tool. So anything would help, if we can keep. --[[User:Wolfdogg|Wolfdogg]] ([[User talk:Wolfdogg|talk]]) 06:16, 30 April 2015 (UTC)</div>Wolfdogghttps://wiki.archlinux.org/index.php?title=Talk:Jenkins&diff=371890Talk:Jenkins2015-04-30T06:16:06Z<p>Wolfdogg: Created page with "can you give me an example of what you mean by shell alias? I want to mention the value of this wiki, its as rare as ZFS help around here, and a great tool. So anything woul..."</p>
<hr />
<div>can you give me an example of what you mean by shell alias? I want to mention the value of this wiki, its as rare as ZFS help around here, and a great tool. So anything would help, if we can keep. --[[User:Wolfdogg|Wolfdogg]] ([[User talk:Wolfdogg|talk]]) 06:16, 30 April 2015 (UTC)</div>Wolfdogghttps://wiki.archlinux.org/index.php?title=Jenkins&diff=371889Jenkins2015-04-30T06:13:37Z<p>Wolfdogg: had user:group backwards</p>
<hr />
<div>[[Category:Package development]]<br />
[[Category:Web Server]]<br />
{{Stub|99% of this article is about creating a launcher script that should probably better be contributed upstream, although server applications shouldn't have to be run manually often, and a shell alias would do the job in a simpler way for testing.}}<br />
<br />
[https://jenkins-ci.org/ Jenkins] is a continuous integration server: basically it is a Java application that helps manage web applications. Note that applications do not have to contain any Java themselves, so you can benefit from it for PHP applications, Node.js, etc.<br />
<br />
== Installing ==<br />
<br />
[[Install]] {{Pkg|jenkins}} from the [[official repositories]].<br />
<br />
== Configuring == <br />
<br />
The configuration file is located in {{ic|/etc/conf.d/jenkins}}, open this file and look it over:<br />
<br />
JAVA=/usr/bin/java<br />
JAVA_ARGS=-Xmx512m<br />
JAVA_OPTS=<br />
JENKINS_USER=jenkins<br />
JENKINS_HOME=/var/lib/jenkins<br />
JENKINS_WAR=/usr/share/java/jenkins/jenkins.war<br />
JENKINS_WEBROOT=--webroot=/var/cache/jenkins<br />
JENKINS_PORT=--httpPort=8090<br />
JENKINS_AJPPORT=--ajp13Port=-1<br />
JENKINS_OPTS=<br />
JENKINS_COMMAND_LINE="$JAVA $JAVA_ARGS $JAVA_OPTS -jar $JENKINS_WAR $JENKINS_WEBROOT $JENKINS_PORT $JENKINS_AJPPORT $JENKINS_OPTS"<br />
<br />
Notice the location of the war file. Change to this directory, then run Jenkins from there.<br />
<br />
=== Creating an automated script to start Jenkins ===<br />
<br />
Create a new file with the following content:<br />
<br />
{{hc|/usr/local/bin/startjenkins|<br />
#!/bin/bash<br />
echo<br />
echo starting jenkins now<br />
java -jar /usr/share/java/jenkins/jenkins.war<br />
}}<br />
<br />
You can also use {{ic|/usr/local/share}} depending on what is already in your path; run {{ic|echo $PATH}} to find this information.<br />
<br />
Change its permissions:<br />
<br />
# chown ''yourusername'':users /usr/local/bin/startjenkins<br />
# chmod 755 /usr/local/bin/startjenkins<br />
<br />
== Running == <br />
<br />
If you have added the startjenkins file to a directory that is in your path, all you need to do is run the following command:<br />
<br />
$ startjenkins<br />
<br />
Otherwise you will need to run it directly from its actual location:<br />
<br />
$ java -jar /usr/share/java/jenkins/jenkins.war<br />
<br />
== Accessing ==<br />
<br />
You can now log into your jenkins at {{ic|<nowiki>http://localhost:8080</nowiki>}}.</div>Wolfdogghttps://wiki.archlinux.org/index.php?title=BIND&diff=371141BIND2015-04-25T20:16:25Z<p>Wolfdogg: /* 3. Setting this to be your default DNS server */</p>
<hr />
<div>[[Category:Domain Name System]]<br />
[[de:BIND]]<br />
[[fr:BIND]]<br />
[[ja:BIND]]<br />
[[zh-CN:BIND]]<br />
Berkeley Internet Name Daemon (BIND) is the reference implementation of the Domain Name System (DNS) protocols.<br />
<br />
== Installation ==<br />
These few steps show you how to install BIND and set it up as a local caching-only server.<br />
<br />
[[pacman|Install]] the {{Pkg|bind}} package which can be found in the [[official repositories]].<br />
<br />
Optionally edit {{ic|/etc/named.conf}} and add this under the options section, to only allow connections from the localhost:<br />
listen-on { 127.0.0.1; };<br />
<br />
Edit {{ic|/etc/resolv.conf}} to use the local DNS server:<br />
nameserver 127.0.0.1<br />
<br />
[[Daemon#Managing daemons|Start]] the '''named''' daemon.<br />
<br />
== A configuration template for running a domain ==<br />
This is a simple tutorial on how to set up a home network DNS server with BIND. In our example we use "domain.tld" as our domain.<br />
<br />
For a more elaborate example see [http://www.howtoforge.com/two_in_one_dns_bind9_views Two-in-one DNS server with BIND9].<br />
<br />
=== 1. Creating a zonefile ===<br />
# nano /var/named/domain.tld.zone<br />
<br />
$TTL 7200<br />
; domain.tld<br />
@ IN SOA ns01.domain.tld. postmaster.domain.tld. (<br />
2007011601 ; Serial<br />
28800 ; Refresh<br />
1800 ; Retry<br />
604800 ; Expire - 1 week<br />
86400 ) ; Minimum<br />
IN NS ns01<br />
IN NS ns02<br />
ns01 IN A 0.0.0.0<br />
ns02 IN A 0.0.0.0<br />
localhost IN A 127.0.0.1<br />
@ IN MX 10 mail<br />
imap IN CNAME mail<br />
smtp IN CNAME mail<br />
@ IN A 0.0.0.0<br />
www IN A 0.0.0.0<br />
mail IN A 0.0.0.0<br />
@ IN TXT "v=spf1 mx"<br />
<br />
$TTL defines the default time-to-live in seconds for all record types. In this example it is 2 hours.<br />
<br />
'''The serial must be incremented manually every time you change a resource record for the zone, before restarting named.''' If you forget to do this, slaves will not re-transfer the zone: they only do so if the serial is greater than it was the last time they transferred the zone.<br />
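Because the bump is easy to forget, a small helper can compute the next serial. The sketch below assumes the common YYYYMMDDnn convention used in the zonefile above; the helper name and the scheme itself are illustrative, not part of BIND:<br />
<br />
```shell
# compute the next zone serial in YYYYMMDDnn form
# $1 = current serial, $2 = today's date as YYYYMMDD
next_serial() {
    old=$1 today=$2
    if [ "${old%??}" = "$today" ]; then
        rev=${old#"$today"}     # two-digit revision counter
        rev=${rev#0}            # strip leading zero to avoid octal arithmetic
        printf '%s%02d\n' "$today" $(( rev + 1 ))
    else
        printf '%s01\n' "$today"   # new day: start at revision 01
    fi
}

next_serial 2007011601 20070116   # → 2007011602
next_serial 2007011601 20070120   # → 2007012001
```
<br />
Feed it the serial currently in the zonefile plus today's date, and write the result back before reloading named.<br />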
<br />
=== 2. Configuring master server ===<br />
Add your zone to {{ic|/etc/named.conf}}:<br />
zone "domain.tld" IN {<br />
type master;<br />
file "domain.tld.zone";<br />
allow-update { none; };<br />
notify no;<br />
};<br />
<br />
Start/restart the daemon and you are done.<br />
systemctl enable named.service<br />
systemctl start named.service<br />
<br />
or reload the configuration files <br />
<br />
systemctl reload named<br />
<br />
The latter option will keep your nameserver available while still allowing the configuration change.<br />
<br />
=== 3. Setting this to be your default DNS server ===<br />
<br />
If you are running your own DNS server, you might as well use it for all DNS lookups. This will require the ability to do ''recursive'' lookups. In order to prevent [https://www.us-cert.gov/ncas/alerts/TA13-088A DNS Amplification Attacks], recursion is turned off by default for most resolvers. The default Arch {{ic|/etc/named.conf}} file allows for recursion only on the loopback interface:<br />
<br />
allow-recursion { 127.0.0.1; };<br />
<br />
So to facilitate general DNS lookups from your host, your {{ic|/etc/resolv.conf}} file must include this line:<br />
<br />
nameserver 127.0.0.1<br />
<br />
Since {{ic|/etc/resolv.conf}} is a generated file, edit {{ic|/etc/resolvconf.conf}} and uncomment the<br />
# name_servers=127.0.0.1<br />
line. {{ic|/etc/resolvconf.conf}} will consequently be set up properly on subsequent reboots.<br />
<br />
If you want to provide name service for your local network; e.g. 192.168.0, you must add the appropriate range of IP addresses to {{ic|/etc/named.conf}}:<br />
<br />
allow-recursion { 192.168.0.0/24; 127.0.0.1; };<br />
<br />
== BIND as simple DNS forwarder ==<br />
If you have problems with, for example, VPN connections, they can sometimes be solved by setting-up a forwarding DNS server. This is very simple with BIND. Add these lines to {{ic|/etc/named.conf}} in either the global options section or in a specific zone, and change IP address according to your setup.<br />
<br />
options {<br />
listen-on { 192.168.66.1; };<br />
forwarders { 8.8.8.8; 8.8.4.4; };<br />
};<br />
<br />
Don't forget to restart the service!<br />
systemctl restart named.service<br />
<br />
== Running BIND in a chrooted environment ==<br />
Running in a [[chroot]] environment is not required but improves security. See [[BIND (chroot)]] for how to do this.<br />
<br />
== Configuring BIND to serve DNSSEC signed zones ==<br />
See [[DNSSEC#BIND (serving signed DNS zones)]]<br />
<br />
== Automatically listen on new interfaces ==<br />
<br />
By default, BIND scans for new interfaces, and stops listening on interfaces which no longer exist, every hour. You can tune this interval by adding:<br />
interface-interval <rescan-timeout-in-minutes>;<br />
to the {{ic|named.conf}} options section. The maximum value is 28 days (40320 minutes).<br><br />
You can disable this feature by setting its value to 0.<br />
<br />
Then restart the service.<br />
<br />
==See also==<br />
*[[BIND (chroot)]]<br />
<br />
== BIND Resources ==<br />
* [http://www.reedmedia.net/books/bind-dns/ BIND 9 DNS Administration Reference Book]<br />
* [http://shop.oreilly.com/product/9780596100575.do DNS and BIND by Cricket Liu and Paul Albitz]<br />
* [http://www.netwidget.net/books/apress/dns/intro.html Pro DNS and BIND]<br />
* [http://www.isc.org/ Internet Systems Consortium, Inc. (ISC)]<br />
* [http://www.menandmice.com/knowledgehub/dnsglossary DNS Glossary]</div>Wolfdogghttps://wiki.archlinux.org/index.php?title=BIND&diff=371140BIND2015-04-25T20:16:01Z<p>Wolfdogg: /* 2. Configuring master server */</p>
<hr />
<div>[[Category:Domain Name System]]<br />
[[de:BIND]]<br />
[[fr:BIND]]<br />
[[ja:BIND]]<br />
[[zh-CN:BIND]]<br />
Berkeley Internet Name Daemon (BIND) is the reference implementation of the Domain Name System (DNS) protocols.<br />
<br />
== Installation ==<br />
These few steps show you how to install BIND and set it up as a local caching-only server.<br />
<br />
[[pacman|Install]] the {{Pkg|bind}} package which can be found in the [[official repositories]].<br />
<br />
Optionally edit {{ic|/etc/named.conf}} and add this under the options section, to only allow connections from the localhost:<br />
listen-on { 127.0.0.1; };<br />
<br />
Edit {{ic|/etc/resolv.conf}} to use the local DNS server:<br />
nameserver 127.0.0.1<br />
<br />
[[Daemon#Managing daemons|Start]] the '''named''' daemon.<br />
<br />
== A configuration template for running a domain ==<br />
This is a simple tutorial on how to set up a home network DNS server with BIND. In our example we use "domain.tld" as our domain.<br />
<br />
For a more elaborate example see [http://www.howtoforge.com/two_in_one_dns_bind9_views Two-in-one DNS server with BIND9].<br />
<br />
=== 1. Creating a zonefile ===<br />
# nano /var/named/domain.tld.zone<br />
<br />
$TTL 7200<br />
; domain.tld<br />
@ IN SOA ns01.domain.tld. postmaster.domain.tld. (<br />
2007011601 ; Serial<br />
28800 ; Refresh<br />
1800 ; Retry<br />
604800 ; Expire - 1 week<br />
86400 ) ; Minimum<br />
IN NS ns01<br />
IN NS ns02<br />
ns01 IN A 0.0.0.0<br />
ns02 IN A 0.0.0.0<br />
localhost IN A 127.0.0.1<br />
@ IN MX 10 mail<br />
imap IN CNAME mail<br />
smtp IN CNAME mail<br />
@ IN A 0.0.0.0<br />
www IN A 0.0.0.0<br />
mail IN A 0.0.0.0<br />
@ IN TXT "v=spf1 mx"<br />
<br />
$TTL defines the default time-to-live in seconds for all record types. In this example it is 2 hours.<br />
<br />
'''Serial must be incremented manually before restarting named every time you change a resource record for the zone.''' If you forget to do it slaves will not re-transfer the zone: they only do it if the serial is greater than that of the last time they transferred the zone.<br />
<br />
=== 2. Configuring master server ===<br />
Add your zone to {{ic|/etc/named.conf}}:<br />
zone "domain.tld" IN {<br />
type master;<br />
file "domain.tld.zone";<br />
allow-update { none; };<br />
notify no;<br />
};<br />
<br />
Start/restart the daemon and you are done.<br />
systemctl enable named.service<br />
systemctl start named.service<br />
<br />
or reload the configuration files <br />
<br />
systemctl reload named<br />
<br />
The latter option will keep your nameserver available while still allowing the configuration change.<br />
<br />
=== 3. Setting this to be your default DNS server ===<br />
<br />
If you are running your own DNS server, you might as well use it for all DNS lookups. This will require the ability to do ''recursive'' lookups. In order to prevent [https://www.us-cert.gov/ncas/alerts/TA13-088A DNS Amplification Attacks], recursion is turned off by default for most resolvers. The default Arch {{ic|/etc/named.conf}} file allows for recursion only on the loopback interface:<br />
<br />
allow-recursion { 127.0.0.1; };<br />
<br />
So to facilitate general DNS lookups from your host, your {{ic|/etc/resolv.conf}} file must include this line:<br />
<br />
nameserver 127.0.0.1<br />
<br />
Since {{ic|/etc/resolv.conf}} is a generated file, edit {{ic|/etc/resolvconf.conf}} and uncomment the<br />
# name_servers=127.0.0.1<br />
line. {{ic|/etc/resolvconf.conf}} will consequently be set up properly on subsequent reboots.<br />
<br />
If you want to provide name service for your local network; e.g. 192.168.0, you must add the appropriate range of IP addresses to {{ic|/etc/named.conf}}:<br />
<br />
allow-recursion { 192.168.0.0/24; 127.0.0.1; };<br />
<br />
Enable and start the service:<br />
systemctl enable named.service<br />
systemctl start named.service<br />
<br />
== BIND as simple DNS forwarder ==<br />
If you have problems with, for example, VPN connections, they can sometimes be solved by setting-up a forwarding DNS server. This is very simple with BIND. Add these lines to {{ic|/etc/named.conf}} in either the global options section or in a specific zone, and change IP address according to your setup.<br />
<br />
options {<br />
listen-on { 192.168.66.1; };<br />
forwarders { 8.8.8.8; 8.8.4.4; };<br />
};<br />
<br />
Don't forget to restart the service!<br />
systemctl restart named.service<br />
<br />
== Running BIND in a chrooted environment ==<br />
Running in a [[chroot]] environment is not required but improves security. See [[BIND (chroot)]] for how to do this.<br />
<br />
== Configuring BIND to serve DNSSEC signed zones ==<br />
See [[DNSSEC#BIND (serving signed DNS zones)]]<br />
<br />
== Automatically listen on new interfaces ==<br />
<br />
By default, BIND scans for new interfaces, and stops listening on interfaces which no longer exist, every hour. You can tune this interval by adding:<br />
interface-interval <rescan-timeout-in-minutes>;<br />
to the {{ic|named.conf}} options section. The maximum value is 28 days (40320 minutes).<br><br />
You can disable this feature by setting its value to 0.<br />
<br />
Then restart the service.<br />
<br />
==See also==<br />
*[[BIND (chroot)]]<br />
<br />
== BIND Resources ==<br />
* [http://www.reedmedia.net/books/bind-dns/ BIND 9 DNS Administration Reference Book]<br />
* [http://shop.oreilly.com/product/9780596100575.do DNS and BIND by Cricket Liu and Paul Albitz]<br />
* [http://www.netwidget.net/books/apress/dns/intro.html Pro DNS and BIND]<br />
* [http://www.isc.org/ Internet Systems Consortium, Inc. (ISC)]<br />
* [http://www.menandmice.com/knowledgehub/dnsglossary DNS Glossary]</div>Wolfdogghttps://wiki.archlinux.org/index.php?title=BIND&diff=371139BIND2015-04-25T20:10:41Z<p>Wolfdogg: added service start instructs.</p>
<hr />
<div>[[Category:Domain Name System]]<br />
[[de:BIND]]<br />
[[fr:BIND]]<br />
[[ja:BIND]]<br />
[[zh-CN:BIND]]<br />
Berkeley Internet Name Daemon (BIND) is the reference implementation of the Domain Name System (DNS) protocols.<br />
<br />
== Installation ==<br />
These few steps show you how to install BIND and set it up as a local caching-only server.<br />
<br />
[[pacman|Install]] the {{Pkg|bind}} package which can be found in the [[official repositories]].<br />
<br />
Optionally edit {{ic|/etc/named.conf}} and add this under the options section, to only allow connections from the localhost:<br />
listen-on { 127.0.0.1; };<br />
<br />
Edit {{ic|/etc/resolv.conf}} to use the local DNS server:<br />
nameserver 127.0.0.1<br />
<br />
[[Daemon#Managing daemons|Start]] the '''named''' daemon.<br />
<br />
== A configuration template for running a domain ==<br />
This is a simple tutorial on how to set up a home network DNS server with BIND. In our example we use "domain.tld" as our domain.<br />
<br />
For a more elaborate example see [http://www.howtoforge.com/two_in_one_dns_bind9_views Two-in-one DNS server with BIND9].<br />
<br />
=== 1. Creating a zonefile ===<br />
# nano /var/named/domain.tld.zone<br />
<br />
$TTL 7200<br />
; domain.tld<br />
@ IN SOA ns01.domain.tld. postmaster.domain.tld. (<br />
2007011601 ; Serial<br />
28800 ; Refresh<br />
1800 ; Retry<br />
604800 ; Expire - 1 week<br />
86400 ) ; Minimum<br />
IN NS ns01<br />
IN NS ns02<br />
ns01 IN A 0.0.0.0<br />
ns02 IN A 0.0.0.0<br />
localhost IN A 127.0.0.1<br />
@ IN MX 10 mail<br />
imap IN CNAME mail<br />
smtp IN CNAME mail<br />
@ IN A 0.0.0.0<br />
www IN A 0.0.0.0<br />
mail IN A 0.0.0.0<br />
@ IN TXT "v=spf1 mx"<br />
<br />
$TTL defines the default time-to-live in seconds for all record types. In this example it is 2 hours.<br />
<br />
'''Serial must be incremented manually before restarting named every time you change a resource record for the zone.''' If you forget to do it slaves will not re-transfer the zone: they only do it if the serial is greater than that of the last time they transferred the zone.<br />
<br />
=== 2. Configuring master server ===<br />
Add your zone to {{ic|/etc/named.conf}}:<br />
zone "domain.tld" IN {<br />
type master;<br />
file "domain.tld.zone";<br />
allow-update { none; };<br />
notify no;<br />
};<br />
<br />
Restart the daemon ({{ic|systemctl restart named}}) or reload the configuration files ({{ic|systemctl reload named}}) and you are done. The latter option will keep your nameserver available while still allowing the configuration change.<br />
<br />
=== 3. Setting this to be your default DNS server ===<br />
<br />
If you are running your own DNS server, you might as well use it for all DNS lookups. This will require the ability to do ''recursive'' lookups. In order to prevent [https://www.us-cert.gov/ncas/alerts/TA13-088A DNS Amplification Attacks], recursion is turned off by default for most resolvers. The default Arch {{ic|/etc/named.conf}} file allows for recursion only on the loopback interface:<br />
<br />
allow-recursion { 127.0.0.1; };<br />
<br />
So to facilitate general DNS lookups from your host, your {{ic|/etc/resolv.conf}} file must include this line:<br />
<br />
nameserver 127.0.0.1<br />
<br />
Since {{ic|/etc/resolv.conf}} is a generated file, edit {{ic|/etc/resolvconf.conf}} and uncomment the<br />
# name_servers=127.0.0.1<br />
line. {{ic|/etc/resolvconf.conf}} will consequently be set up properly on subsequent reboots.<br />
<br />
If you want to provide name service for your local network; e.g. 192.168.0, you must add the appropriate range of IP addresses to {{ic|/etc/named.conf}}:<br />
<br />
allow-recursion { 192.168.0.0/24; 127.0.0.1; };<br />
<br />
Enable and start the service:<br />
systemctl enable named.service<br />
systemctl start named.service<br />
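Once named is running, a quick sanity check from the same host might look like the following (the zone name is a placeholder; the second query only succeeds if recursion is allowed for your address):<br />

```
$ dig @127.0.0.1 domain.tld SOA +short
$ dig @127.0.0.1 archlinux.org +short
```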
== BIND as simple DNS forwarder ==<br />
If you have problems with, for example, VPN connections, they can sometimes be solved by setting up a forwarding DNS server. This is very simple with BIND. Add these lines to {{ic|/etc/named.conf}}, in either the global options section or a specific zone, and change the IP addresses according to your setup.<br />
<br />
options {<br />
listen-on { 192.168.66.1; };<br />
forwarders { 8.8.8.8; 8.8.4.4; };<br />
};<br />
<br />
Don't forget to restart the service!<br />
systemctl restart named.service<br />
<br />
== Running BIND in a chrooted environment ==<br />
Running in a [[chroot]] environment is not required but improves security. See [[BIND (chroot)]] for how to do this.<br />
<br />
== Configuring BIND to serve DNSSEC signed zones ==<br />
See [[DNSSEC#BIND (serving signed DNS zones)]]<br />
<br />
== Automatically listen on new interfaces ==<br />
<br />
By default, BIND rescans for new interfaces, and stops listening on interfaces which no longer exist, every hour. You can tune this interval by adding the<br />
interface-interval <rescan-timeout-in-minutes>;<br />
parameter to the {{ic|named.conf}} options section. The maximum value is 28 days (40320 minutes). <br><br />
You can disable this feature by setting the value to 0.<br />
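For example, to rescan every two hours instead of every hour (the value is illustrative):<br />

```
options {
    // ...existing options...
    interface-interval 120;  // rescan interfaces every 120 minutes; 0 disables rescanning
};
```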
<br />
Then restart the service.<br />
<br />
==See also==<br />
*[[BIND (chroot)]]<br />
<br />
== BIND Resources ==<br />
* [http://www.reedmedia.net/books/bind-dns/ BIND 9 DNS Administration Reference Book]<br />
* [http://shop.oreilly.com/product/9780596100575.do DNS and BIND by Cricket Liu and Paul Albitz]<br />
* [http://www.netwidget.net/books/apress/dns/intro.html Pro DNS and BIND]<br />
* [http://www.isc.org/ Internet Systems Consortium, Inc. (ISC)]<br />
* [http://www.menandmice.com/knowledgehub/dnsglossary DNS Glossary]</div>Wolfdogg
https://wiki.archlinux.org/index.php?title=User_talk:Wolfdogg&diff=370363 User talk:Wolfdogg 2015-04-20T17:58:46Z<p>Wolfdogg: /* samba specific setup */ adding inst. for samba ldap</p>
<hr />
<div>= Samba from OpenLDAP on Arch = <br />
== samba specific setup ==<br />
<br />
Download smbldap-tools from the AUR: https://aur.archlinux.org/packages/sm/smbldap-tools/smbldap-tools.tar.gz<br />
Run makepkg and install any dependencies you need to complete the process.<br />
<br />
= Jenkins =<br />
The conf file is located at /etc/conf.d/jenkins; open this file and look it over.<br />
JAVA=/usr/bin/java<br />
JAVA_ARGS=-Xmx512m<br />
JAVA_OPTS=<br />
JENKINS_USER=jenkins<br />
JENKINS_HOME=/var/lib/jenkins<br />
JENKINS_WAR=/usr/share/java/jenkins/jenkins.war<br />
JENKINS_WEBROOT=--webroot=/var/cache/jenkins<br />
JENKINS_PORT=--httpPort=8090<br />
JENKINS_AJPPORT=--ajp13Port=-1<br />
JENKINS_OPTS=<br />
JENKINS_COMMAND_LINE="$JAVA $JAVA_ARGS $JAVA_OPTS -jar $JENKINS_WAR $JENKINS_WEBROOT $JENKINS_PORT $JENKINS_AJPPORT $JENKINS_OPTS"<br />
Notice the location of the war file. Change to the directory containing it, then run Jenkins from there.<br />
<br />
cd /usr/share/java/jenkins/<br />
java -jar jenkins.war<br />
<br />
You can now log into Jenkins (launched by hand like this, it uses its default port 8080; the service would use the port from JENKINS_PORT, 8090 above)<br />
http://localhost:8080<br />
<br />
=== create an automated script to start jenkins ===<br />
cd /usr/local/bin # (or another directory already in your PATH; you can run $ echo $PATH to find this info)<br />
open a new file<br />
vim startjenkins<br />
add the following to the file<br />
#!/bin/bash<br />
echo<br />
echo starting jenkins now<br />
java -jar /usr/share/java/jenkins/jenkins.war<br />
save and quit vim<br />
:wq<br />
make it executable and owned by you<br />
chown <yourusername>:users startjenkins<br />
chmod 755 startjenkins<br />
run it this way now<br />
$ startjenkins<br />
<br />
= Automount Samba Shares on boot=<br />
Using systemd, make an entry in fstab for the network drive:<br />
* use the credentials option<br />
* use the noauto option<br />
* use the nofail option<br />
* use the x-systemd.automount option<br />
* set a timeout<br />
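Putting those options together, a single fstab line might look like this (server, share, mount point and credentials path are placeholders; x-systemd.idle-timeout unmounts the share after 60 idle seconds):<br />

```
//server/share  /mnt/share  cifs  credentials=/etc/samba/credentials,noauto,nofail,x-systemd.automount,x-systemd.idle-timeout=60,_netdev  0  0
```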
<br />
= New ZFS Setup =<br />
[root@falcon wolfdogg]# zpool create -f -m /san san ata-ST2000DM001-9YN164_W1E07E0G ata-WDC_WD20EZRX-00DC0B0_WD-WMC1T3458346 ata-WDC_WD20EZRX-19D8PB0_WD-WCC4M0428332<br />
[root@falcon wolfdogg]# zpool list<br />
NAME   SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT<br />
san   5.44T   604K  5.44T     0%  1.00x  ONLINE  -<br />
[root@falcon wolfdogg]# zfs list<br />
NAME   USED  AVAIL  REFER  MOUNTPOINT<br />
san    544K  5.35T   136K  /san<br />
[root@falcon wolfdogg]# zpool status<br />
  pool: san<br />
 state: ONLINE<br />
  scan: none requested<br />
config:<br />
        NAME                                          STATE     READ WRITE CKSUM<br />
        san                                           ONLINE       0     0     0<br />
          ata-ST2000DM001-9YN164_W1E07E0G             ONLINE       0     0     0<br />
          ata-WDC_WD20EZRX-00DC0B0_WD-WMC1T3458346    ONLINE       0     0     0<br />
          ata-WDC_WD20EZRX-19D8PB0_WD-WCC4M0428332    ONLINE       0     0     0<br />
errors: No known data errors<br />
<br />
<br />
= Git three stage web deployment =<br />
<br />
I'm now using Git to handle my web repos. <br />
<br />
Currently my server environment is not hosted live. The intention is to have a secure development environment containing the sandbox and stage environments. Once I fully explore this setup I will add the third step, pushing stage to a live shared host using the same methods described here. <br />
<br />
<br />
This setup uses /srv/http/sandbox for the development directory. If you insist on developing out of /home/<username>/public_html then you can either substitute every mention of sandbox in this wiki with it, or turn your public_html folder into a symlink to /srv/http/sandbox after completing these exercises.<br />
<br />
<br />
The following features detail the environment we are setting up using this wiki:<br />
* Central location for git repositories (--separate-git-dir) /srv/http/repos/git<br />
* Development sandbox /srv/http/sandbox<br />
* Staging ground for final pre-live q/a testing /srv/http/stage<br />
<br />
<br />
Workflow:<br />
* clone your repo into your sandbox area, work on your files<br />
* commit changes from the sandbox when each implementation or task is complete, then push those changes to the central sandbox repo (/srv/http/repos/git/website.com), where you will then be able to view them on dev (http://dev.website.com)<br />
* At some point, when ready to stage, a "push" from the central sandbox repo to a separate central stage repo (/srv/http/repos/git/website.com.stage) will be made <br />
* Hooks located in the central stage repo will auto-deploy to stage (/srv/http/stage/website) when the push is run. (This deploys only the web files, and NOT the hidden .git repo folder, which would otherwise make your site vulnerable; hence the reason for this wiki. Otherwise it would be as easy as going to stage and cloning, for example)<br />
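The whole flow can be rehearsed end-to-end in throwaway directories before touching /srv/http. This is a sketch, assuming git is installed; $BASE, the "site" name and every path below are stand-ins for the real /srv/http layout:<br />

```shell
#!/bin/sh
# Rehearse the sandbox -> stage flow in a temporary directory.
set -e
BASE=$(mktemp -d)
mkdir -p "$BASE/repos" "$BASE/sandbox/site" "$BASE/stage/site"

# sandbox worktree with its repo stored centrally (--separate-git-dir)
cd "$BASE/sandbox/site"
git init -q --separate-git-dir="$BASE/repos/site"
echo 'hello' > index.php
git add -A
git -c user.email=dev@example.com -c user.name=dev commit -qm 'initial import'

# bare stage repo whose post-receive hook checks the files out into stage
git init -q --bare "$BASE/repos/site.stage"
cat > "$BASE/repos/site.stage/hooks/post-receive" <<EOF
#!/bin/sh
GIT_WORK_TREE=$BASE/stage/site git checkout -f master
EOF
chmod +x "$BASE/repos/site.stage/hooks/post-receive"

# push from the sandbox worktree to stage; the hook deploys the web files only
git remote add stage "$BASE/repos/site.stage"
git push -q stage +HEAD:refs/heads/master

ls "$BASE/stage/site"    # the deployed index.php, with no .git directory
```

Pushing `+HEAD:refs/heads/master` sidesteps the default-branch-name question on newer git; the hook then checks master out into the stage directory, exactly as in the steps below.<br />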
<br />
== Apache adjustments ==<br />
Set up a new system for the dev and stage environments.<br />
<br />
=== Edit vhost ===<br />
<br />
Add or edit the /etc/httpd/conf/extra/httpd-vhosts.conf file to something similar<br />
<br />
# default route<br />
<VirtualHost *:80><br />
ServerName <hostname><br />
ServerAlias <hostname><br />
VirtualDocumentRoot "/srv/http"<br />
</VirtualHost><br />
<br />
#sandbox (dev) root route<br />
<Virtualhost *:80><br />
ServerName dev<br />
DocumentRoot "/srv/http/sandbox"<br />
</Virtualhost><br />
<br />
#sandbox (dev) subdomains route<br />
# http://httpd.apache.org/docs/2.0/mod/mod_vhost_alias.html#interpol<br />
<VirtualHost *:80><br />
ServerName dev.sub.com<br />
ServerAlias dev.*<br />
VirtualDocumentRoot "/srv/http/sandbox/%2+"<br />
</VirtualHost><br />
<br />
#stage root route<br />
<Virtualhost *:80><br />
ServerName stage<br />
DocumentRoot "/srv/http/stage"<br />
</VirtualHost><br />
<br />
#stage subdomains route<br />
# http://httpd.apache.org/docs/2.0/mod/mod_vhost_alias.html#interpol<br />
<VirtualHost *:80><br />
ServerName stage.sub.com<br />
ServerAlias stage.*<br />
VirtualDocumentRoot "/srv/http/stage/%2+"<br />
</VirtualHost><br />
<br />
restart apache <br />
systemctl restart httpd<br />
#or if you're still using initscripts<br />
rc.d restart httpd<br />
<br />
=== Create dirs === <br />
<br />
touch /srv/http/index.php<br />
mkdir /srv/http/sandbox<br />
touch /srv/http/sandbox/index.php<br />
mkdir /srv/http/stage<br />
touch /srv/http/stage/index.php<br />
<br />
Note: in order for this to work, you will need to add your websites with the same dir name as they will have when deployed, i.e. use dots where dots exist; e.g. if the website is google.com the dirname must be google.com, so your dev site will end up being /srv/http/sandbox/google.com and the vhost will be able to map to it using http://dev.google.com<br />
<br />
=== Edit hosts files ===<br />
<br />
Assuming your development machine is not the same as the server, you need to make sure to add an entry for each site, during creation of the site, to your hosts file. <br />
<br />
I will use server IP 192.168.1.99 and hostname 'myserver' as an example, and google.com as our website URL example; use your own hostnames and website URLs in place of them. <br />
<br />
* linux <br />
add the following lines to /etc/hosts<br />
192.168.1.99 myserver<br />
192.168.1.99 dev myserver<br />
192.168.1.99 stage myserver <br />
192.168.1.99 dev.google.com myserver<br />
192.168.1.99 stage.google.com myserver<br />
<br />
* windows<br />
<br />
navigate to the following file logged in as admin, <br />
%WINDIR%/system32/drivers/etc/hosts <br />
<br />
or run cmd as administrator, <br />
paste the following line there<br />
notepad %WINDIR%/system32/drivers/etc/hosts <br />
<br />
add your sites to the hosts file<br />
192.168.1.99 dev<br />
192.168.1.99 stage<br />
192.168.1.99 dev.google.com<br />
192.168.1.99 stage.google.com<br />
#dev.anothersite.com<br />
#stage.anothersite.com<br />
#etc...<br />
<br />
* mac, you get the idea...<br />
<br />
== Git usage ==<br />
Install git, and make the following changes<br />
<br />
=== Set up Git ===<br />
<br />
* Create a central location for the git repos. (Note: I'll use a group called 'webteam' as an example. This group separates what apache needs permission to from what a web developer needs permission to; each web developer is a member of this group.) <br />
$ su<br />
$ mkdir -p /srv/http/repos/git<br />
$ chown http:webteam /srv/http/* -R<br />
$ chmod 775 /srv/http/* -R<br />
<br />
*sandbox push global configs<br />
you will probably get an error stating that you can't push to the branch that you cloned out of; more research needs to be done on this, but it seems pretty straightforward. Set the following config variable to squelch this.<br />
$ git config --global receive.denyCurrentBranch warn #you can set it to ignore instead of warn once you're sure it doesn't cause any problems<br />
<br />
* stage push global configs<br />
Git version 2.0+ will start using a new push.default standard called "simple". "simple" means the push will be refused if the upstream branch's name is different from the local one, which in this case it is (website.com vs. stage.website.com). If your version is less than 2.0, which to date isn't out yet, then we can make it forward compatible. Set the following config variable to squelch this.<br />
$ git config --global push.default matching<br />
<br />
=== Add or enroll a new site ===<br />
Once your Git environment is set up, all you have to do is start from this point each time you want to create a new website. <br />
<br />
* add a website (or see below if you are enrolling an existing site)<br />
$ mkdir -p /srv/http/sandbox/<website.com> # -p in case you don't have a sandbox dir yet<br />
*alternatively enroll an existing website<br />
To enroll an existing website, move it to the /srv/http/sandbox directory, being sure to rename it to match the actual address it will be accessible via live. (Naming is important here; e.g. if your folder is named website-com, mv it to website.com. The .com portion is not mandatory if no domain name is intended for it; just know that you will access it in your browser exactly the way you name it, unless you muck around with the vhosts file to customize your own mapping strategy.)<br />
$ mv /home/<username>/public_html/website-com /srv/http/sandbox/website.com<br />
<br />
*continue here once you relocated your website or added a new website to the sandbox<br />
$ cd /srv/http/sandbox/website.com<br />
$ git init --separate-git-dir=/srv/http/repos/git/<website><br />
$ git add -A<br />
$ git commit -m 'comment here'<br />
<br />
If you're going to actively develop this site, then continue below to stageify it. If you're just archiving an old site for later development, you can stop here, delete the site folder (e.g. /srv/http/sandbox/website.com), then "recreate the worktree", as detailed below, from the repo and stageify at a later time.<br />
<br />
=== Stageify ===<br />
* create an empty stage repo for this site (start here if you already have an existing site and repo; substitute sandbox with your site's location)<br />
$ mkdir -p /srv/http/repos/git/website.com.stage<br />
$ cd /srv/http/repos/git/website.com.stage<br />
$ git init --bare<br />
<br />
* make sites stage dir and create the hook (while still in the stage repo)<br />
$ mkdir /srv/http/stage/website.com<br />
$ cat > hooks/post-receive<br />
#!/bin/sh<br />
GIT_WORK_TREE=/srv/http/stage/website.com git checkout -f<br />
<br />
#press ctrl +d to save and exit cat<br />
<br />
$ chmod +x hooks/post-receive<br />
<br />
* Define the stage mirror and create a master branch tree (you MUST run both of the following commands from the sandbox repo; neither the sandbox checkout nor the stage repo will suffice)<br />
$ cd /srv/http/repos/git/website.com #note, it is important that you're in this exact path, as opposed to website.com.stage (if your sandbox .git dir is somewhere else then cd INTO it)<br />
$ git remote add stage.website.com /srv/http/repos/git/website.com.stage #(note, stage.website.com is just a name; it is the remote name we will use when we push to stage)<br />
$ git push stage.website.com +master:refs/heads/master<br />
<br />
* from here forward, the push to stage can simply be run from within your root code base (i.e. you don't need to be in the .git dir anymore)<br />
$ git push stage.website.com<br />
See workflow below for more examples on this.<br />
<br />
* In case of an upstream branch error when you run the push, you can do the following.<br />
Since we have globally configured push.default to matching, an upstream branch error may occur. To squelch this error we need to indicate our upstream branch. Since we named our remote "stage.website.com", we issue the following command from our sandbox repo (/srv/http/repos/git/website.com)<br />
$ cd /srv/http/repos/git/website.com<br />
$ git push --set-upstream stage.website.com master<br />
<br />
== General work flow ==<br />
<br />
Once you have your environment all setup, you can follow a general work flow pattern in order to take advantage of the full functionality.<br />
<br />
=== New development process ===<br />
<br />
* If you plan to just archive your site, you may want to delete it from the sandbox once it is archived to the git repository using the steps above, since it is nicely compressed and packed away in your repo dir. If this is the case then go ahead and delete your sandbox web folder at this point so we can begin the flow from the furthest possible point. (Note: only do this step if you have your website committed to the git repo as outlined above, i.e. your .git folder is NOT located inside your code base dir), '''otherwise continue "Work on files" below'''.<br />
$ cd /srv/http/sandbox<br />
$ rm -rf website.com<br />
<br />
You might want to delete your worktree after putting a project on the back-burner, when you are not planning on working on it for long periods of time. Or you may have a crowded project directory and want to get your sanity back. If you deleted your worktree previously, or are restoring an old site from a repo for whatever reason, and your project is stored only inside the compressed repo, then you will need to re-create the worktree from the repo before working on your files. The following methods will guide you through doing this. <br />
<br />
==== recreate worktree ====<br />
<br />
Note, you won't need to do these steps if you still have your worktree in place and never deleted it. <br />
<br />
===== method 1 ===== <br />
(preferred method)<br />
<br />
One way to get the worktree back is to extract the worktree out of the repo, then attach it to the repo's branch<br />
* make the new site folder inside your sandbox<br />
$ mkdir /srv/http/sandbox/website.com<br />
<br />
* switch to the new worktree directory and ''init'' for the first time<br />
$ cd /srv/http/sandbox/website.com<br />
$ git --git-dir=/srv/http/repos/git/website.com --work-tree=. init <br />
$ echo "gitdir: /srv/http/repos/git/website.com" > .git<br />
alternatively you can run those last two commands on one line if it's easier for you<br />
$ git --git-dir=/srv/http/repos/git/website.com --work-tree=. init && echo "gitdir: /srv/http/repos/git/website.com" > .git<br />
<br />
* check the branch you're on before pulling down the files<br />
$ git branch<br />
<br />
* switch to the repo and extract the files to the new site directory you just created. Make sure you're checking out the branch you intend; normally that will be "master" unless you have branched the repo. Replace the word "master" in the following command if you are working on a different branch.<br />
$ cd /srv/http/repos/git/website.com<br />
$ git archive master | tar -x -C /srv/http/sandbox/website.com<br />
note, if you get an error in the above step <br />
could not switch to '/some/dir': No such file or directory<br />
then you may be trying to get the files back to a directory other than what was used when the repo was made, ''otherwise continue below''. <br />
<br />
If you did receive this error then you need to edit the config file in the repo to point to the new directory. To do this, open the ./config file in your favorite editor and edit the worktree path to point to the new site directory that you created.<br />
<br />
===== method 2 (alt method) ===== <br />
Another way to get the worktree back is to clone it out from the main repo, then delete the .git dir, then create the .git gitdir file. This way just seems a bit messy because one needs to delete the .git dir after unnecessarily creating it, but as long as you replace it with the proper gitdir file it works fine. <br />
<br />
* clone the site to sandbox<br />
$ cd /srv/http/sandbox<br />
$ git clone /srv/http/repos/git/website.com<br />
* delete the hidden .git folder that was created inside this worktree<br />
$ cd website.com<br />
$ rm -rf .git<br />
* recreate the .git gitdir file <br />
$ echo "gitdir: /srv/http/repos/git/website.com" > .git<br />
* checkout master to get things back on track (is this step even necessary?)<br />
$ git checkout master<br />
<br />
==== Work on files ====<br />
<br />
* Work on files<br />
$ cd website.com <br />
$ git status<br />
edit your files.....<br />
<br />
==== commit the changes ====<br />
<br />
$ git status #notice files are not staged for commit<br />
$ git add -A # or just individual files (e.g. git add file1.php file2.php) etc..<br />
$ git status # make sure the files you want to commit are listed to be committed<br />
$ git commit -m 'this is what i did to these files'<br />
$ git status # should come back clean<br />
$ git log # you can see your commit comments here<br />
<br />
visit http://dev.website.com to test the new functionality that was just committed<br />
<br />
If all looks well, you need to push your changes to the sandbox master branch (repo) that you cloned from. This accomplishes two things:<br />
<br />
* liberates your sandbox clone to be deleted at will<br />
* updates branch master so the files are waiting to be pushed(deployed) to stage<br />
$ git push<br />
<br />
=== Deploy to stage ===<br />
Once your separate tasks have been individually committed, and you have thoroughly tested them in the sandbox environment, you can now deploy to the stage environment for final q/a before going live. <br />
<br />
*deploy to stage<br />
$ cd /srv/http/repos/git/website.com<br />
$ git push stage.website.com<br />
<br />
now visit http://stage.website.com to do your final q/a<br />
<br />
If any mistakes are spotted on stage, i.e. if everything is not working as expected, then your stage environment just paid for itself. Follow the steps below for damage control to revert the changes on stage <br />
<br />
=== Damage control ===<br />
<br />
If mistakes are spotted on stage, we need to revert our changes<br />
<br />
* @todo: commands for reverting<br />
* @todo: workflow<br />
<br />
<br />
Now that stage has been reset to the last known good state, go back to the sandbox and edit<br />
<br />
== Why separate repository from web folder? ==<br />
<br />
For me, it just makes sense. The beauty of it is that you can bulk delete all or any of your web files in the sandbox area when they start to get in the way. When you're ready to work on that site again, you can just "recreate the worktree".<br />
<br />
It may also help to have the repos outside of the web folders if you run any type of incremental backups, e.g. rsnapshot or rsync. You can choose to back up only the repos, which are compressed, or to back up everything except sandbox, for example. But the fact that you can delete your sandbox websites without losing the repo is the main reason.<br />
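With rsnapshot, for instance, this split keeps the backup rules trivial (an illustrative fragment; paths follow the layout above, and rsnapshot.conf fields must be separated by tabs, not spaces):<br />

```
# /etc/rsnapshot.conf -- back up the compact repos, skip the throwaway checkouts
backup	/srv/http/repos/	localhost/
exclude	/srv/http/sandbox/
```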
<br />
== Credits ==<br />
Thanks to <br />
* Abhijit Menon-Sen at http://toroid.org/ams/git-website-howto which is where all my google searches finally landed me and paid off as to what I was trying to accomplish.<br />
<br />
* niks and Charles Bailey at http://stackoverflow.com/questions/505467/can-i-store-the-git-folder-outside-the-files-i-want-tracked for getting me over the hump of getting the worktree properly out of a repo archive<br />
<br />
= ZFS-FUSE Implementation =<br />
<br />
I'm trying to utilize the ZFS filesystem to create a flexible array of different sized drives that will serve as a data-backup volume shared across a samba network. ZFS can be taken advantage of here because of its flexibility with storage pools. ZFS also has some of the best features catering to data integrity. One drawback is that some of this comes at the expense of file transfer rates. There are some things you can do to offset this, however, such as adding a small fast device, even a USB stick or just about any other type of media, as a cache device. Speed will not be a consideration on this array since its main role is data integrity and safety. <br />
<br />
The test array initially used here was an array of 3 drives: one 2TB and two 500GB. LVM was also explored to find ways to span drives into one large volume. <br />
<br />
== Installation ==<br />
<br />
1) To get to step one on the ZFS-FUSE page https://wiki.archlinux.org/index.php/ZFS_on_FUSE I had to do a few things, namely install yaourt, which was not necessarily straightforward. I will be vague in these instructions since I have already completed these steps, so they are coming from memory. <br />
<br />
I went to the AUR and copy-pasted the PKGBUILD contents into a new PKGBUILD file in my packages directory /home/wolfdogg/packages/packages/yaourt/PKGBUILD,<br />
then ran makepkg -s from that folder. After battling a succession of errors (invalid signatures in pacman, needed to re-init pacman-key, then had to create a new group called sudo, add my user to it, and edit the /etc/sudoers file accordingly, which seemed like the best way to get my user into sudoers and eliminate the risk of accidentally doing something as root during the makepkg process),<br />
I finally ran pacman -U yaourt.<br />
<br />
2) Then I followed the steps in the wiki at https://wiki.archlinux.org/index.php/ZFS_on_FUSE . For me, the NFS portion got a bit confusing: https://wiki.archlinux.org/index.php/ZFS_on_FUSE#NFS_shares . In that section of the wiki ({{ic|zfs set sharenfs&#61;}}) I got a bit side-tracked, and will come back to it at a later time. <br />
<br />
I found the manual for Solaris ZFS here [http://docs.huihoo.com/opensolaris/solaris-zfs-administration-guide/html/] and it looks like it will provide all the info needed, especially this section of it [http://docs.huihoo.com/opensolaris/solaris-zfs-administration-guide/html/ch04s03.html]<br />
<br />
Now would be a good time to read the manuals<br />
#man zpool<br />
#man zfs<br />
<br />
== Inventory drives ==<br />
<br />
*Hook up any drives that you want to add to your new filesystem. <br />
<br />
*Take an inventory of your current drives and partitions. Note any existing arrays and partitions. See md0 (an mdadm array), zfs-kstat/pool (a zfs pool), and /pool/backup (a dataset) in the example below.<br />
<br />
*View the drive information<br />
# blkid -o list -c /dev/null<br />
<br />
*List the partitions, and inspect your mounts<br />
# lsblk -f<br />
<br />
sdb 8:16 0 1.8T 0 disk<br />
└─md0 9:0 0 2.7T 0 linear<br />
sdc 8:32 0 465.8G 0 disk<br />
└─md0 9:0 0 2.7T 0 linear<br />
<br />
*Use one of the following for each disk if you want to view partition tables and/or reformat your drives:<br />
*If you have a MBR partition table <br />
#fdisk -l<br />
*If you have GPT partition table<br />
#gdisk -l /dev/sdb<br />
<br />
*View the mounts<br />
# findmnt<br />
<br />
TARGET SOURCE FSTYPE OPTIONS<br />
/zfs-kstat kstat fuse rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other<br />
└─/pool pool fuse.zfs rw,relatime,user_id=0,group_id=0,default_permissions,allow_other<br />
└─/pool/backup pool/backup fuse.zfs rw,relatime,user_id=0,group_id=0,default_permissions,allow_other<br />
<br />
== Prepare Drives ==<br />
If you already know what you're doing, you can use cgdisk; here is a nice cgdisk reference [http://www.rodsbooks.com/gdisk/cgdisk-walkthrough.html]. Otherwise, follow the gdisk example walkthrough below.<br />
# gdisk<br />
Type device or filename<br />
We will use /dev/sdb for this example<br />
# /dev/sdb<br />
Command:<br />
o will create a new partition table<br />
# o<br />
Proceed? <br />
This option will delete all partitions and create a new protective MBR<br />
# y<br />
Command:<br />
n will create a new partition<br />
# n<br />
Partition number<br />
We should use 1 if it's the first<br />
# 1<br />
Press enter for default first sector<br />
Press enter for default last sector<br />
Hexcode: <br />
# L<br />
L shows all codes; choose a filesystem hex code and type it in<br />
We will go with bf00 (Solaris root) for this example<br />
# bf00<br />
Now let's write the partition to disk<br />
Command:<br />
# w<br />
Final checks complete<br />
Do you want to proceed<br />
# y<br />
<br />
Now your disk has a new clean partition table<br />
<br />
Here are some steps to follow; so far it's all I have.<br />
<br />
=== Create the Zpool ===<br />
<br />
Create the pool (notice I'm not using raidz here; you may want to if your drives are all the same size)<br />
# zpool create <pool_name> /dev/sdb /dev/sdc /dev/sdd /dev/sde<br />
*Also note, you should create two pools if you have 4 or more drives so that the redundancy that zfs is so well known for will work properly.<br />
<br />
"pool" is the name of the current test pool, and the mount point.<br />
<br />
'''RAIDZ1, RAIDZ2'''<br />
<br />
*To create a RAIDZ you need at least 3 drives of the same size <br />
*'raidz' is an alias for 'raidz1'; or you can use 'raidz2' if you have an extra drive for an extra stripe for extra redundancy.<br />
# zpool create pool raidz /dev/sdb /dev/sdc /dev/sdd<br />
<br />
'''Alternatively, create a raid span.''' Note, no raid type has been specified (disk, file, raidz, etc...)<br />
# zpool create pool /dev/sdb /dev/sdc /dev/sdd<br />
<br />
(see below to create an mdadm linear span instead (jbod)) <br />
<br />
If you do some testing and get stuck with a drive reporting unavailable after you hook it back up, sometimes I have to reboot, which I probably just don't know enough about yet, but some commands that have been helpful:<br />
<br />
Get the list and size of the pool<br />
# zpool list<br />
<br />
To get the status<br />
#zpool status<br />
<br />
=== Create the ZFS Filesystem Hierarchy (dataset) ===<br />
<br />
Let's create a dataset <br />
# zfs create pool/backup<br />
# zfs set mountpoint=/backup pool/backup<br />
<br />
you can create a child file system inside the parent; the child file system automatically inherits the parent's properties <br />
# zfs create pool/backup/<computer_name><br />
<br />
Per the manual, ZFS automatically mounts the file system when the zfs mount -a command is invoked (without editing /etc/vfstab)<br />
<br />
=== List and inspect ===<br />
<br />
List and inspect your new zfs file system<br />
# zfs list<br />
NAME          USED  AVAIL  REFER  MOUNTPOINT<br />
pool          106K  2.68T    21K  /pool<br />
pool/backup    21K  2.68T    21K  /pool/backup<br />
<br />
Now let's look at all drives to see what this looks like. View mount status and disk size<br />
# mount -l<br />
<br />
# df<br />
Filesystem 1K-blocks Used Available Use% Mounted on<br />
rootfs 15087420 2356784 11962328 17% /<br />
dev 1991168 0 1991168 0% /dev<br />
run 2027072 316 2026756 1% /run<br />
/dev/sda3 15087420 2356784 11962328 17% /<br />
shm 2027072 0 2027072 0% /dev/shm<br />
tmpfs 2027072 28 2027044 1% /tmp<br />
/dev/sda4 95953460 84172112 6904016 93% /home<br />
/dev/sda1 99550 19445 74886 21% /boot<br />
pool 2873622443 23 2873622420 1% /pool<br />
pool/backup 2873622441 21 2873622420 1% /pool/backup<br />
<br />
To learn more about your configuration options run the following command<br />
# zfs get all | less<br />
<br />
=== Destroy Array and Pools ===<br />
<br />
'''WARNING - Backup all your data before beginning'''<br />
<br />
If you need to break down your zpool or zfs datasets follow below. <br />
<br />
Deactivate the array using mdadm RAID manager (unmount)<br />
# mdadm -S /dev/md0<br />
<br />
I chose to delete the line in /etc/mdadm.conf as well. Not sure if it was needed, but it seemed that there were still bits and pieces lying around that needed removing<br />
# vim /etc/mdadm.conf<br />
<br />
To destroy a dataset in the pool <br />
#zfs destroy <filesystemvolume><br />
<br />
Or you can destroy the entire pool<br />
#zpool destroy <pool><br />
<br />
If you can't totally destroy the pool, or are trying to create a new pool with the same name, it's possible to trace clues about what process is using it with <br />
#fuser /pool -a <br />
<br />
Then run top to find that process PID<br />
#top<br />
<br />
Or run lsof<br />
# lsof | grep zfs-fuse | less<br />
Go from there....<br />
<br />
Now move on to [[#Linear_RAID_.28jbod.29_filesystem_using_mdadm_.2F_lvm]] or [[#RAIDZ_Filesystem_Configuration]]<br />
<br />
== Linear RAID (jbod) filesystem using mdadm / lvm ==<br />
<br />
I decided to explore other options until I can figure out whether the ZFS span is set up properly using ZFS. <br />
<br />
=== Create the array ===<br />
<br />
Destroy the existing zfs pool and break down the array where applicable. Use this as a guide: <br />
[[#Destroy_Array_and_Pools]]<br />
<br />
Use mdadm to create the span<br />
# mdadm --create /dev/md0 --level=linear --raid-devices=3 /dev/sd[bcd]<br />
<br />
Get the status detail using mdadm <br />
<br />
# mdadm --misc --detail /dev/md0<br />
/dev/md0:<br />
Version : 1.2<br />
Creation Time : Mon Jun 25 13:52:31 2012<br />
Raid Level : linear<br />
Array Size : 2930286671 (2794.54 GiB 3000.61 GB)<br />
Raid Devices : 3<br />
Total Devices : 3<br />
Persistence : Superblock is persistent<br />
Update Time : Mon Jun 25 13:52:31 2012<br />
State : clean<br />
Active Devices : 3<br />
Working Devices : 3<br />
Failed Devices : 0<br />
Spare Devices : 0<br />
Rounding : 0K<br />
Name : falcon:0 (local to host falcon)<br />
UUID : 141f34d0:2b2c0973:4a6f070b:17b772ec<br />
Events : 0<br />
Number Major Minor RaidDevice State<br />
0 8 16 0 active sync /dev/sdb<br />
1 8 32 1 active sync /dev/sdc<br />
2 8 48 2 active sync /dev/sdd<br />
<br />
=== Create volume and groups ===<br />
<br />
Create the physical volume<br />
# pvcreate /dev/md0<br />
<br />
Display the volume<br />
# pvdisplay<br />
"/dev/md0" is a new physical volume of "2.73 TiB"<br />
--- NEW Physical volume ---<br />
PV Name /dev/md0<br />
VG Name<br />
PV Size 2.73 TiB<br />
Allocatable NO<br />
PE Size 0<br />
Total PE 0<br />
Free PE 0<br />
Allocated PE 0<br />
PV UUID 1Hr3ay-L0mZ-33GD-4ZeM-EOtW-lkAz-M4REMJ<br />
<br />
Create the volume group<br />
# vgcreate VolGroupArray /dev/md0<br />
Volume group "VolGroupArray" successfully created<br />
<br />
Display the volume groups<br />
# vgdisplay<br />
--- Volume group ---<br />
VG Name VolGroupArray<br />
System ID<br />
Format lvm2<br />
Metadata Areas 1<br />
Metadata Sequence No 1<br />
VG Access read/write<br />
VG Status resizable<br />
MAX LV 0<br />
Cur LV 0<br />
Open LV 0<br />
Max PV 0<br />
Cur PV 1<br />
Act PV 1<br />
VG Size 2.73 TiB<br />
PE Size 4.00 MiB<br />
Total PE 715401<br />
Alloc PE / Size 0 / 0<br />
Free PE / Size 715401 / 2.73 TiB<br />
VG UUID 2OAWpT-fO50-A7cW-jUjd-meQh-sWQa-7b55ZO<br />
<br />
Create the logical volume. I used 2.725T because 2.73T failed (2.73T requests slightly more space than the free extents provide)<br />
# lvcreate VolGroupArray -L 2.725T -n backup<br />
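If guessing a terabyte size keeps failing, LVM can allocate by extents instead. A minimal alternative sketch; the -l 100%FREE form is standard LVM syntax, and the names match this guide:<br />

```shell
# allocate every free physical extent instead of guessing a size in T
lvcreate -l 100%FREE -n backup VolGroupArray
```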
<br />
Display volume<br />
# lvdisplay<br />
--- Logical volume ---<br />
LV Path /dev/VolGroupArray/backup<br />
LV Name backup<br />
VG Name VolGroupArray<br />
LV UUID Evy09z-i3TS-PgaE-SrpD-C2Lq-snb2-XhvaVW<br />
LV Write Access read/write<br />
LV Creation host, time falcon, 2012-06-25 14:33:00 -0700<br />
LV Status available<br />
# open 0<br />
LV Size 2.73 TiB<br />
Current LE 714343<br />
Segments 1<br />
Allocation inherit<br />
Read ahead sectors auto<br />
- currently set to 256<br />
Block device 253:0<br />
<br />
<br />
Check the status <br />
# cat /proc/mdstat<br />
Personalities : [linear]<br />
md0 : active linear sdd[2] sdc[1] sdb[0]<br />
2930286671 blocks super 1.2 0k rounding<br />
unused devices: <none><br />
<br />
''@TODO Need advice here - this portion needs testing; I got stuck here the last time I tried this.''<br />
<br />
== Recreating a ZFS storage pool ==<br />
<br />
From my understanding, you can add a drive to a vdev (virtual device, or array set) without losing data if it's a linear span, but you can't remove a drive from the vdev without first copying the files off, destroying the pool, and recreating it. This section walks you through completely tearing down the array, freeing up the drives, and recreating either an mdadm JBOD linear span or a RAIDZ.<br />
<br />
'''WARNING - Backup all your data before beginning'''<br />
<br />
=== List the pools ===<br />
<br />
Get a list of the current pools<br />
# zpool list<br />
<br />
Check the status<br />
# zpool status<br />
<br />
List the datasets<br />
# zfs list<br />
<br />
== Share mount to network using samba ==<br />
<br />
Configure samba to give a user access to the share (good for network backups from any operating system, including Windows Backup)<br />
*Add the following entry to /etc/samba/smb.conf and restart the samba daemon<br />
[backup]<br />
comment = backup drive<br />
path = /pool/backup<br />
valid users = user1,user2<br />
read only = No<br />
create mask = 0765<br />
wide links = Yes<br />
<br />
#rc.d restart samba<br />
<br />
Modify the permissions on pool/backup so it is accessible over the network; I decided to add 'users' group accessibility<br />
# chown root:users backup<br />
# chmod g+w backup<br />
<br />
Alternatively<br />
# chown root:root backup/<br />
# chmod 755 backup/<br />
# chown root:users backup/<dataset1><br />
# chmod 775 backup/<dataset1><br />
<br />
== ZFS RAID Maintenance ==<br />
<br />
@TODO - this section is not finished<br />
<br />
=== Maintenance ===<br />
<br />
*To place a disk back online (see manual for this)<br />
#zpool online<br />
<br />
*To replace a disk (see manual for this)<br />
#zpool replace<br />
<br />
* If your pool goes down, i.e. one of your drives goes offline and there are not enough drives left to complete replication, you can try the following<br />
- Check whether the drive is initialized; you should see it in the list returned by the following command<br />
# blkid<br />
If it's not in the list:<br />
- Check all cable connections; the drive may not be attached. A reboot may be needed.<br />
- Once you get the drive to appear, run the following commands.<br />
# zpool export <pool><br />
# zpool import <pool><br />
# zpool list<br />
If you get the error message<br />
"cannot import 'pool': one or more devices is currently unavailable. Destroy and re-create the pool from a backup source", try to export and then import again.<br />
@todo more information is needed before suggestions are made at this point.<br />
<br />
== Help Needed ==<br />
<br />
If anybody has any suggestions, please chime in.<br />
<br />
I haven't figured out how to address the size of the array yet. For example, when I hook up only the 2TB with one 500GB as a raidz, it lets me, and the size reports 1.36TB. When I destroyed the pool and rebuilt it using all 3 drives (2TB, 500GB, 500GB), the size still reported 1.36TB.<br />
<br />
--[[User:Wolfdogg|Wolfdogg]] ([[User talk:Wolfdogg|talk]]) 00:18, 25 June 2012 (UTC)<br />
<br />
= volnoti on KDE using alsamixer =<br />
<br />
@todo this script is for alsamixer and KDE; more needs to be added to support other environments<br />
<br />
<br />
Create a script /usr/local/bin/sound.sh. This script will be called by another script placed in Autostart.<br />
<br />
Insert the following contents into the script<br />
#!/bin/bash<br />
<br />
#this script is made for volnoti<br />
<br />
# Configuration<br />
STEP="2" # Anything you like.<br />
UNIT="dB" # dB, %, etc.<br />
<br />
# Set volume<br />
SETVOL="/usr/bin/amixer -qc 0 set Master"<br />
SETHEADPHONE="/usr/bin/amixer -qc 0 set Headphone"<br />
<br />
case "$1" in<br />
"up")<br />
$SETVOL $STEP$UNIT+<br />
;;<br />
"down")<br />
$SETVOL $STEP$UNIT-<br />
;;<br />
"mute")<br />
$SETVOL toggle<br />
;;<br />
esac<br />
<br />
# Get current volume and state<br />
VOLUME=$(amixer get Master | grep 'Mono:' | cut -d ' ' -f 6 | sed -e 's/[^0-9]//g')<br />
STATE=$(amixer get Master | grep 'Mono:' | grep -o "\[off\]")<br />
<br />
# Show volume with volnoti<br />
if [[ -n $STATE ]]; then<br />
volnoti-show -m<br />
else<br />
volnoti-show $VOLUME<br />
# When headphones are in use, mute is handled a bit differently. Make sure Headphone/Speaker follow the Master mute state.<br />
amixer -c 0 set Headphone unmute<br />
amixer -c 0 set Speaker unmute<br />
amixer -qc 0 set Speaker 100%<br />
fi<br />
<br />
exit 0<br />
<br />
<br />
Save the script, then set permissions<br />
<br />
# chown root:users /usr/local/bin/sound.sh<br />
# chmod 755 /usr/local/bin/sound.sh<br />
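The VOLUME pipeline in sound.sh can be sanity-checked in isolation. A minimal sketch; the sample line below is made up to resemble typical `amixer get Master` output, not captured from a real card:<br />

```shell
# hypothetical amixer output line; the real script pipes `amixer get Master`
line='  Mono: Playback 52 [80%] [-5.00dB] [on]'
# same pipeline as the script: field 6 is "[80%]", sed strips everything non-numeric
volume=$(echo "$line" | grep 'Mono:' | cut -d ' ' -f 6 | sed -e 's/[^0-9]//g')
echo "$volume"
```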
<br />
== xbindkeys ==<br />
<br />
Install xbindkeys so that you can control the volume with your keyboard<br />
# pacman -S xbindkeys<br />
<br />
Logged in as your user, create an xbindkeys config file ~/.xbindkeysrc with the following information. This example binds volume control to the F7 (mute), F8 (volume down), and F9 (volume up) keys<br />
<br />
# increase volume<br />
"sh /usr/local/bin/sound.sh up"<br />
m:0x0 + c:75<br />
F9<br />
<br />
# Decrease volume<br />
"sh /usr/local/bin/sound.sh down"<br />
m:0x0 + c:74<br />
F8<br />
<br />
# Toggle mute<br />
"sh /usr/local/bin/sound.sh mute"<br />
m:0x0 + c:73<br />
F7<br />
<br />
#"amixer set Master playback 1+"<br />
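The m: modifier and c: keycode values differ between keyboards; xbindkeys itself can report the right codes for any key you press:<br />

```shell
# opens a small window; press the desired key and xbindkeys prints the
# matching "m:0x.. + c:.." line to paste into ~/.xbindkeysrc
xbindkeys -k
```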
<br />
== autostart ==<br />
<br />
Logged in as your user, create the KDE autostart script in ~/.kde4/Autostart. Name it anything, e.g. start-volnoti.sh<br />
#!/bin/bash<br />
xbindkeys<br />
volnoti<br />
<br />
Save this script. The next time KDE starts it will run this script, since it's located in the Autostart folder. It calls xbindkeys and volnoti, which then wait for keypresses to control alsamixer.<br />
<br />
Enjoy.<br />
<br />
= smbclient media stream issues using dolphin when accessing windows shares =<br />
<br />
== description ==<br />
I was having access problems when trying to access files via smbclient (samba as a client) on shares hosted by a Win7 file server, '''only''' when the same user exists on the Windows machine. It turns out the problem is possibly either a complicated mounting issue or a deep bug in Dolphin.<br />
<br />
Before I delve into this, I also wanted to mention that the Windows machine was set to not share files with "password protected sharing" (http://www.sevenforums.com/tutorials/185429-password-protected-sharing-turn-off-windows-7-a.html). If you DO share with Windows password protected sharing, then when you access the samba shares through Dolphin it will issue a popup asking for a password; even with valid credentials you will still experience the issue, so don't spend too much time debugging by toggling password protected sharing, as it didn't seem to help either way.<br />
<br />
== the bug reproduced ==<br />
I was accessing the shares through Dolphin by clicking on "Network" in Places, then clicking into the "Samba" symlink. There I would see a list of workgroups, click into those to access my files, and so forth. When I found a directory I wanted in my Places, I would right-click on it and add it to my Places.<br />
<br />
When I accessed media files through either the symlink I created in Places or the existing Network symlink, a video file (.avi in this case) would have to cache in full before it would play, no matter whether I chose VLC or mplayer. Instead of the video starting right away, I would sometimes wait 10-15 minutes before it started, or I would get an error (VLC is unable to open the MRL 'smb://<server>/UserFiles/PublicArchive/movie.avi'), depending on how the Windows password protection was set, whether I was using VLC vs. mplayer, etc. Obviously something was wrong. These symptoms likely ran deeper than just video files; I remember it happening to audio files, and it will probably happen to any file notably large enough to take more than a few seconds to access across the network.<br />
<br />
Important note: this only happens when I'm logged into KDE as a user that already exists on the Windows system whose publicly shared files I'm trying to access. If I log in to KDE as a user that doesn't exist there, the files do not have to cache and instead immediately start streaming as expected.<br />
<br />
Something really buggy is going on at this point. I thought maybe I would go into KDE System Settings > Sharing and set the default username and password, but this didn't help. I tried several things, including re-installing the network driver, re-installing KDE over the top of itself, and deleting the user (userdel) and cleaning out the home directory; nothing worked.<br />
<br />
So once again, as I have done in the past, I decided to try to manually mount the share and gain access differently than using the Network icon in Dolphin. I was following the wiki as usual and got to the part about manually mounting shares, where I stumbled on a line mentioning the /mnt directory that I have seen so many times before. This time the cards lined up right, I guess, because I decided to click through "Root" (in Places) using Dolphin, then navigated my way through /mnt/smbnet to my files, where I discovered they play with no problem (no caching needed; streaming starts immediately).<br />
<br />
== the fix ==<br />
It appears the "Network" symlink in Dolphin's Places bar, at least the way it's currently set up in my version of KDE4, forces the full file to be downloaded before your system has access to it. Don't access your network shares this way if they are on a samba share. Instead, navigate through Dolphin's "Root" to /mnt/smbnet/<your-workgroup>/<your-server>/<your-fileshares>, right-click on one of those folders and "Add To Places"; then you will have a proper symlink in your Places that accesses the share through /mnt.<br />
<br />
= steps taken to repair Intel IbexPeak HDMI / IDT 92HD81B1X5 internal mic =<br />
<br />
[root@osprey wolfdogg]# rmmod snd-hda-intel -f && modprobe snd-hda-intel model=F1734<br />
[root@osprey wolfdogg]# rmmod snd-hda-intel -f && modprobe snd-hda-intel<br />
[root@osprey wolfdogg]# cat /proc/asound/card0/codec#2 | grep Codec<br />
cat: /proc/asound/card0/codec#2: No such file or directory<br />
[root@osprey wolfdogg]# cat /proc/asound/card0/codec#1 | grep Codec<br />
cat: /proc/asound/card0/codec#1: No such file or directory<br />
[root@osprey wolfdogg]# cat /proc/asound/card0/codec<br />
cat: /proc/asound/card0/codec: No such file or directory<br />
[root@osprey wolfdogg]# cd /proc/asound/card<br />
card0/ cards <br />
[root@osprey wolfdogg]# cd /proc/asound/card<br />
card0/ cards <br />
[root@osprey wolfdogg]# cd /proc/asound/card0/<br />
[root@osprey card0]# ll<br />
total 0<br />
-r--r--r-- 1 root root 0 Apr 6 00:49 codec#0<br />
-r--r--r-- 1 root root 0 Apr 6 00:49 codec#3<br />
-rw-r--r-- 1 root root 0 Apr 6 00:49 eld#3.0<br />
-r--r--r-- 1 root root 0 Apr 6 00:49 id<br />
dr-xr-xr-x 3 root root 0 Apr 6 00:49 pcm0c<br />
dr-xr-xr-x 3 root root 0 Apr 6 00:49 pcm0p<br />
dr-xr-xr-x 3 root root 0 Apr 6 00:49 pcm3p<br />
[root@osprey card0]# cat /proc/asound/card0/codec#0 | grep Codec<br />
Codec: IDT 92HD81B1X5<br />
[root@osprey card0]# cat /proc/asound/card0/codec#3 | grep Codec<br />
Codec: Intel IbexPeak HDMI<br />
[root@osprey card0]# cat /proc/asound/card0/eld#3.0 | grep Codec<br />
[root@osprey card0]# cat /proc/asound/card0/id | grep Codec<br />
[root@osprey card0]# cat /proc/asound/card0/pcm0c | grep Codec<br />
cat: /proc/asound/card0/pcm0c: Is a directory<br />
[root@osprey card0]# cat /proc/asound/card0/pcm0c | grep Codec<br />
cat: /proc/asound/card0/pcm0c: Is a directory<br />
[root@osprey card0]# pacman -S gstreamer0.10-plugins<br />
:: There are 5 members in group gstreamer0.10-plugins:<br />
:: Repository extra<br />
1) gstreamer0.10-bad-plugins 2) gstreamer0.10-base-plugins 3) gstreamer0.10-ffmpeg<br />
4) gstreamer0.10-good-plugins 5) gstreamer0.10-ugly-plugins <br />
<br />
Enter a selection (default=all): <br />
warning: gstreamer0.10-bad-plugins-0.10.23-3 is up to date -- reinstalling<br />
warning: gstreamer0.10-base-plugins-0.10.36-1 is up to date -- reinstalling<br />
warning: gstreamer0.10-ffmpeg-0.10.13-1 is up to date -- reinstalling<br />
resolving dependencies...<br />
looking for inter-conflicts...<br />
<br />
Targets (10): gstreamer0.10-ugly-0.10.19-5 libavc1394-0.5.4-1 libiec61883-1.2.0-3<br />
libsidplay-1.36.59-5 wavpack-4.60.1-2 gstreamer0.10-bad-plugins-0.10.23-3<br />
gstreamer0.10-base-plugins-0.10.36-1 gstreamer0.10-ffmpeg-0.10.13-1<br />
gstreamer0.10-good-plugins-0.10.31-1 gstreamer0.10-ugly-plugins-0.10.19-5<br />
<br />
Total Download Size: 1.00 MiB<br />
Total Installed Size: 13.08 MiB<br />
Net Upgrade Size: 3.63 MiB<br />
<br />
Proceed with installation? [Y/n] <br />
:: Retrieving packages from extra...<br />
gstreamer0.10-base-plugi... 165.3 KiB 963K/s 00:00 [#############################] 100%<br />
libavc1394-0.5.4-1-x86_64 32.0 KiB 759K/s 00:00 [#############################] 100%<br />
libiec61883-1.2.0-3-x86_64 37.3 KiB 829K/s 00:00 [#############################] 100%<br />
wavpack-4.60.1-2-x86_64 113.7 KiB 921K/s 00:00 [#############################] 100%<br />
gstreamer0.10-good-plugi... 327.3 KiB 1124K/s 00:00 [#############################] 100%<br />
gstreamer0.10-ugly-0.10.... 160.4 KiB 908K/s 00:00 [#############################] 100%<br />
libsidplay-1.36.59-5-x86_64 107.8 KiB 771K/s 00:00 [#############################] 100%<br />
gstreamer0.10-ugly-plugi... 84.4 KiB 727K/s 00:00 [#############################] 100%<br />
(10/10) checking package integrity [#############################] 100%<br />
(10/10) loading package files [#############################] 100%<br />
(10/10) checking for file conflicts [#############################] 100%<br />
(10/10) checking available disk space [#############################] 100%<br />
( 1/10) upgrading gstreamer0.10-bad-plugins [#############################] 100%<br />
( 2/10) upgrading gstreamer0.10-base-plugins [#############################] 100%<br />
( 3/10) upgrading gstreamer0.10-ffmpeg [#############################] 100%<br />
( 4/10) installing libavc1394 [#############################] 100%<br />
( 5/10) installing libiec61883 [#############################] 100%<br />
( 6/10) installing wavpack [#############################] 100%<br />
( 7/10) installing gstreamer0.10-good-plugins [#############################] 100%<br />
<br />
(gconftool-2:5520): GConf-WARNING **: Client failed to connect to the D-BUS daemon:<br />
Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken. <br />
( 8/10) installing gstreamer0.10-ugly [#############################] 100%<br />
( 9/10) installing libsidplay [#############################] 100%<br />
(10/10) installing gstreamer0.10-ugly-plugins [#############################] 100%<br />
[root@osprey card0]# rmmod snd-hda-intel -f && modprobe snd-hda-intel model=auto<br />
[root@osprey card0]# rmmod snd-hda-intel -f && modprobe snd-hda-intel model=auto<br />
[root@osprey card0]# rmmod snd-hda-intel -f && modprobe snd-hda-intel model=auto<br />
<br />
= Git Remote Development =<br />
<br />
==Configure remote==<br />
===Set up remote repos===<br />
<br />
*Set up the bare repo and public served repo (for websites)<br />
<br />
$ ssh user@remote<br />
# mkdir /home/user/git/site.com.git (or /var/www/git/site.com etc.. if not shared hosting.)<br />
# cd /home/user/git/site.com.git<br />
# git init --bare --shared (or not shared)<br />
<br />
*Choose only option a or option b in the following steps, not both; the choice depends on how you plan to use your remote server.<br />
**a) If you want to always edit files locally and never pull from the served location; this essentially automates the file upload on each push.<br />
**b) If you also want the ability to edit files directly on the served location, keep the served files in their own git repository by cloning the bare repo into the site's served directory. After each push from your local machine, you will then need to do a pull from this cloned repo before your changes appear live.<br />
<br />
===a) Post-receive hooks for automation===<br />
<br />
'''First read above: only do step a) or b), not both. Make your choice depending on how you plan to use your remote server.'''<br />
<br />
*Make post-receive hook in bare repo which will ship files automatically into its served location<br />
<br />
# cat > hooks/post-receive<br />
#!/bin/sh<br />
GIT_WORK_TREE=/home/soldiert/public_html/project.com/ git checkout -f <br />
(press Ctrl-D to exit and save)<br />
<br />
# chmod +x hooks/post-receive (same path as what you put in hooks/post-receive)<br />
# mkdir /home/user/git/site.com<br />
# exit<br />
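The bare-repo-plus-hook flow above can be exercised end to end with throwaway directories. This is a sketch using temporary paths, not the real server layout:<br />

```shell
set -e
demo=$(mktemp -d)

# bare "central" repo plus the directory it will deploy into
git init -q --bare "$demo/site.com.git"
mkdir -p "$demo/public_html/site.com"

# the deployment hook: check the pushed tree out into the served directory
cat > "$demo/site.com.git/hooks/post-receive" <<EOF
#!/bin/sh
GIT_WORK_TREE=$demo/public_html/site.com git checkout -f
EOF
chmod +x "$demo/site.com.git/hooks/post-receive"

# a local working repo that commits and pushes to the bare repo
git init -q "$demo/work"
cd "$demo/work"
git config user.email dev@example.com
git config user.name dev
echo hello > index.html
git add -A
git commit -qm 'first commit'
git remote add origin "$demo/site.com.git"
git push -q origin HEAD

# the hook has checked the pushed tree out into the served directory
cat "$demo/public_html/site.com/index.html"
```

If the push succeeds, the cat at the end prints the deployed index.html, showing the web files arrived with no .git directory in the served path.<br />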
<br />
===b) Clone for more control===<br />
<br />
'''First read above: only do step a) or b), not both. Make your choice depending on how you plan to use your remote server.'''<br />
<br />
*Clone the remote bare repo into the directory it will be served from, to gain more control over editing from both locations.<br />
<br />
# cd /home/user/public_html<br />
# git clone /home/user/git/site.com.git<br />
# exit<br />
<br />
=== Start tracking on local===<br />
*Start tracking on local if you haven't already<br />
$ cd ~/projects/site.com<br />
$ git init<br />
<br />
===Now set up connection to remote origin on your local git repo===<br />
$ cd ~/projects/site.com<br />
$ git remote -v<br />
<br />
*Add your bare repo as the new origin.<br />
*Note: if your origin is already intact, skip the remove and add origin steps below<br />
$ git remote rm origin (if it's still attached to GitHub or somewhere else you don't want the files going)<br />
$ git remote add origin user@remote:/home/user/git/site.com.git<br />
<br />
*Push your files<br />
$ git push origin master (this pushes to the bare repo; the hook will then check it out to the served location)<br />
<br />
===Ready to develop on local=== <br />
$ cd ~/projects/site.com<br />
$ vim index.html<br />
$ git add -A<br />
$ git commit -am 'first commit message'<br />
$ git status<br />
$ git log<br />
<br />
===After committing run the following===<br />
$ git push<br />
<br />
*You're done.<br />
<br />
*If you chose option b, then you also need to run the following commands<br />
$ ssh user@remote<br />
# cd /home/user/public_html/site.com<br />
# git pull<br />
*You're done.<br />
<br />
= PHP X-Debug=<br />
<br />
This topic covers installing X-Debug on a LAMP server.<br />
<br />
== Installation ==<br />
pacman -S xdebug<br />
== Configuration ==<br />
* Add the following line to your php.ini at the bottom of the extensions list<br />
zend_extension="/lib64/php/modules/xdebug.so"<br />
* or use the non-64-bit one if needed<br />
zend_extension="/lib/php/modules/xdebug.so"<br />
* Restart your server<br />
systemctl restart httpd<br />
* Ensure Xdebug shows up in the PHP info output<br />
php -i | grep xdebug | less<br />
<br />
== PHPStorm usage ==<br />
This step details configuring Xdebug for development on a machine separate from the LAMP server, with PhpStorm installed.<br />
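For remote debugging against PhpStorm, the usual Xdebug 2.x php.ini settings look roughly like the following; the host IP and idekey are placeholders for your own setup:<br />

```
; Xdebug 2.x step-debugging settings (option names from the Xdebug docs)
zend_extension="/lib64/php/modules/xdebug.so"
xdebug.remote_enable=1
xdebug.remote_host=192.168.1.50 ; your workstation's IP (placeholder)
xdebug.remote_port=9000
xdebug.idekey=PHPSTORM
```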
== References ==<br />
<br />
xdebug docs<br />
http://xdebug.org/docs/<br />
<br />
xdebug checker http://xdebug.org/find-binary.php<br />
<br />
troubleshooting info<br />
http://stackoverflow.com/questions/20752260/trouble-setting-up-and-debugging-php-storm-project-from-existing-files-in-mounte<br />
<br />
phpstorm zero configuration<br />
http://blog.jetbrains.com/phpstorm/2013/07/webinar-recording-debugging-php-with-phpstorm/<br />
<br />
phpstorm configurations<br />
https://www.jetbrains.com/phpstorm/webhelp/configuring-xdebug.html</div>Wolfdogghttps://wiki.archlinux.org/index.php?title=User_talk:Wolfdogg&diff=370361User talk:Wolfdogg2015-04-20T17:57:42Z<p>Wolfdogg: </p>
<hr />
<div>= Samba from OpenLDAP on Arch = <br />
== samba specific setup ==<br />
<br />
<br />
<br />
<br />
= Jenkins =<br />
Run the startjenkins script now<br />
$ startjenkins<br />
<br />
= Automount Samba Shares on boot=<br />
Using systemd, make an entry in /etc/fstab for the network drive with the following options:<br />
*credentials<br />
*noauto<br />
*nofail<br />
*x-systemd.automount<br />
*a device timeout<br />
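Assembled, such an fstab line might look like the following sketch; the server name, share, mountpoint, and credentials file path are placeholders:<br />

```
//myserver/backup  /mnt/backup  cifs  credentials=/etc/samba/creds,noauto,nofail,x-systemd.automount,x-systemd.device-timeout=10s  0  0
```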
<br />
= New ZFS Setup =<br />
[root@falcon wolfdogg]# zpool create -f -m /san san ata-ST2000DM001-9YN164_W1E07E0G ata-WDC_WD20EZRX-00DC0B0_WD-WMC1T3458346 ata-WDC_WD20EZRX-19D8PB0_WD-WCC4M0428332<br />
[root@falcon wolfdogg]# zpool list<br />
NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT<br />
san 5.44T 604K 5.44T 0% 1.00x ONLINE -<br />
[root@falcon wolfdogg]# zfs list<br />
NAME USED AVAIL REFER MOUNTPOINT<br />
san 544K 5.35T 136K /san<br />
[root@falcon wolfdogg]# zpool status<br />
pool: san<br />
state: ONLINE<br />
scan: none requested<br />
config:<br />
NAME STATE READ WRITE CKSUM<br />
san ONLINE 0 0 0<br />
ata-ST2000DM001-9YN164_W1E07E0G ONLINE 0 0 0<br />
ata-WDC_WD20EZRX-00DC0B0_WD-WMC1T3458346 ONLINE 0 0 0<br />
ata-WDC_WD20EZRX-19D8PB0_WD-WCC4M0428332 ONLINE 0 0 0<br />
errors: No known data errors<br />
<br />
<br />
= Git three stage web deployment =<br />
<br />
I'm now using git to handle my web repos.<br />
<br />
Currently my server environment is not hosted live. The intention is to have a secure development environment containing the sandbox and stage environments. Once I fully explore this setup, I will add the third step: pushing stage to a live shared host using the same methods described here.<br />
<br />
<br />
This setup uses /srv/http/sandbox as the development directory. If you insist on developing out of /home/<username>/public_html, you can either substitute it for every mention of sandbox in this wiki, or turn your public_html folder into a symlink to /srv/http/sandbox after completing these exercises.<br />
<br />
<br />
The following features detail the environment we are setting up using this wiki:<br />
* Central location for git repositories (--separate-git-dir) /srv/http/repos/git<br />
* Development sandbox /srv/http/sandbox<br />
* Staging ground for final pre-live q/a testing /srv/http/stage<br />
<br />
<br />
Workflow:<br />
* Clone your repo into your sandbox area and work on your files<br />
* Commit changes from the sandbox when each implementation or task is complete, then push those changes to the central sandbox repo (/srv/http/repos/git/website.com), where you will be able to view them on dev (http://dev.website.com)<br />
* At some point, when ready to stage, push from the central sandbox repo to a separate central stage repo (/srv/http/repos/git/website.com.stage)<br />
* Hooks located in the central stage repo auto-deploy to stage (/srv/http/stage/website) when the push runs. (This deploys only the web files, NOT the hidden .git repo folder, which would otherwise make your site vulnerable; hence the reason for this wiki. Otherwise it would be as easy as going to stage and cloning, for example.)<br />
<br />
== Apache adjustments ==<br />
Set up a new vhost scheme for the dev and stage environments<br />
<br />
=== Edit vhost ===<br />
<br />
Add or edit the /etc/httpd/conf/extra/httpd-vhosts.conf file to something similar to the following<br />
<br />
# default route<br />
<VirtualHost *:80><br />
ServerName <hostname><br />
ServerAlias <hostname><br />
VirtualDocumentRoot "/srv/http"<br />
</VirtualHost><br />
<br />
#sandbox (dev) root route<br />
<Virtualhost *:80><br />
ServerName dev<br />
DocumentRoot "/srv/http/sandbox"<br />
</Virtualhost><br />
<br />
#sandbox (dev) subdomains route<br />
# http://httpd.apache.org/docs/2.0/mod/mod_vhost_alias.html#interpol<br />
<VirtualHost *:80><br />
ServerName dev.sub.com<br />
ServerAlias dev.*<br />
VirtualDocumentRoot "/srv/http/sandbox/%2+"<br />
</VirtualHost><br />
<br />
#stage root route<br />
<Virtualhost *:80><br />
ServerName stage<br />
DocumentRoot "/srv/http/stage"<br />
</VirtualHost><br />
<br />
#stage subdomains route<br />
# http://httpd.apache.org/docs/2.0/mod/mod_vhost_alias.html#interpol<br />
<VirtualHost *:80><br />
ServerName stage.sub.com<br />
ServerAlias stage.*<br />
VirtualDocumentRoot "/srv/http/stage/%2+"<br />
</VirtualHost><br />
<br />
restart apache <br />
systemctl restart httpd<br />
#or, if you're still using initscripts<br />
rc.d restart httpd<br />
<br />
=== Create dirs === <br />
<br />
touch /srv/http/index.php<br />
mkdir /srv/http/sandbox<br />
touch /srv/http/sandbox/index.php<br />
mkdir /srv/http/stage<br />
touch /srv/http/stage/index.php<br />
<br />
Note: in order for this to work, you will need to add your websites with the same directory name as they will have when deployed, i.e. use dots where dots exist. E.g. if the website is google.com, the dirname must be google.com; your dev site will end up being /srv/http/sandbox/google.com so that the vhost can map it to http://dev.google.com<br />
<br />
=== Edit hosts files ===<br />
<br />
Assuming your development machine is not the same as the server, you need to add an entry for each site to your hosts file when you create the site.<br />
<br />
I will use server IP 192.168.1.99 and hostname 'myserver' as an example, and google.com as the example website URL; use your own hostname and website URLs in their place.<br />
<br />
* linux <br />
add the following lines to /etc/hosts<br />
192.168.1.99 dev myserver<br />
192.168.1.99 stage myserver <br />
192.168.1.99 dev.google.com myserver<br />
192.168.1.99 stage.google.com myserver<br />
<br />
* windows<br />
<br />
Navigate to the following file logged in as admin:<br />
%WINDIR%/system32/drivers/etc/hosts<br />
<br />
or run cmd as administrator and paste the following line there<br />
notepad %WINDIR%/system32/drivers/etc/hosts<br />
<br />
add your sites to the hosts file<br />
192.168.1.99 dev<br />
192.168.1.99 stage<br />
192.168.1.99 dev.google.com<br />
192.168.1.99 stage.google.com<br />
#dev.anothersite.com<br />
#stage.anothersite.com<br />
#etc...<br />
<br />
* mac, you get the idea...<br />
<br />
== Git usage ==<br />
Install git, and make the following changes<br />
<br />
=== Set up Git ===<br />
<br />
* Create a central location for the git repos. (Note: I'll use a group called 'webteam' as an example. This group separates what apache needs permission for from what a web developer needs permission for; each web developer is a member of this group.)<br />
$ su<br />
$ mkdir -p /srv/http/repos/git<br />
$ chown http:webteam /srv/http/* -R<br />
$ chmod 775 /srv/http/* -R<br />
<br />
*Sandbox push global configs<br />
You will probably get an error stating that you can't push to the branch you cloned from; more research needs to be done on this, but it seems pretty straightforward. Set the following config variable to squelch it.<br />
$ git config --global receive.denyCurrentBranch warn #you can set it to false instead of warn once you're sure it doesn't cause any problems<br />
<br />
* stage push global configs<br />
Git 2.0 changed the default push.default mode to "simple", which refuses the push if the upstream branch name differs from the local one, as it does in this case (website.com vs. stage.website.com). Setting push.default to "matching" avoids this and keeps older versions behaving the same way. Set the following config variable to squelch this.<br />
$ git config --global push.default matching<br />
<br />
=== Add or enroll a new site ===<br />
Once your Git environment is set up, start from this point each time you want to create a new website. <br />
<br />
* add a website (or see below if you are enrolling an existing site)<br />
$ mkdir -p /srv/http/sandbox/<website.com> # -p in case you don't have a sandbox dir yet<br />
*alternatively enroll an existing website<br />
To enroll an existing website, move it into the /srv/http/sandbox directory and rename it to match the address it will be served from when live (naming is important here: if your folder is named website-com, mv it to website.com). The .com portion is only mandatory when a domain name is intended; either way, the name you choose is exactly how you will address the site in your browser, unless you customize your own mapping strategy in the vhosts file.<br />
$ mv /home/<username>/public_html/website-com /srv/http/sandbox/website.com<br />
<br />
*continue here once you have relocated your website or added a new one to the sandbox<br />
$ cd /srv/http/sandbox/website.com<br />
$ git init --separate-git-dir=/srv/http/repos/git/<website><br />
$ git add -A<br />
$ git commit -m 'comment here'<br />
<br />
If you're going to actively develop this site, continue below to stageify it. If you're just archiving an old site for later development you can stop here, delete the site folder (e.g. /srv/http/sandbox/website.com), then "recreate the worktree" from the repo, as detailed below, and stageify at a later time.<br />
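The add-a-website steps above can be exercised end to end in throwaway directories. This sketch assumes only stock git; the mktemp paths are stand-ins for /srv/http:<br />

```shell
# mktemp stand-ins for /srv/http/sandbox and /srv/http/repos/git.
ROOT="$(mktemp -d)"
mkdir -p "$ROOT/sandbox/website.com" "$ROOT/repos/git"
echo '<h1>hello</h1>' > "$ROOT/sandbox/website.com/index.html"

cd "$ROOT/sandbox/website.com"
git init --separate-git-dir="$ROOT/repos/git/website.com"
git config user.email dev@example.com   # identity needed for the commit
git config user.name Dev
git add -A
git commit -m 'initial import'
```

After this runs, the worktree's .git is a small file containing a "gitdir:" pointer at the external repo, which is what makes the worktree disposable later.<br />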
<br />
=== Stageify ===<br />
* create empty stage repo for this site (start here if you already have an existing site and repo; substitute sandbox with your site location)<br />
$ mkdir -p /srv/http/repos/git/website.com.stage<br />
$ cd /srv/http/repos/git/website.com.stage<br />
$ git init --bare<br />
<br />
* make the site's stage dir and create the hook (while still in the stage repo)<br />
$ mkdir /srv/http/stage/website.com<br />
$ cat > hooks/post-receive<br />
#!/bin/sh<br />
GIT_WORK_TREE=/srv/http/stage/website.com git checkout -f<br />
<br />
#press ctrl +d to save and exit cat<br />
<br />
$ chmod +x hooks/post-receive<br />
<br />
* Define stage mirror and create a master branch tree (you MUST run both of the following commands from the sandbox repo; neither the sandbox checkout nor the stage repo will suffice)<br />
$ cd /srv/http/repos/git/website.com #note, it is important that you are in this exact path, as opposed to website.com.stage (if your sandbox .git dir is somewhere else, cd INTO it)<br />
$ git remote add stage.website.com /srv/http/repos/git/website.com.stage #(note, stage.website.com is just a name; it is the remote name we will use when pushing to stage)<br />
$ git push stage.website.com +master:refs/heads/master<br />
<br />
* from here forward, the push to stage is run simply from within your root code base (i.e. you don't need to be in the .git dir anymore)<br />
$ git push stage.website.com<br />
See workflow below for more examples on this.<br />
<br />
* In case of upstream branch error when you run the push, you can do the following.<br />
Since we globally configured push.default to matching, an upstream branch error may occur. To squelch it we need to indicate our upstream branch. Since we named our remote "stage.website.com", issue the following commands from the sandbox repo (/srv/http/repos/git/website.com)<br />
$ cd /srv/http/repos/git/website.com<br />
$ git push --set-upstream stage.website.com master<br />
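The whole Stageify sequence can be sketched with mktemp stand-ins for the /srv/http paths (the hook body is the same as above; `git symbolic-ref` pins the branch name to master so the push matches regardless of your git's default):<br />

```shell
ROOT="$(mktemp -d)"                      # stand-in for /srv/http
mkdir -p "$ROOT/stage/website.com"

# sandbox repo with one commit, branch pinned to master
git init "$ROOT/sandbox"
cd "$ROOT/sandbox"
git symbolic-ref HEAD refs/heads/master
git config user.email dev@example.com
git config user.name Dev
echo 'v1' > index.html
git add -A
git commit -m 'first version'

# bare stage repo plus the post-receive deploy hook
git init --bare "$ROOT/website.com.stage"
cat > "$ROOT/website.com.stage/hooks/post-receive" <<EOF
#!/bin/sh
GIT_WORK_TREE=$ROOT/stage/website.com git checkout -f
EOF
chmod +x "$ROOT/website.com.stage/hooks/post-receive"

# wire up the remote and deploy; the hook populates the stage dir
git remote add stage.website.com "$ROOT/website.com.stage"
git push stage.website.com +master:refs/heads/master
```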
<br />
== General work flow ==<br />
<br />
Once you have your environment all setup, you can follow a general work flow pattern in order to take advantage of the full functionality.<br />
<br />
=== New development process ===<br />
<br />
* If you plan to just archive your site, you may want to delete it from the sandbox once it's archived to the git repository using the steps above, since it's nicely compressed and packed away in your repo dir. If so, go ahead and delete your sandbox web folder at this point so we can begin the flow from the furthest possible point. (Note: only do this step if your website is committed to the git repo as outlined above, i.e. your .git folder is NOT located inside your code base dir), '''otherwise continue "Work on files" below'''.<br />
$ cd /srv/http<br />
$ rm website.com -rf<br />
<br />
You might want to delete your worktree after putting a project on the back-burner that you don't plan to work on for long periods of time, or you may have a crowded project directory and want your sanity back. If you deleted your worktree previously, or are restoring an old site from a repo for whatever reason, and your project exists only inside the compressed repo, then you need to re-create the worktree from the repo before working on your files. The following methods walk you through doing this. <br />
<br />
==== recreate worktree ====<br />
<br />
Note: you won't need these steps if your worktree is still in place and was never deleted. <br />
<br />
===== method 1 ===== <br />
(preferred method)<br />
<br />
One way to get the worktree back is to extract it out of the repo, then attach it to the repo's branch<br />
* make the new site folder inside your sandbox<br />
$ mkdir /srv/http/sandbox/website.com<br />
<br />
* switch to the new worktree directory and ''init'' for the first time<br />
$ cd /srv/http/sandbox/website.com<br />
$ git --git-dir=/srv/http/repos/git/website.com --work-tree=. init <br />
$ echo "gitdir: /srv/http/repos/git/website.com" > .git<br />
alternatively you can run those last two commands on one line if that's easier for you<br />
$ git --git-dir=/srv/http/repos/git/website.com --work-tree=. init && echo "gitdir: /srv/http/repos/git/website.com" > .git<br />
<br />
* check which branch you're on before pulling down the files<br />
$ git branch<br />
<br />
* switch to the repo and extract the files to the new site directory you just created. Make sure you're checking out the branch you intend, normally "master" unless you have branched the repo. Replace the word "master" in the following command if you are working on a different branch.<br />
$ cd /srv/http/repos/git/website.com<br />
$ git archive master | tar -x -C /srv/http/sandbox/website.com<br />
note, if you get an error in the above step <br />
could not switch to '/some/dir': No such file or directory<br />
then you may be trying to get the files back into a directory other than the one used when the repo was made, ''otherwise continue below''. <br />
<br />
If you did receive this error, you need to edit the config file in the repo to point at the new directory: open ./config in your favorite editor and change the worktree path to the new site directory you created.<br />
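Method 1's extract step can be sketched in throwaway directories. `HEAD` is used in place of `master` so the sketch works whatever the default branch name is; the paths are mktemp stand-ins:<br />

```shell
ROOT="$(mktemp -d)"
git init "$ROOT/work"
cd "$ROOT/work"
git config user.email dev@example.com
git config user.name Dev
echo 'content' > page.html
git add -A
git commit -m 'import'

# simulate the "repo only, worktree gone" situation
cd "$ROOT"
git clone --bare "$ROOT/work" "$ROOT/repo"
rm -rf "$ROOT/work"

# recreate the worktree by extracting the tracked files from the repo
mkdir "$ROOT/restored"
git -C "$ROOT/repo" archive HEAD | tar -x -C "$ROOT/restored"
```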
<br />
===== method 2 (alt method) ===== <br />
Another way to get the worktree back is to clone it from the main repo, delete the cloned .git dir, then create the .git pointer file (a plain file containing a "gitdir:" line, not a true symlink). This way seems a bit messy because one needs to delete the .git dir after unnecessarily creating it, but as long as you replace it with the proper pointer file it works fine. <br />
<br />
* clone the site to sandbox<br />
$ cd /srv/http/sandbox<br />
$ git clone /srv/http/repos/git/website.com<br />
* delete the hidden .git folder that was created inside this worktree<br />
$ rm -rf .git<br />
* recreate the .git pointer file <br />
$ echo "gitdir: /srv/http/repos/git/website.com" > .git<br />
* checkout master to get things back on track (is this step even necessary?)<br />
$ git checkout master<br />
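Method 2 can be sketched in throwaway directories; note that the echo creates a plain file whose "gitdir:" line points git back at the main repo:<br />

```shell
ROOT="$(mktemp -d)"
git init "$ROOT/origin"
cd "$ROOT/origin"
git config user.email dev@example.com
git config user.name Dev
echo 'hi' > a.txt
git add -A
git commit -m 'one'

# clone, then swap the cloned .git directory for a pointer file
cd "$ROOT"
git clone "$ROOT/origin" website.com
cd "$ROOT/website.com"
rm -rf .git
echo "gitdir: $ROOT/origin/.git" > .git
```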
<br />
==== Work on files ====<br />
<br />
* Work on files<br />
$ cd website.com <br />
$ git status<br />
edit your files.....<br />
<br />
==== commit the changes ====<br />
<br />
$ git status #notice files are not staged for commit<br />
$ git add -A # or just individual files (e.g. git add file1.php file2.php) etc..<br />
$ git status # make sure the files you want to commit are listed to be committed<br />
$ git commit -m 'this is what i did to these files'<br />
$ git status # should come back clean<br />
$ git log # you can see your commit comments here<br />
<br />
visit http://dev.website.com to test the new functionality that was just committed<br />
<br />
If all looks well, push your changes to the sandbox master branch (the repo you cloned from). This accomplishes two things<br />
<br />
* liberates your sandbox clone to be deleted at will<br />
* updates branch master so the files are waiting to be pushed(deployed) to stage<br />
$ git push<br />
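The cycle above (clone, edit, commit, push back to the repo you cloned from) can be sketched in throwaway repos; the receive.denyCurrentBranch setting from the setup section is what lets the final push into a non-bare repo succeed:<br />

```shell
ROOT="$(mktemp -d)"
git init "$ROOT/main"
cd "$ROOT/main"
git config user.email dev@example.com
git config user.name Dev
git config receive.denyCurrentBranch warn   # allow push into this non-bare repo
echo 'v1' > file.php
git add -A
git commit -m 'base'

# work in a clone, then push the commit back
git clone "$ROOT/main" "$ROOT/clone"
cd "$ROOT/clone"
git config user.email dev@example.com
git config user.name Dev
echo 'v2' > file.php
git add file.php
git commit -m 'this is what i did to these files'
git push
```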
<br />
=== Deploy to stage ===<br />
Once your separate tasks have been individually committed, and you have thoroughly tested them in the sandbox environment, you can deploy to the stage environment for final q/a before going live. <br />
<br />
*deploy to stage<br />
$ cd /srv/http/repos/git/website.com<br />
$ git push stage.website.com<br />
<br />
now visit http://stage.website.com to do your final q/a<br />
<br />
If any mistakes are spotted on stage, i.e. everything is not working as expected, then your stage environment just paid for itself. Follow the steps below for damage control to revert the changes on stage <br />
<br />
=== Damage control ===<br />
<br />
If mistakes are spotted on stage, we need to revert our changes<br />
<br />
* @todo: commands for reverting<br />
* @todo: workflow<br />
<br />
<br />
now that stage has been reset to the last known good state, go back to the sandbox and edit<br />
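The revert commands are still marked @todo above. One standard approach, offered here as a suggestion rather than the author's method, is git revert, which adds a new commit undoing the bad one; sketched in a throwaway repo:<br />

```shell
ROOT="$(mktemp -d)"
git init "$ROOT/sandbox"
cd "$ROOT/sandbox"
git config user.email dev@example.com
git config user.name Dev
echo 'good' > index.html
git add -A && git commit -m 'good version'
echo 'bad' > index.html
git add -A && git commit -m 'bad version'

# revert adds a new commit that undoes the bad one; history is preserved
git revert --no-edit HEAD
```

The revert commit can then be pushed to the stage remote exactly like a normal change (git push stage.website.com).<br />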
<br />
== Why separate repository from web folder? ==<br />
<br />
For me, it just makes sense. The beauty of it is that you can bulk delete all or any of your web files in the sandbox area when they start to get in the way. When you're ready to work on a site again, you can just "recreate the worktree".<br />
<br />
It may also help to have the repos outside of the web folders if you run any type of incremental backups, e.g. rsnapshot or rsync: you can choose to back up only the repos, which are compressed, or to back up everything except the sandbox. But being able to delete your sandbox websites without losing the repo is the main reason.<br />
<br />
== Credits ==<br />
Thanks to <br />
* Abhijit Menon-Sen at http://toroid.org/ams/git-website-howto which is where all my google searches finally landed and paid off for what I was trying to accomplish.<br />
<br />
* niks and Charles Bailey at http://stackoverflow.com/questions/505467/can-i-store-the-git-folder-outside-the-files-i-want-tracked for getting me over the hump of getting the worktree properly out of a repo archive<br />
<br />
= ZFS-FUSE Implementation =<br />
<br />
I'm trying to utilize the ZFS filesystem to create a flexible array of different sized drives that will serve as a data-backup volume shared across a samba network. ZFS is attractive here because of the flexibility of its storage pools, and it has some of the best data-integrity features available. One drawback is that some of this comes at the expense of file transfer rates, though you can offset that somewhat by adding a small backup drive, even a usb stick or almost any other type of media, as a cache device. Speed is not a consideration for this array since its main role is data integrity and safety. <br />
<br />
The test array initially used here was 3 drives: one 2TB and two 500GBs. LVM was also explored as a way to span the drives into one large volume. <br />
<br />
== Installation ==<br />
<br />
1) To get to step one on the ZFS-FUSE page https://wiki.archlinux.org/index.php/ZFS_on_FUSE I had to do a few things, namely install yaourt, which was not entirely straightforward. I will be vague in these instructions since I have already completed these steps and they are coming from memory. <br />
<br />
Went to AUR and pasted the PKGBUILD contents into a new PKGBUILD file in my packages directory /home/wolfdogg/packages/packages/yaourt/PKGBUILD<br />
ran makepkg -s from that folder, after battling a succession of errors (invalid signatures in pacman, needed to re-init pacman-key, then had to create a new group called sudo, add my user to it, and edit the /etc/sudoers file accordingly, which seemed like the best way to get my user into sudoers and eliminate the risk of accidentally doing something as root during the makepkg process),<br />
and finally, pacman -U yaourt<br />
<br />
2) Then I followed the steps on the wiki at https://wiki.archlinux.org/index.php/ZFS_on_FUSE . For myself, the NFS portion got a bit confusing here https://wiki.archlinux.org/index.php/ZFS_on_FUSE#NFS_shares . At the <code>zfs set sharenfs=</code> part of that section I got a bit side-tracked and will come back to it at a later time. <br />
<br />
I found the manual for Solaris ZFS here [http://docs.huihoo.com/opensolaris/solaris-zfs-administration-guide/html/] and it looks like it will provide all the info needed, especially this section of it [http://docs.huihoo.com/opensolaris/solaris-zfs-administration-guide/html/ch04s03.html]<br />
<br />
Now would be a good time to read the manuals<br />
#man zpool<br />
#man zfs<br />
<br />
== Inventory drives ==<br />
<br />
*Hook up any drives that you want to add to your new filesystem. <br />
<br />
*Take an inventory of your current drives and partitions. Note any existing arrays and partitions. See md0 (an mdadm array), zfs-kstat/pool (a zfs pool), and /pool/backup (a dataset) in the examples below.<br />
<br />
*View the drive information<br />
# blkid -o list -c /dev/null<br />
<br />
*List the partitions, and inspect your mounts<br />
# lsblk -f<br />
<br />
sdb 8:16 0 1.8T 0 disk<br />
└─md0 9:0 0 2.7T 0 linear<br />
sdc 8:32 0 465.8G 0 disk<br />
└─md0 9:0 0 2.7T 0 linear<br />
<br />
*Use one of the following for each disk if you want to view partition tables and or reformat your drives:<br />
*If you have a MBR partition table <br />
#fdisk -l<br />
*If you have GPT partition table<br />
#gdisk -l /dev/sdb<br />
<br />
*View the mounts<br />
# findmnt<br />
<br />
TARGET SOURCE FSTYPE OPTIONS<br />
/zfs-kstat kstat fuse rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other<br />
└─/pool pool fuse.zfs rw,relatime,user_id=0,group_id=0,default_permissions,allow_other<br />
└─/pool/backup pool/backup fuse.zfs rw,relatime,user_id=0,group_id=0,default_permissions,allow_other<br />
<br />
== Prepare Drives ==<br />
If you already know what you're doing, you can use cgdisk; here is a nice cgdisk reference [http://www.rodsbooks.com/gdisk/cgdisk-walkthrough.html]. Otherwise, follow the gdisk example walkthrough below.<br />
# gdisk<br />
Type device or filename<br />
We will use /dev/sdb for this example<br />
# /dev/sdb<br />
Command:<br />
o will create a new partition table<br />
# o<br />
Proceed? <br />
This option will delete all partitions and create a new protective MBR<br />
# y<br />
Command:<br />
n will create a new partition<br />
# n<br />
Partition number<br />
We should use 1 if its the first<br />
# 1<br />
Press enter for default first sector<br />
Press enter for default last sector<br />
Hexcode: <br />
# L<br />
L shows all codes; choose a filesystem hex code and type it in<br />
We will go with bf00 (Solaris root) for this example<br />
# bf00<br />
Now let's write the partition to disk<br />
Command:<br />
# w<br />
Final checks complete<br />
Do you want to proceed<br />
# y<br />
<br />
Now your disk has a clean new partition table<br />
<br />
Here are some steps to follow; so far it's all I have.<br />
<br />
=== Create the Zpool ===<br />
<br />
Create the pool (notice I'm not using raidz here; you may want to if your drives are all the same size)<br />
# zpool create <pool_name> /dev/sdb /dev/sdc /dev/sdd /dev/sde<br />
*Also note, you should create two pools if you have 4 or more drives so that the redundancy portion that zfs is so well known for will work properly.<br />
<br />
"pool" is the name of the current test pool, and the mount point.<br />
<br />
'''RAIDZ1, RAIDZ2'''<br />
<br />
*To create a RAIDZ you need at least 3 drives of the same size <br />
*'raidz' is an alias for 'raidz1'; use 'raidz2' if you have an extra drive for a second parity stripe and extra redundancy.<br />
# zpool create pool raidz /dev/sdb /dev/sdc /dev/sdd<br />
<br />
'''Alternatively, create a raid span.''' Note, no raid type has been specified (disk, file, raidz, etc...)<br />
# zpool create pool /dev/sdb /dev/sdc /dev/sdd<br />
<br />
(see below to create an mdadm linear span instead (jbod)) <br />
<br />
If you do some testing and get stuck with a drive reporting unavailable after you hook it back up, sometimes I have to reboot (I probably just don't know enough about this yet), but some commands have been helpful:<br />
<br />
Get the list and size of the pool<br />
# zpool list<br />
<br />
To get the status<br />
#zpool status<br />
<br />
=== Create the ZFS Filesystem Hierarchy (dataset) ===<br />
<br />
Lets create a dataset <br />
# zfs create pool/backup<br />
# zfs set mountpoint=/backup pool/backup<br />
<br />
you can create a child file system inside the parent; the child automatically inherits the parent's properties <br />
# zfs create pool/backup/<computer_name><br />
<br />
Per the manual, ZFS automatically mounts the file system when the zfs mount -a command is invoked (without editing /etc/vfstab)<br />
<br />
=== List and inspect ===<br />
<br />
List and inspect your new zfs file system<br />
# zfs list<br />
NAME USED AVAIL REFER MOUNTPOINT<br />
pool 106K 2.68T 21K /pool<br />
pool/backup 21K 2.68T 21K /pool/backup<br />
<br />
Now lets look at all drives to see what this looks like. View mount status and disk size<br />
# mount -l<br />
<br />
# df<br />
Filesystem 1K-blocks Used Available Use% Mounted on<br />
rootfs 15087420 2356784 11962328 17% /<br />
dev 1991168 0 1991168 0% /dev<br />
run 2027072 316 2026756 1% /run<br />
/dev/sda3 15087420 2356784 11962328 17% /<br />
shm 2027072 0 2027072 0% /dev/shm<br />
tmpfs 2027072 28 2027044 1% /tmp<br />
/dev/sda4 95953460 84172112 6904016 93% /home<br />
/dev/sda1 99550 19445 74886 21% /boot<br />
pool 2873622443 23 2873622420 1% /pool<br />
pool/backup 2873622441 21 2873622420 1% /pool/backup<br />
<br />
To learn more about your configuration options run the following command<br />
# zfs get all | less<br />
<br />
=== Destroy Array and Pools ===<br />
<br />
'''WARNING - Backup all your data before beginning'''<br />
<br />
If you need to break down your zpool or zfs datasets follow below. <br />
<br />
Deactivate the array using mdadm RAID manager (unmount)<br />
# mdadm -S /dev/md0<br />
<br />
I chose to delete the line in /etc/mdadm.conf as well; not sure if it was needed, but it seemed there were still bits and pieces lying around that needed removing<br />
# vim /etc/mdadm.conf<br />
<br />
To destroy a dataset in the pool <br />
#zfs destroy <filesystemvolume><br />
<br />
Or you can destroy the entire pool<br />
#zpool destroy <pool><br />
<br />
If you can't totally destroy the pool, or are trying to create a new pool with the same name, it's possible to trace what process is using it with <br />
#fuser /pool -a <br />
<br />
Then run top to find that process PID<br />
#top<br />
<br />
Or run lsof<br />
# lsof | grep zfs-fuse | less<br />
Go from there....<br />
<br />
Now move on to [[#Linear_RAID_.28jbod.29_filesystem_using_mdadm_.2F_lvm]] or [[#RAIDZ_Filesystem_Configuration]]<br />
<br />
== Linear RAID (jbod) filesystem using mdadm / lvm ==<br />
<br />
I decided to explore other options until I could figure out whether the ZFS span was set up properly. <br />
<br />
=== Create the array ===<br />
<br />
Destroy the existing zfs pool and break down the array where applicable, using this as a guide <br />
[[#Destroy_Array_and_Pools]]<br />
<br />
Use mdadm to create the span<br />
# mdadm --create /dev/md0 --level=linear --raid-devices=3 /dev/sd[bcd]<br />
<br />
Get the status detail using mdadm <br />
<br />
# mdadm --misc --detail /dev/md0<br />
/dev/md0:<br />
Version : 1.2<br />
Creation Time : Mon Jun 25 13:52:31 2012<br />
Raid Level : linear<br />
Array Size : 2930286671 (2794.54 GiB 3000.61 GB)<br />
Raid Devices : 3<br />
Total Devices : 3<br />
Persistence : Superblock is persistent<br />
Update Time : Mon Jun 25 13:52:31 2012<br />
State : clean<br />
Active Devices : 3<br />
Working Devices : 3<br />
Failed Devices : 0<br />
Spare Devices : 0<br />
Rounding : 0K<br />
Name : falcon:0 (local to host falcon)<br />
UUID : 141f34d0:2b2c0973:4a6f070b:17b772ec<br />
Events : 0<br />
Number Major Minor RaidDevice State<br />
0 8 16 0 active sync /dev/sdb<br />
1 8 32 1 active sync /dev/sdc<br />
2 8 48 2 active sync /dev/sdd<br />
<br />
=== Create volume and groups ===<br />
<br />
Create the physical volume<br />
# pvcreate /dev/md0<br />
<br />
Display the volume<br />
# pvdisplay<br />
"/dev/md0" is a new physical volume of "2.73 TiB"<br />
--- NEW Physical volume ---<br />
PV Name /dev/md0<br />
VG Name<br />
PV Size 2.73 TiB<br />
Allocatable NO<br />
PE Size 0<br />
Total PE 0<br />
Free PE 0<br />
Allocated PE 0<br />
PV UUID 1Hr3ay-L0mZ-33GD-4ZeM-EOtW-lkAz-M4REMJ<br />
<br />
Create the volume group<br />
# vgcreate VolGroupArray /dev/md0<br />
Volume group "VolGroupArray" successfully created<br />
<br />
Display the volume groups<br />
# vgdisplay<br />
--- Volume group ---<br />
VG Name VolGroupArray<br />
System ID<br />
Format lvm2<br />
Metadata Areas 1<br />
Metadata Sequence No 1<br />
VG Access read/write<br />
VG Status resizable<br />
MAX LV 0<br />
Cur LV 0<br />
Open LV 0<br />
Max PV 0<br />
Cur PV 1<br />
Act PV 1<br />
VG Size 2.73 TiB<br />
PE Size 4.00 MiB<br />
Total PE 715401<br />
Alloc PE / Size 0 / 0<br />
Free PE / Size 715401 / 2.73 TiB<br />
VG UUID 2OAWpT-fO50-A7cW-jUjd-meQh-sWQa-7b55ZO<br />
<br />
Create the logical volume. I used 2.725T because requesting the full 2.73T failed<br />
# lvcreate VolGroupArray -L 2.725T -n backup<br />
<br />
Display volume<br />
# lvdisplay<br />
--- Logical volume ---<br />
LV Path /dev/VolGroupArray/backup<br />
LV Name backup<br />
VG Name VolGroupArray<br />
LV UUID Evy09z-i3TS-PgaE-SrpD-C2Lq-snb2-XhvaVW<br />
LV Write Access read/write<br />
LV Creation host, time falcon, 2012-06-25 14:33:00 -0700<br />
LV Status available<br />
# open 0<br />
LV Size 2.73 TiB<br />
Current LE 714343<br />
Segments 1<br />
Allocation inherit<br />
Read ahead sectors auto<br />
- currently set to 256<br />
Block device 253:0<br />
<br />
Check the status <br />
# cat /proc/mdstat<br />
Personalities : [linear]<br />
md0 : active linear sdd[2] sdc[1] sdb[0]<br />
2930286671 blocks super 1.2 0k rounding<br />
unused devices: <none><br />
<br />
''@TODO Need advice here - this portion needs testing, i got stuck here last time it tried this.''<br />
<br />
== Recreating a ZFS storage pool ==<br />
<br />
From my understanding one can add a drive to a vdev (Virtual Device, or array set) without losing data if it's a linear span, but you can't remove a drive from the vdev without first copying the files off, deleting the pool, and recreating it. This portion walks you through completely tearing down the array, freeing up the drives, and recreating either an mdadm jbod linear span or a RAIDZ. <br />
<br />
'''WARNING - Backup all your data before beginning'''<br />
<br />
=== List the pools ===<br />
<br />
get a list of the current pools<br />
#zpool list<br />
<br />
check status <br />
# zpool status<br />
<br />
list datasets<br />
#zfs list<br />
<br />
== Share mount to network using samba ==<br />
<br />
Configure samba to give user access to the share (good for network backups from any operating system, including windows backup) <br />
*add the following entry to /etc/samba/smb.conf and restart the samba daemon<br />
[backup]<br />
comment = backup drive<br />
path = /pool/backup<br />
valid users = user1,user2<br />
read only = No<br />
create mask = 0765<br />
wide links = Yes<br />
<br />
#rc.d restart samba<br />
<br />
modify the permissions on pool/backup so they are accessible over the network; I decided to add 'users' group accessibility<br />
# chown root:users backup<br />
# chmod g+w backup<br />
<br />
Alternatively<br />
# chown root:root backup/<br />
# chmod 755 backup/<br />
# chown root:users backup/<dataset1><br />
# chmod 775 backup/<dataset1><br />
<br />
== ZFS RAID Maintenance ==<br />
<br />
@TODO - this section is not finished<br />
<br />
==== Maintenance ====<br />
<br />
*To place a disk back online (see manual for this)<br />
#zpool online<br />
<br />
*To replace a disk (see manual for this)<br />
#zpool replace<br />
<br />
* If your pool goes down, i.e. one of your drives goes offline and there are not enough drives to complete replication, you can try the following<br />
- Check if the drive is initiated; you should see it in the list returned by the following command<br />
# blkid<br />
If it's not in the list:<br />
- Check all cable connections; the drive may not be mounted. A reboot may be needed. <br />
- Once you get the drive to appear, run the following commands.<br />
# zpool export <pool><br />
# zpool import <pool><br />
# zpool list<br />
If you get an error message <br />
"cannot import 'pool': one or more devices is currently unavailable. Destroy and re-create the pool from a backup source", try to export then import again.<br />
@todo more information is needed before suggestions are made at this point.<br />
<br />
== Help Needed ==<br />
<br />
If anybody has any suggestions, please chime in. <br />
<br />
I haven't figured out how to address the size of the array yet. For example, when I hook up only the 2TB with one 500GB as a raidz, it lets me, and the size reports 1.36TB. When I destroyed the pool and rebuilt it using all 3 drives (2TB, 500GB, 500GB) the size still reported 1.36TB (raidz sizes every member to the smallest drive in the vdev, which may explain this)<br />
<br />
--[[User:Wolfdogg|Wolfdogg]] ([[User talk:Wolfdogg|talk]]) 00:18, 25 June 2012 (UTC)<br />
<br />
= volnoti on KDE using alsamixer =<br />
<br />
@todo this script is for alsamixer and kde; more needs to be added to support other environments<br />
<br />
<br />
Create a script /usr/local/bin/sound.sh. This script will be called by another script placed in autostart.<br />
<br />
insert the following contents into the script<br />
#!/bin/bash<br />
<br />
#this script is made for volnoti<br />
<br />
# Configuration<br />
STEP="2" # Anything you like.<br />
UNIT="dB" # dB, %, etc.<br />
<br />
# Set volume<br />
SETVOL="/usr/bin/amixer -qc 0 set Master"<br />
SETHEADPHONE="/usr/bin/amixer -qc 0 set Headphone"<br />
<br />
case "$1" in<br />
"up")<br />
$SETVOL $STEP$UNIT+<br />
;;<br />
"down")<br />
$SETVOL $STEP$UNIT-<br />
;;<br />
"mute")<br />
$SETVOL toggle<br />
;;<br />
esac<br />
<br />
# Get current volume and state<br />
VOLUME=$(amixer get Master | grep 'Mono:' | cut -d ' ' -f 6 | sed -e 's/[^0-9]//g')<br />
STATE=$(amixer get Master | grep 'Mono:' | grep -o "\[off\]")<br />
<br />
# Show volume with volnoti<br />
if [[ -n $STATE ]]; then<br />
volnoti-show -m<br />
else<br />
volnoti-show $VOLUME<br />
# If headphones are in use, mute is treated a bit differently. Make sure the headphone channel follows the master mute.<br />
amixer -c 0 set Headphone unmute<br />
amixer -c 0 set Speaker unmute<br />
amixer -qc 0 set Speaker 100%<br />
fi<br />
<br />
exit 0<br />
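The VOLUME/STATE parsing in sound.sh depends on amixer's output format. The pipeline can be checked against a canned sample line (an assumption based on typical `amixer get Master` output; check your own amixer output if the numbers come out wrong), with no sound card needed:<br />

```shell
# Canned line imitating `amixer get Master` output.
SAMPLE="  Mono: Playback 42 [65%] [-22.50dB] [on]"

# Same pipeline as the script above: field 6 is the bracketed percentage,
# sed strips everything but the digits; STATE is non-empty only when muted.
VOLUME=$(echo "$SAMPLE" | grep 'Mono:' | cut -d ' ' -f 6 | sed -e 's/[^0-9]//g')
STATE=$(echo "$SAMPLE" | grep 'Mono:' | grep -o "\[off\]")

echo "volume=$VOLUME state=$STATE"
```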
<br />
save this script then set permissions<br />
<br />
#chown root:users /usr/local/bin/sound.sh<br />
#chmod 755 /usr/local/bin/sound.sh<br />
<br />
== xbindkeys ==<br />
<br />
install xbindkeys so that you can control volume with your keyboard<br />
pacman -S xbindkeys<br />
<br />
Logged in as user, create an xbindkeys config file ~/.xbindkeysrc with the following information. This example binds mute to F7, volume down to F8, and volume up to F9<br />
<br />
# increase volume<br />
"sh /usr/local/bin/sound.sh up"<br />
m:0x0 + c:75<br />
F9<br />
<br />
# Decrease volume<br />
"sh /usr/local/bin/sound.sh down"<br />
m:0x0 + c:74<br />
F8<br />
<br />
# Toggle mute<br />
"sh /usr/local/bin/sound.sh mute"<br />
m:0x0 + c:73<br />
F7<br />
<br />
#"amixer set Master playback 1+"<br />
<br />
== autostart ==<br />
<br />
Logged in as user, create the kde autostart script in ~/.kde4/Autostart. Name it anything, e.g. start-volnoti.sh<br />
#!/bin/bash<br />
xbindkeys<br />
volnoti<br />
<br />
Save this script. Next time kde starts, it will run this script since it's located in the autostart folder. It calls xbindkeys and volnoti, which will wait for keypresses to control alsamixer. <br />
<br />
Enjoy.<br />
<br />
= smbclient media stream issues using dolphin when accessing windows shares =<br />
<br />
== description ==<br />
I was having access problems when trying to access files via smbclient (samba as a client) on shares hosted by a win7 file server, '''only''' when the user also exists on the windows machine. It turns out the problem is possibly either a complicated mounting issue or a deep bug in dolphin. <br />
<br />
Before I delve into this, I also wanted to mention that the windows machine was set to not use "password protected sharing" http://www.sevenforums.com/tutorials/185429-password-protected-sharing-turn-off-windows-7-a.html. If you DO share with password protected sharing, then when you access the samba shares through dolphin it will issue a popup asking for a password; even with valid credentials you will still experience the issue, so don't spend too much time debugging by toggling password protected sharing, as it didn't seem to help either way.<br />
<br />
== the bug reproduced ==<br />
The way I was accessing the shares was through dolphin: clicking on the "Network" place, then the "Samba" link. There I would see a list of workgroups, click into those to access my files, and so forth. When I found a directory I wanted in my "Places", I would right click on it and add it to my places. <br />
<br />
When I accessed media files through either the link I created in my Places or the existing Network link, once I navigated to a video file (.avi in this case), no matter whether I chose VLC or mplayer, the file would have to cache in full before it would play. This meant that instead of the video starting right away, I sometimes had to wait 10-15 minutes before it would start, or I would get an error (VLC is unable to open the MRL 'smb://<server>/UserFiles/PublicArchive/movie.avi), depending on how the windows password protection was set, whether I was using vlc vs. mplayer, etc... Obviously something was wrong. I'm sure these symptoms ran much deeper than just video files; I remember it happening to audio files too, and I suspect it would happen to even text files or any file large enough to take more than 3 seconds to access across a network. <br />
<br />
Important note: this only happens when I am logged into KDE as a user that already exists on the Windows system whose publicly shared files I am trying to access. If I log in to KDE as a user that does not exist there, the files do not have to cache, but instead immediately start to stream as expected. <br />
<br />
Something really buggy is going on at this point. I thought maybe I would go into KDE System Settings > Sharing and set the default username and password, but this did not help. I tried several things, including reinstalling the network driver, reinstalling KDE over the top of itself, and running userdel on the user and cleaning out the home directory; nothing worked. <br />
<br />
So once again, as I have done in the past when trying to solve this issue, I decided to manually mount the share and gain access differently than using the Network icon in Dolphin. I was following the wiki as usual, and in the part about manually mounting shares I stumbled on a line mentioning the /mnt directory that I had seen so many times before. This time the cards lined up: I clicked through "Root" (in Places) using Dolphin, navigated my way through /mnt/smbnet to my files, and discovered that they play with no problem (no need to cache; they start streaming immediately).<br />
<br />
== The fix ==<br />
It appears that the "Network" symlink in Dolphin's 'Places' bar, at least the way it is currently set up in my version of KDE 4, is there to make life miserable: files accessed through it must be fully downloaded before your system has access to them. Don't access your network shares this way if they are on a Samba share. Instead, navigate through Dolphin's "Root" to /mnt/smbnet/<your-workgroup>/<your-server>/<your-fileshares>, right-click on one of those folders and choose "Add To Places"; you will then have a proper entry in your "Places" that accesses the share through /mnt.<br />
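For reference, the manual mount behind /mnt/smbnet can also be reproduced from the command line. This is a sketch, assuming guest access (as with password protected sharing turned off) and placeholder workgroup/server/share names, using the mount.cifs helper from the {{Pkg|cifs-utils}} package:<br />

```shell
# Hypothetical names: replace WORKGROUP, server and UserFiles with your own.
# Requires the cifs-utils package; run as root.
mkdir -p /mnt/smbnet/WORKGROUP/server/UserFiles
mount -t cifs //server/UserFiles /mnt/smbnet/WORKGROUP/server/UserFiles -o guest
```

Once mounted this way, players see the files as local, so they can seek and stream instead of caching the whole file first.<br />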
<br />
= Steps taken to repair Intel IbexPeak HDMI / IDT 92HD81B1X5 internal mic =<br />
<br />
[root@osprey wolfdogg]# rmmod snd-hda-intel -f && modprobe snd-hda-intel model=F1734<br />
[root@osprey wolfdogg]# rmmod snd-hda-intel -f && modprobe snd-hda-intel<br />
[root@osprey wolfdogg]# cat /proc/asound/card0/codec#2 | grep Codec<br />
cat: /proc/asound/card0/codec#2: No such file or directory<br />
[root@osprey wolfdogg]# cat /proc/asound/card0/codec#1 | grep Codec<br />
cat: /proc/asound/card0/codec#1: No such file or directory<br />
[root@osprey wolfdogg]# cat /proc/asound/card0/codec<br />
cat: /proc/asound/card0/codec: No such file or directory<br />
[root@osprey wolfdogg]# cd /proc/asound/card<br />
card0/ cards <br />
[root@osprey wolfdogg]# cd /proc/asound/card<br />
card0/ cards <br />
[root@osprey wolfdogg]# cd /proc/asound/card0/<br />
[root@osprey card0]# ll<br />
total 0<br />
-r--r--r-- 1 root root 0 Apr 6 00:49 codec#0<br />
-r--r--r-- 1 root root 0 Apr 6 00:49 codec#3<br />
-rw-r--r-- 1 root root 0 Apr 6 00:49 eld#3.0<br />
-r--r--r-- 1 root root 0 Apr 6 00:49 id<br />
dr-xr-xr-x 3 root root 0 Apr 6 00:49 pcm0c<br />
dr-xr-xr-x 3 root root 0 Apr 6 00:49 pcm0p<br />
dr-xr-xr-x 3 root root 0 Apr 6 00:49 pcm3p<br />
[root@osprey card0]# cat /proc/asound/card0/codec#0 | grep Codec<br />
Codec: IDT 92HD81B1X5<br />
[root@osprey card0]# cat /proc/asound/card0/codec#3 | grep Codec<br />
Codec: Intel IbexPeak HDMI<br />
[root@osprey card0]# cat /proc/asound/card0/eld#3.0 | grep Codec<br />
[root@osprey card0]# cat /proc/asound/card0/id | grep Codec<br />
[root@osprey card0]# cat /proc/asound/card0/pcm0c | grep Codec<br />
cat: /proc/asound/card0/pcm0c: Is a directory<br />
[root@osprey card0]# cat /proc/asound/card0/pcm0c | grep Codec<br />
cat: /proc/asound/card0/pcm0c: Is a directory<br />
[root@osprey card0]# pacman -S gstreamer0.10-plugins<br />
:: There are 5 members in group gstreamer0.10-plugins:<br />
:: Repository extra<br />
1) gstreamer0.10-bad-plugins 2) gstreamer0.10-base-plugins 3) gstreamer0.10-ffmpeg<br />
4) gstreamer0.10-good-plugins 5) gstreamer0.10-ugly-plugins <br />
<br />
Enter a selection (default=all): <br />
warning: gstreamer0.10-bad-plugins-0.10.23-3 is up to date -- reinstalling<br />
warning: gstreamer0.10-base-plugins-0.10.36-1 is up to date -- reinstalling<br />
warning: gstreamer0.10-ffmpeg-0.10.13-1 is up to date -- reinstalling<br />
resolving dependencies...<br />
looking for inter-conflicts...<br />
<br />
Targets (10): gstreamer0.10-ugly-0.10.19-5 libavc1394-0.5.4-1 libiec61883-1.2.0-3<br />
libsidplay-1.36.59-5 wavpack-4.60.1-2 gstreamer0.10-bad-plugins-0.10.23-3<br />
gstreamer0.10-base-plugins-0.10.36-1 gstreamer0.10-ffmpeg-0.10.13-1<br />
gstreamer0.10-good-plugins-0.10.31-1 gstreamer0.10-ugly-plugins-0.10.19-5<br />
<br />
Total Download Size: 1.00 MiB<br />
Total Installed Size: 13.08 MiB<br />
Net Upgrade Size: 3.63 MiB<br />
<br />
Proceed with installation? [Y/n] <br />
:: Retrieving packages from extra...<br />
gstreamer0.10-base-plugi... 165.3 KiB 963K/s 00:00 [#############################] 100%<br />
libavc1394-0.5.4-1-x86_64 32.0 KiB 759K/s 00:00 [#############################] 100%<br />
libiec61883-1.2.0-3-x86_64 37.3 KiB 829K/s 00:00 [#############################] 100%<br />
wavpack-4.60.1-2-x86_64 113.7 KiB 921K/s 00:00 [#############################] 100%<br />
gstreamer0.10-good-plugi... 327.3 KiB 1124K/s 00:00 [#############################] 100%<br />
gstreamer0.10-ugly-0.10.... 160.4 KiB 908K/s 00:00 [#############################] 100%<br />
libsidplay-1.36.59-5-x86_64 107.8 KiB 771K/s 00:00 [#############################] 100%<br />
gstreamer0.10-ugly-plugi... 84.4 KiB 727K/s 00:00 [#############################] 100%<br />
(10/10) checking package integrity [#############################] 100%<br />
(10/10) loading package files [#############################] 100%<br />
(10/10) checking for file conflicts [#############################] 100%<br />
(10/10) checking available disk space [#############################] 100%<br />
( 1/10) upgrading gstreamer0.10-bad-plugins [#############################] 100%<br />
( 2/10) upgrading gstreamer0.10-base-plugins [#############################] 100%<br />
( 3/10) upgrading gstreamer0.10-ffmpeg [#############################] 100%<br />
( 4/10) installing libavc1394 [#############################] 100%<br />
( 5/10) installing libiec61883 [#############################] 100%<br />
( 6/10) installing wavpack [#############################] 100%<br />
( 7/10) installing gstreamer0.10-good-plugins [#############################] 100%<br />
<br />
(gconftool-2:5520): GConf-WARNING **: Client failed to connect to the D-BUS daemon:<br />
Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken. <br />
( 8/10) installing gstreamer0.10-ugly [#############################] 100%<br />
( 9/10) installing libsidplay [#############################] 100%<br />
(10/10) installing gstreamer0.10-ugly-plugins [#############################] 100%<br />
[root@osprey card0]# rmmod snd-hda-intel -f && modprobe snd-hda-intel model=auto<br />
[root@osprey card0]# rmmod snd-hda-intel -f && modprobe snd-hda-intel model=auto<br />
[root@osprey card0]# rmmod snd-hda-intel -f && modprobe snd-hda-intel model=auto<br />
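The rmmod/modprobe override above only lasts until the next reboot. To persist the working option, it can go in a modprobe fragment; a sketch, with an arbitrary (hypothetical) filename:<br />

```
# /etc/modprobe.d/snd-hda-intel.conf  (hypothetical filename)
options snd-hda-intel model=auto
```

The option is then applied automatically every time the snd-hda-intel module is loaded.<br />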
<br />
= Git Remote Development =<br />
<br />
==Configure remote==<br />
===Set up remote repos===<br />
<br />
*Set up the bare repo and public served repo (for websites)<br />
<br />
$ ssh user@remote<br />
# mkdir /home/user/git/site.com.git (or /var/www/git/site.com etc.. if not shared hosting.)<br />
# cd /home/user/git/site.com.git<br />
# git init --bare --shared (or not shared)<br />
<br />
*Choose only one of the following options, a or b, not both: <br />
**a) If you want to always edit files locally and never have to pull on the served location; this essentially automates the file uploads upon push. <br />
**b) If you want the ability to also edit files directly on the served location, keep it in its own git repository by cloning the bare repo into the site's served directory. After each push from your local machine, you will need to do a pull in this remote clone before your changes go live. <br />
<br />
===a) Post-receive hooks for automation===<br />
<br />
'''First read above: only do step a) or b), not both. You have to make a choice depending on how you plan to use your remote server.''' <br />
<br />
*Make post-receive hook in bare repo which will ship files automatically into its served location<br />
<br />
# cat > hooks/post-receive<br />
#!/bin/sh<br />
GIT_WORK_TREE=/home/user/public_html/site.com/ git checkout -f <br />
(press Ctrl+d to end input and save)<br />
<br />
# chmod +x hooks/post-receive (run from inside the bare repo)<br />
# mkdir /home/user/git/site.com<br />
# exit<br />
<br />
===b) Clone for more control===<br />
<br />
'''First read above: only do step a) or b), not both. You have to make a choice depending on how you plan to use your remote server.'''<br />
<br />
*Clone remote bare into directory that it will be served out of to gain more control over editing from both locations. <br />
<br />
# cd /home/user/public_html<br />
# git clone /home/user/git/site.com.git <br />
# exit<br />
<br />
=== Start tracking on local===<br />
*Start tracking on local if you haven't already<br />
$ cd ~/projects/site.com<br />
$ git init<br />
<br />
===Now set up connection to remote origin on your local git repo===<br />
$ cd ~/projects/site.com<br />
$ git remote -v (check the current remotes) <br />
<br />
*Add your bare repo as the new origin. <br />
*Note: if your origin is already intact, skip the remove and add origin steps below.<br />
$ git remote rm origin (if it is still attached to GitHub or someplace else that you don't want the files going)<br />
$ git remote add origin user@remote:/home/user/git/site.com.git<br />
<br />
*Push your files<br />
$ git push origin master (pushes to the bare repo; with option a, the hook then checks it out to the served location) <br />
<br />
===Ready to develop on local=== <br />
$ cd ~/projects/site.com<br />
$ vim index.html<br />
$ git add -A<br />
$ git commit -am 'first commit message'<br />
$ git status<br />
$ git log<br />
<br />
===After commiting run the following===<br />
$ git push<br />
<br />
*You're done. <br />
<br />
*If you chose option b, you now need to run the following commands:<br />
$ ssh user@remote<br />
# cd /home/user/public_html/site.com<br />
# git pull<br />
*You're done.<br />
<br />
= PHP Xdebug =<br />
<br />
This topic covers installing Xdebug on a LAMP server.<br />
<br />
== Installation ==<br />
pacman -S xdebug<br />
== Configuration ==<br />
* add the following line to your php.ini at the bottom of the extensions list<br />
zend_extension="/lib64/php/modules/xdebug.so"<br />
* or use the non-64-bit path if needed<br />
zend_extension="/lib/php/modules/xdebug.so"<br />
* restart your server<br />
systemctl restart httpd<br />
* ensure Xdebug is enabled in the phpinfo output<br />
php -i | grep xdebug | less<br />
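Beyond loading the extension, remote debugging from an IDE typically needs a few more php.ini settings. A minimal sketch for the Xdebug 2.x versions contemporary with this guide; the host value is an assumption standing in for your own IDE machine's IP:<br />

```ini
zend_extension="/lib64/php/modules/xdebug.so"
xdebug.remote_enable = on         ; allow the debugger to connect back to the IDE
xdebug.remote_host = 192.168.1.10 ; IP of the machine running the IDE (assumption)
xdebug.remote_port = 9000         ; Xdebug 2.x default port
```

Restart httpd after changing these, and re-check with php -i | grep xdebug.<br />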
<br />
== PHPStorm usage ==<br />
This section covers configuring Xdebug for development from a separate development machine that has PHPStorm installed, connecting to the LAMP server. <br />
== References ==<br />
<br />
xdebug docs<br />
http://xdebug.org/docs/<br />
<br />
xdebug checker http://xdebug.org/find-binary.php<br />
<br />
troubleshooting info<br />
http://stackoverflow.com/questions/20752260/trouble-setting-up-and-debugging-php-storm-project-from-existing-files-in-mounte<br />
<br />
phpstorm zero configuration<br />
http://blog.jetbrains.com/phpstorm/2013/07/webinar-recording-debugging-php-with-phpstorm/<br />
<br />
phpstorm configurations<br />
https://www.jetbrains.com/phpstorm/webhelp/configuring-xdebug.html</div>Wolfdogghttps://wiki.archlinux.org/index.php?title=OpenLDAP&diff=370358OpenLDAP2015-04-20T17:32:26Z<p>Wolfdogg: /* The server */</p>
<hr />
<div>[[Category:Networking]]<br />
[[ja:openLDAP]]<br />
[[ru:openLDAP]]<br />
[[zh-cn:OpenLDAP]]<br />
{{Related articles start}}<br />
{{Related|LDAP Authentication}}<br />
{{Related|LDAP Hosts}}<br />
{{Related articles end}}<br />
<br />
OpenLDAP is an open-source implementation of the LDAP protocol. An LDAP server basically is a non-relational database which is optimised for accessing, but not writing, data. It is mainly used as an address book (for e.g. email clients) or authentication backend to various services (such as Samba, where it is used to emulate a domain controller, or [[LDAP Authentication|Linux system authentication]], where it replaces {{ic|/etc/passwd}}) and basically holds the user data.<br />
<br />
Commands related to OpenLDAP that begin with {{ic|ldap}} (like {{ic|ldapsearch}}) are client-side utilities, while commands that begin with {{ic|slap}} (like {{ic|slapcat}}) are server-side.<br />
<br />
Directory services are an enormous topic. Configuration can therefore be complex. This page is a starting point for a basic OpenLDAP installation and a sanity check. If you are totally new to those concepts, [http://www.brennan.id.au/20-Shared_Address_Book_LDAP.html this] is a good introduction that is easy to understand and will get you started, even if you are new to everything LDAP.<br />
<br />
== Installation ==<br />
<br />
OpenLDAP contains both an LDAP server and a client. Install it with the package {{Pkg|openldap}}, available in the [[official repositories]].<br />
<br />
== Configuration ==<br />
<br />
=== The server ===<br />
<br />
{{Note|If you already have an OpenLDAP database on your machine, remove it by deleting everything inside {{ic|/var/lib/openldap/openldap-data/}}.}}<br />
<br />
The server configuration file is located at {{ic|/etc/openldap/slapd.conf}}.<br />
<br />
Edit the suffix and rootdn. The suffix typically is your domain name but it does not have to be. It depends on how you use your directory. We will use ''example'' for the domain name, and ''com'' for the tld. The rootdn is your LDAP administrator's name (we will use ''root'' here).<br />
{{bc|<nowiki><br />
suffix "dc=example,dc=com"<br />
rootdn "cn=root,dc=example,dc=com"<br />
</nowiki>}}<br />
<br />
Now we delete the default root password and create a strong one:<br />
# sed -i "/rootpw/ d" /etc/openldap/slapd.conf #find the line with rootpw and delete it<br />
# echo "rootpw $(slappasswd)" >> /etc/openldap/slapd.conf #add a line which includes the hashed password output from slappasswd<br />
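The delete-and-append above can be sanity-checked on a throwaway file first; a sketch using a hypothetical hash in place of the real slappasswd output:<br />

```shell
# Demonstration of the rootpw swap on a temporary copy (hypothetical hash shown);
# on the real system operate on /etc/openldap/slapd.conf and use "$(slappasswd)".
conf=$(mktemp)
printf 'suffix "dc=example,dc=com"\nrootpw secret\n' > "$conf"
sed -i "/rootpw/ d" "$conf"                 # delete the old rootpw line
echo 'rootpw {SSHA}examplehash' >> "$conf"  # append the hashed password line
grep '^rootpw' "$conf"                      # prints: rootpw {SSHA}examplehash
rm "$conf"
```

The same two commands then apply safely to the real slapd.conf.<br />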
<br />
You will likely want to add some typically used [http://www.openldap.org/doc/admin24/schema.html schemas] to the top of {{ic|slapd.conf}}:<br />
# cp /usr/share/doc/samba/examples/LDAP/samba.schema /etc/openldap/schema<br />
{{bc|<br />
include /etc/openldap/schema/cosine.schema<br />
include /etc/openldap/schema/inetorgperson.schema<br />
include /etc/openldap/schema/nis.schema<br />
include /etc/openldap/schema/samba.schema<br />
}}<br />
<br />
You will likely want to add some typically used [http://www.openldap.org/doc/admin24/tuning.html#Indexes indexes] to the bottom of {{ic|slapd.conf}}:<br />
{{bc|<br />
index uid pres,eq<br />
index mail pres,sub,eq<br />
index cn pres,sub,eq<br />
index sn pres,sub,eq<br />
index dc eq<br />
}}<br />
<br />
Now prepare the database directory. You will need to copy the default config file and set the proper ownership:<br />
# cp /etc/openldap/DB_CONFIG.example /var/lib/openldap/openldap-data/DB_CONFIG<br />
# chown ldap:ldap /var/lib/openldap/openldap-data/DB_CONFIG<br />
<br />
{{Note|With OpenLDAP 2.4 the configuration of {{ic|slapd.conf}} is deprecated. From this version on all configuration settings are stored in {{ic|/etc/openldap/slapd.d/}}.}}<br />
<br />
To store the recent changes in {{ic|slapd.conf}} to the new {{ic|/etc/openldap/slapd.d/}} configuration settings, we have to delete the old configuration files first; do this every time you change the configuration:<br />
<br />
# rm -rf /etc/openldap/slapd.d/*<br />
<br />
<br />
(if you do not have a database yet, you might need to create one by starting and stopping the {{ic|slapd.service}} [[systemd#Using units|using systemd]] )<br />
<br />
Then we generate the new configuration with:<br />
<br />
# slaptest -f /etc/openldap/slapd.conf -F /etc/openldap/slapd.d/<br />
<br />
The above command has to be run every time you change {{ic|slapd.conf}}. Check whether everything succeeded; the message "bdb_monitor_db_open: monitoring disabled; configure monitor database to enable" can be ignored. <br />
<br />
Change ownership recursively on the new files and directory in /etc/openldap/slapd.d:<br />
<br />
# chown -R ldap:ldap /etc/openldap/slapd.d<br />
<br />
{{note|Index the directory after you populate it. You should stop slapd before doing this.<br />
# slapindex<br />
# chown ldap:ldap /var/lib/openldap/openldap-data/*<br />
}}<br />
<br />
Finally, start the slapd daemon with {{ic|slapd.service}} using systemd.<br />
<br />
=== The client ===<br />
The client config file is located at {{ic|/etc/openldap/ldap.conf}}. <br />
<br />
It is quite simple: you will only have to alter {{ic|BASE}} to reflect the suffix of the server, and {{ic|URI}} to reflect the address of the server, like:<br />
<br />
{{hc|/etc/openldap/ldap.conf|2=<br />
BASE dc=example,dc=com<br />
URI ldap://localhost<br />
}}<br />
<br />
If you decide to use SSL:<br />
<br />
* The protocol (ldap or ldaps) in the {{ic|URI}} entry has to conform with the slapd configuration <br />
* If you decide to use self-signed certificates, add a {{ic|TLS_REQCERT allow}} line to {{ic|ldap.conf}}<br />
<br />
=== Test your new OpenLDAP installation ===<br />
<br />
This is easy, just run the command below:<br />
$ ldapsearch -x '(objectclass=*)'<br />
<br />
Or authenticating as the rootdn (replacing {{ic|-x}} by {{ic|-D <user> -W}}), using the example configuration we had above:<br />
$ ldapsearch -D "cn=root,dc=example,dc=com" -W '(objectclass=*)'<br />
<br />
Now you should see some information about your database.<br />
<br />
=== OpenLDAP over TLS ===<br />
{{Note|[http://www.openldap.org/doc/admin24/ upstream documentation] is much more useful/complete than this section}}<br />
<br />
If you access the OpenLDAP server over the network and especially if you have sensitive data stored on the server you run the risk of someone sniffing your data which is sent clear-text. The next part will guide you on how to setup an SSL connection between the LDAP server and the client so the data will be sent encrypted.<br />
<br />
In order to use TLS, you must have a certificate. For testing purposes, a ''self-signed'' certificate will suffice. To learn more about certificates, see [[OpenSSL]].<br />
<br />
{{Warning|OpenLDAP cannot use a certificate that has a password associated to it.}}<br />
<br />
==== Create a self-signed certificate ====<br />
To create a ''self-signed'' certificate, type the following:<br />
$ openssl req -new -x509 -nodes -out slapdcert.pem -keyout slapdkey.pem -days 365<br />
<br />
You will be prompted for information about your LDAP server. Much of the information can be left blank. The most important information is the common name. This must be set to the DNS name of your LDAP server. If your LDAP server's IP address resolves to example.org but its server certificate shows a CN of bad.example.org, LDAP clients will reject the certificate and will be unable to negotiate TLS connections (apparently the results are wholly unpredictable).<br />
<br />
Now that the certificate files have been created copy them to {{ic|/etc/openldap/ssl/}} (create this directory if it does not exist) and secure them. <br />
{{ic|slapdcert.pem}} must be world readable because it contains the public key. {{ic|slapdkey.pem}} on the other hand should only be readable for the ldap user for security reasons:<br />
# mv slapdcert.pem slapdkey.pem /etc/openldap/ssl/<br />
# chmod -R 755 /etc/openldap/ssl/<br />
# chmod 400 /etc/openldap/ssl/slapdkey.pem<br />
# chmod 444 /etc/openldap/ssl/slapdcert.pem<br />
# chown ldap /etc/openldap/ssl/slapdkey.pem<br />
<br />
==== Configure slapd for SSL ====<br />
Edit the daemon configuration file ({{ic|/etc/openldap/slapd.conf}}) to tell LDAP where the certificate files reside by adding the following lines:<br />
{{bc|<br />
# Certificate/SSL Section<br />
TLSCipherSuite HIGH:MEDIUM:-SSLv2:-SSLv3<br />
TLSCertificateFile /etc/openldap/ssl/slapdcert.pem<br />
TLSCertificateKeyFile /etc/openldap/ssl/slapdkey.pem<br />
}}<br />
<br />
The TLSCipherSuite specifies a list of OpenSSL ciphers from which slapd will choose when negotiating TLS connections, in decreasing order of preference. In addition to those specific ciphers, you can use any of the wildcards supported by OpenSSL. '''NOTE:''' HIGH, MEDIUM, and +SSLv2 are all wildcards. <br />
<br />
{{Note|To see which ciphers are supported by your local OpenSSL installation, type the following: {{ic|openssl ciphers -v ALL}} }}<br />
<br />
Regenerate the configuration directory:<br />
# rm -rf /etc/openldap/slapd.d/* # erase old config settings<br />
# slaptest -f /etc/openldap/slapd.conf -F /etc/openldap/slapd.d/ # generate new config directory from config file<br />
# chown -R ldap:ldap /etc/openldap/slapd.d # Change ownership recursively to ldap on the config directory<br />
<br />
==== Start slapd with SSL ====<br />
You will have to edit {{ic|slapd.service}} to change the protocol slapd listens on.<br />
<br />
First, disable {{ic|slapd.service}} if it is enabled.<br />
<br />
Then, copy the stock service to {{ic|/etc/systemd/system/}}:<br />
# cp /usr/lib/systemd/system/slapd.service /etc/systemd/system/<br />
<br />
Edit it, and change {{ic|ExecStart}} to:<br />
{{hc|/etc/systemd/system/slapd.service|<nowiki><br />
ExecStart=/usr/bin/slapd -u ldap -g ldap -h "ldaps:///"</nowiki>}}<br />
<br />
Localhost connections do not need to use SSL. So, if you want to access the server locally you should change the {{ic|ExecStart}} line to:<br />
ExecStart=/usr/bin/slapd -u ldap -g ldap -h "ldap://127.0.0.1 ldaps:///"<br />
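Alternatively, instead of copying the whole unit file, the same override can be expressed as a systemd drop-in, which keeps working when the stock unit is updated by the package. A sketch; the drop-in filename is arbitrary:<br />

```ini
# /etc/systemd/system/slapd.service.d/listen.conf  (hypothetical filename)
[Service]
ExecStart=
ExecStart=/usr/bin/slapd -u ldap -g ldap -h "ldap://127.0.0.1 ldaps:///"
```

The empty ExecStart= line clears the stock command before the replacement is added; run systemctl daemon-reload afterwards.<br />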
<br />
Then reenable and start it:<br />
# systemctl daemon-reload<br />
# systemctl restart slapd.service<br />
<br />
If {{ic|slapd}} started successfully you can enable it.<br />
<br />
{{Note|If you created a self-signed certificate above, be sure to add {{ic|TLS_REQCERT allow}} to {{ic|/etc/openldap/ldap.conf}} on the client, or it will not be able to connect to the server.}}<br />
<br />
== Next Steps ==<br />
<br />
You now have a basic LDAP installation. The next step is to design your directory. The design is heavily dependent on what you are using it for. If you are new to LDAP, consider starting with a directory design recommended by the specific client services that will use the directory (PAM, [[Postfix]], etc).<br />
<br />
A directory design for system authentication is covered in the [[LDAP Authentication]] article.<br />
<br />
A nice web frontend is [[phpLDAPadmin]].<br />
<br />
== Troubleshooting ==<br />
<br />
=== Client Authentication Checking ===<br />
If you cannot connect to your server, test non-secure authentication with:<br />
<br />
$ ldapsearch -x -H ldap://ldapservername:389 -D cn=Manager,dc=example,dc=exampledomain<br />
<br />
and for TLS secured authentication with:<br />
<br />
$ ldapsearch -x -H ldaps://ldapservername:636 -D cn=Manager,dc=example,dc=exampledomain<br />
<br />
=== LDAP Server Stops Suddenly ===<br />
<br />
If you notice that slapd seems to start but then stops, try running:<br />
<br />
# chown ldap:ldap /var/lib/openldap/openldap-data/*<br />
<br />
to allow slapd write access to its data directory as the user "ldap".<br />
<br />
== See Also ==<br />
* [http://www.openldap.org/doc/admin24/ Official OpenLDAP Software 2.4 Administrator's Guide]<br />
* [[phpLDAPadmin]] is a web interface tool in the style of phpMyAdmin.<br />
* [[LDAP Authentication]]<br />
* {{AUR|apachedirectorystudio}} from the [[Arch User Repository]] is an Eclipse-based LDAP viewer. Works perfectly with OpenLDAP installations.</div>Wolfdogghttps://wiki.archlinux.org/index.php?title=OpenLDAP&diff=370357OpenLDAP2015-04-20T17:29:48Z<p>Wolfdogg: /* The server */ added samba support config include</p>
<hr />
<div>[[Category:Networking]]<br />
[[ja:openLDAP]]<br />
[[ru:openLDAP]]<br />
[[zh-cn:OpenLDAP]]<br />
{{Related articles start}}<br />
{{Related|LDAP Authentication}}<br />
{{Related|LDAP Hosts}}<br />
{{Related articles end}}<br />
<br />
OpenLDAP is an open-source implementation of the LDAP protocol. An LDAP server basically is a non-relational database which is optimised for accessing, but not writing, data. It is mainly used as an address book (for e.g. email clients) or authentication backend to various services (such as Samba, where it is used to emulate a domain controller, or [[LDAP Authentication|Linux system authentication]], where it replaces {{ic|/etc/passwd}}) and basically holds the user data.<br />
<br />
Commands related to OpenLDAP that begin with {{ic|ldap}} (like {{ic|ldapsearch}}) are client-side utilities, while commands that begin with {{ic|slap}} (like {{ic|slapcat}}) are server-side.<br />
<br />
Directory services are an enormous topic. Configuration can therefore be complex. This page is a starting point for a basic OpenLDAP installation and a sanity check. If you are totally new to those concepts, [http://www.brennan.id.au/20-Shared_Address_Book_LDAP.html this] is a good introduction that is easy to understand and will get you started, even if you are new to everything LDAP.<br />
<br />
== Installation ==<br />
<br />
OpenLDAP contains both an LDAP server and a client. Install it with the package {{Pkg|openldap}}, available in the [[official repositories]].<br />
<br />
== Configuration ==<br />
<br />
=== The server ===<br />
<br />
{{Note|If you already have an OpenLDAP database on your machine, remove it by deleting everything inside {{ic|/var/lib/openldap/openldap-data/}}.}}<br />
<br />
The server configuration file is located at {{ic|/etc/openldap/slapd.conf}}.<br />
<br />
Edit the suffix and rootdn. The suffix typically is your domain name but it does not have to be. It depends on how you use your directory. We will use ''example'' for the domain name, and ''com'' for the tld. The rootdn is your LDAP administrator's name (we will use ''root'' here).<br />
{{bc|<nowiki><br />
suffix "dc=example,dc=com"<br />
rootdn "cn=root,dc=example,dc=com"<br />
</nowiki>}}<br />
<br />
Now we delete the default root password and create a strong one:<br />
# sed -i "/rootpw/ d" /etc/openldap/slapd.conf #find the line with rootpw and delete it<br />
# echo "rootpw $(slappasswd)" >> /etc/openldap/slapd.conf #add a line which includes the hashed password output from slappasswd<br />
<br />
You will likely want to add some typically used [http://www.openldap.org/doc/admin24/schema.html schemas] to the top of {{ic|slapd.conf}}:<br />
# cp /usr/share/doc/samba/examples/LDAP/samba.schema /etc/openldap/schema<br />
{{bc|<br />
include /etc/openldap/schema/cosine.schema<br />
include /etc/openldap/schema/inetorgperson.schema<br />
include /etc/openldap/schema/nis.schema<br />
include /etc/openldap/schema/samba.schema<br />
}}<br />
<br />
You will likely want to add some typically used [http://www.openldap.org/doc/admin24/tuning.html#Indexes indexes] to the bottom of {{ic|slapd.conf}}:<br />
{{bc|<br />
index uid pres,eq<br />
index mail pres,sub,eq<br />
index cn pres,sub,eq<br />
index sn pres,sub,eq<br />
index dc eq<br />
}}<br />
<br />
Now prepare the database directory. You will need to copy the default config file and set the proper ownership:<br />
# cp /etc/openldap/DB_CONFIG.example /var/lib/openldap/openldap-data/DB_CONFIG<br />
# chown ldap:ldap /var/lib/openldap/openldap-data/DB_CONFIG<br />
<br />
{{Note|With OpenLDAP 2.4 the configuration of {{ic|slapd.conf}} is deprecated. From this version on all configuration settings are stored in {{ic|/etc/openldap/slapd.d/}}.}}<br />
<br />
To store the recent changes in {{ic|slapd.conf}} to the new {{ic|/etc/openldap/slapd.d/}} configuration settings, we have to delete the old configuration files first; do this every time you change the configuration:<br />
<br />
# rm -rf /etc/openldap/slapd.d/*<br />
<br />
<br />
(if you do not have a database yet, you might need to create one by starting and stopping the {{ic|slapd.service}} [[systemd#Using units|using systemd]] )<br />
<br />
Then we generate the new configuration with:<br />
<br />
# slaptest -f /etc/openldap/slapd.conf -F /etc/openldap/slapd.d/<br />
<br />
The above command has to be run every time you change {{ic|slapd.conf}}. Check whether everything succeeded; the message "bdb_monitor_db_open: monitoring disabled; configure monitor database to enable" can be ignored. <br />
<br />
Change ownership recursively on the new files and directory in /etc/openldap/slapd.d:<br />
<br />
# chown -R ldap:ldap /etc/openldap/slapd.d<br />
<br />
{{note|Index the directory after you populate it. You should stop slapd before doing this.<br />
# slapindex<br />
# chown ldap:ldap /var/lib/openldap/openldap-data/*<br />
}}<br />
<br />
Finally, start the slapd daemon with {{ic|slapd.service}} using systemd.<br />
<br />
=== The client ===<br />
The client config file is located at {{ic|/etc/openldap/ldap.conf}}. <br />
<br />
It is quite simple: you will only have to alter {{ic|BASE}} to reflect the suffix of the server, and {{ic|URI}} to reflect the address of the server, like:<br />
<br />
{{hc|/etc/openldap/ldap.conf|2=<br />
BASE dc=example,dc=com<br />
URI ldap://localhost<br />
}}<br />
<br />
If you decide to use SSL:<br />
<br />
* The protocol (ldap or ldaps) in the {{ic|URI}} entry has to conform with the slapd configuration <br />
* If you decide to use self-signed certificates, add a {{ic|TLS_REQCERT allow}} line to {{ic|ldap.conf}}<br />
<br />
=== Test your new OpenLDAP installation ===<br />
<br />
This is easy, just run the command below:<br />
$ ldapsearch -x '(objectclass=*)'<br />
<br />
Or authenticating as the rootdn (replacing {{ic|-x}} by {{ic|-D <user> -W}}), using the example configuration we had above:<br />
$ ldapsearch -D "cn=root,dc=example,dc=com" -W '(objectclass=*)'<br />
<br />
Now you should see some information about your database.<br />
<br />
=== OpenLDAP over TLS ===<br />
{{Note|[http://www.openldap.org/doc/admin24/ upstream documentation] is much more useful/complete than this section}}<br />
<br />
If you access the OpenLDAP server over the network and especially if you have sensitive data stored on the server you run the risk of someone sniffing your data which is sent clear-text. The next part will guide you on how to setup an SSL connection between the LDAP server and the client so the data will be sent encrypted.<br />
<br />
In order to use TLS, you must have a certificate. For testing purposes, a ''self-signed'' certificate will suffice. To learn more about certificates, see [[OpenSSL]].<br />
<br />
{{Warning|OpenLDAP cannot use a certificate that has a password associated to it.}}<br />
<br />
==== Create a self-signed certificate ====<br />
To create a ''self-signed'' certificate, type the following:<br />
$ openssl req -new -x509 -nodes -out slapdcert.pem -keyout slapdkey.pem -days 365<br />
<br />
You will be prompted for information about your LDAP server. Much of the information can be left blank. The most important information is the common name. This must be set to the DNS name of your LDAP server. If your LDAP server's IP address resolves to example.org but its server certificate shows a CN of bad.example.org, LDAP clients will reject the certificate and will be unable to negotiate TLS connections (apparently the results are wholly unpredictable).<br />
<br />
Now that the certificate files have been created copy them to {{ic|/etc/openldap/ssl/}} (create this directory if it does not exist) and secure them. <br />
{{ic|slapdcert.pem}} must be world readable because it contains the public key. {{ic|slapdkey.pem}} on the other hand should only be readable for the ldap user for security reasons:<br />
# mv slapdcert.pem slapdkey.pem /etc/openldap/ssl/<br />
# chmod -R 755 /etc/openldap/ssl/<br />
# chmod 400 /etc/openldap/ssl/slapdkey.pem<br />
# chmod 444 /etc/openldap/ssl/slapdcert.pem<br />
# chown ldap /etc/openldap/ssl/slapdkey.pem<br />
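The resulting permission scheme can be sketched and verified on scratch files; paths under /tmp stand in for {{ic|/etc/openldap/ssl/}}, and the {{ic|chown}} to the ldap user is omitted since it requires root and an existing ldap user:<br />

```shell
# Reproduce the permission scheme on scratch files.
mkdir -p /tmp/openldap-ssl
touch /tmp/openldap-ssl/slapdcert.pem /tmp/openldap-ssl/slapdkey.pem
chmod 755 /tmp/openldap-ssl                # directory traversable by everyone
chmod 400 /tmp/openldap-ssl/slapdkey.pem   # private key: owner read-only
chmod 444 /tmp/openldap-ssl/slapdcert.pem  # certificate: world-readable
stat -c '%a %n' /tmp/openldap-ssl/*.pem    # print the resulting modes
```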
<br />
==== Configure slapd for SSL ====<br />
Edit the daemon configuration file ({{ic|/etc/openldap/slapd.conf}}) to tell LDAP where the certificate files reside by adding the following lines:<br />
{{bc|<br />
# Certificate/SSL Section<br />
TLSCipherSuite HIGH:MEDIUM:-SSLv2:-SSLv3<br />
TLSCertificateFile /etc/openldap/ssl/slapdcert.pem<br />
TLSCertificateKeyFile /etc/openldap/ssl/slapdkey.pem<br />
}}<br />
<br />
{{ic|TLSCipherSuite}} specifies a list of OpenSSL ciphers from which slapd will choose when negotiating TLS connections, in decreasing order of preference. In addition to specific ciphers, you can use any of the cipher-string keywords supported by OpenSSL: {{ic|HIGH}} and {{ic|MEDIUM}} above are such keywords, and prefixing a keyword with {{ic|-}} (as in {{ic|-SSLv2}}) removes the matching ciphers from the list.<br />
<br />
{{Note|To see which ciphers are supported by your local OpenSSL installation, type the following: {{ic|openssl ciphers -v ALL}} }}<br />
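To sanity-check a cipher string before putting it in {{ic|slapd.conf}}, you can ask OpenSSL to expand it; which keywords are accepted varies with the installed OpenSSL version, so this sketch uses only the {{ic|HIGH:MEDIUM}} part:<br />

```shell
# Expand a cipher string into the actual cipher list OpenSSL would offer.
# HIGH and MEDIUM are keyword wildcards; the exact output depends on the
# locally installed OpenSSL version.
openssl ciphers -v 'HIGH:MEDIUM'
```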
<br />
Regenerate the configuration directory:<br />
# rm -rf /etc/openldap/slapd.d/* # erase old config settings<br />
# slaptest -f /etc/openldap/slapd.conf -F /etc/openldap/slapd.d/ # generate new config directory from config file<br />
# chown -R ldap:ldap /etc/openldap/slapd.d # Change ownership recursively to ldap on the config directory<br />
<br />
==== Start slapd with SSL ====<br />
You will have to edit {{ic|slapd.service}} to change the protocol slapd listens on.<br />
<br />
First, disable {{ic|slapd.service}} if it is enabled.<br />
<br />
Then, copy the stock service to {{ic|/etc/systemd/system/}}:<br />
# cp /usr/lib/systemd/system/slapd.service /etc/systemd/system/<br />
<br />
Edit it, changing {{ic|ExecStart}} to:<br />
{{hc|/etc/systemd/system/slapd.service|<nowiki><br />
ExecStart=/usr/bin/slapd -u ldap -g ldap -h "ldaps:///"</nowiki>}}<br />
<br />
Localhost connections do not need to use SSL. So, if you want to access the server locally you should change the {{ic|ExecStart}} line to:<br />
ExecStart=/usr/bin/slapd -u ldap -g ldap -h "ldap://127.0.0.1 ldaps:///"<br />
<br />
Then reenable and start it:<br />
# systemctl daemon-reload<br />
# systemctl restart slapd.service<br />
<br />
If {{ic|slapd}} started successfully you can enable it.<br />
<br />
{{Note|If you created a self-signed certificate above, be sure to add {{ic|TLS_REQCERT allow}} to {{ic|/etc/openldap/ldap.conf}} on the client, or it will not be able to connect to the server.}}<br />
<br />
== Next Steps ==<br />
<br />
You now have a basic LDAP installation. The next step is to design your directory. The design is heavily dependent on what you are using it for. If you are new to LDAP, consider starting with a directory design recommended by the specific client services that will use the directory (PAM, [[Postfix]], etc).<br />
<br />
For a directory design suited to system authentication, see the [[LDAP Authentication]] article.<br />
<br />
A nice web frontend is [[phpLDAPadmin]].<br />
<br />
== Troubleshooting ==<br />
<br />
=== Client Authentication Checking ===<br />
If you cannot connect to your server, test non-secure authentication with:<br />
<br />
$ ldapsearch -x -H ldap://ldapservername:389 -D cn=Manager,dc=example,dc=com -W<br />
<br />
and TLS-secured authentication with:<br />
<br />
$ ldapsearch -x -H ldaps://ldapservername:636 -D cn=Manager,dc=example,dc=com -W<br />
<br />
=== LDAP Server Stops Suddenly ===<br />
<br />
If you notice that slapd seems to start but then stops, try running:<br />
<br />
# chown ldap:ldap /var/lib/openldap/openldap-data/*<br />
<br />
to allow slapd write access to its data directory as the user "ldap".<br />
<br />
== See Also ==<br />
* [http://www.openldap.org/doc/admin24/ Official OpenLDAP Software 2.4 Administrator's Guide]<br />
* [[phpLDAPadmin]] is a web interface tool in the style of phpMyAdmin.<br />
* [[LDAP Authentication]]<br />
* {{AUR|apachedirectorystudio}} from the [[Arch User Repository]] is an Eclipse-based LDAP viewer. Works perfectly with OpenLDAP installations.</div>Wolfdogghttps://wiki.archlinux.org/index.php?title=Jenkins&diff=369832Jenkins2015-04-15T22:51:46Z<p>Wolfdogg: /* Install */ updated instructs.</p>
<hr />
<div>[[Category:Package development]]<br />
[[Category:Web Server]]<br />
<br />
= Jenkins =<br />
<br />
== About ==<br />
Jenkins is a continuous integration server: a Java application that automates building, testing and deploying your projects. Your projects do not need to contain any Java themselves, so PHP applications, Node.js projects and others can all benefit from it. The Jenkins website explains its uses in more detail: https://jenkins-ci.org/<br />
<br />
== Install ==<br />
# pacman -Syu jenkins<br />
<br />
== Configuring == <br />
<br />
The configuration file is located at {{ic|/etc/conf.d/jenkins}}; open it and look it over.<br />
JAVA=/usr/bin/java<br />
JAVA_ARGS=-Xmx512m<br />
JAVA_OPTS=<br />
JENKINS_USER=jenkins<br />
JENKINS_HOME=/var/lib/jenkins<br />
JENKINS_WAR=/usr/share/java/jenkins/jenkins.war<br />
JENKINS_WEBROOT=--webroot=/var/cache/jenkins<br />
JENKINS_PORT=--httpPort=8090<br />
JENKINS_AJPPORT=--ajp13Port=-1<br />
JENKINS_OPTS=<br />
JENKINS_COMMAND_LINE="$JAVA $JAVA_ARGS $JAVA_OPTS -jar $JENKINS_WAR $JENKINS_WEBROOT $JENKINS_PORT $JENKINS_AJPPORT $JENKINS_OPTS"<br />
Note the location of the war file; to run Jenkins manually, you can change to that directory and launch the war from there.<br />
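As a sketch of how these variables fit together: the packaged wrapper simply concatenates them into one command line, so overriding a variable such as {{ic|JENKINS_PORT}} changes how Jenkins is started (values below are copied from the listing above):<br />

```shell
# Mirror of the /etc/conf.d/jenkins variables and how they are
# concatenated into the final command line.
JAVA=/usr/bin/java
JAVA_ARGS=-Xmx512m
JENKINS_WAR=/usr/share/java/jenkins/jenkins.war
JENKINS_WEBROOT=--webroot=/var/cache/jenkins
JENKINS_PORT=--httpPort=8090    # change this to move Jenkins to another port
JENKINS_AJPPORT=--ajp13Port=-1  # -1 disables the AJP connector
JENKINS_COMMAND_LINE="$JAVA $JAVA_ARGS -jar $JENKINS_WAR $JENKINS_WEBROOT $JENKINS_PORT $JENKINS_AJPPORT"
echo "$JENKINS_COMMAND_LINE"
```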
<br />
=== Create an automated script to start Jenkins ===<br />
Create the script in a directory that is in your path (run {{ic|echo $PATH}} to see which directories qualify), for example:<br />
cd /usr/local/bin<br />
Open a new file:<br />
vim startjenkins<br />
Add the following to the file:<br />
#!/bin/bash<br />
echo<br />
echo "starting jenkins now"<br />
java -jar /usr/share/java/jenkins/jenkins.war<br />
Save and quit vim with {{ic|:wq}}, then set ownership and make the script executable:<br />
chown <yourusername>:users startjenkins<br />
chmod 755 startjenkins<br />
<br />
== Running == <br />
<br />
If you placed {{ic|startjenkins}} in a directory that is in your path, run it with:<br />
$ startjenkins<br />
Otherwise, run the war directly from its actual location:<br />
$ java -jar /usr/share/java/jenkins/jenkins.war<br />
or change to its directory first:<br />
$ cd /usr/share/java/jenkins<br />
$ java -jar jenkins.war<br />
<br />
== Accessing ==<br />
You can now log into Jenkins at:<br />
http://localhost:8080<br />
Note that 8080 is the default port when the war is run directly; the packaged configuration above sets {{ic|--httpPort=8090}} instead, so adjust the URL to match how you started Jenkins.<br />
--[[User:Wolfdogg|Wolfdogg]] ([[User talk:Wolfdogg|talk]]) 22:46, 15 April 2015 (UTC)</div>Wolfdogg