User talk:Wolfdogg


Samba from OpenLDAP on Arch

Samba-specific setup

Download smbldap-tools from the AUR (https://aur.archlinux.org/packages/sm/smbldap-tools/smbldap-tools.tar.gz), run makepkg, and pull in any dependencies you need to get the build completed.
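For reference, the manual AUR flow looks roughly like this (a sketch only; the tarball URL is the one above, and makepkg -s pulls in build dependencies from the repos):

$ curl -O https://aur.archlinux.org/packages/sm/smbldap-tools/smbldap-tools.tar.gz
$ tar xzf smbldap-tools.tar.gz
$ cd smbldap-tools
$ makepkg -si   # -s installs build dependencies, -i installs the built package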

Jenkins

The conf file is located at /etc/conf.d/jenkins; open this file and look it over.

 JAVA=/usr/bin/java
 JAVA_ARGS=-Xmx512m
 JAVA_OPTS=
 JENKINS_USER=jenkins
 JENKINS_HOME=/var/lib/jenkins
 JENKINS_WAR=/usr/share/java/jenkins/jenkins.war
 JENKINS_WEBROOT=--webroot=/var/cache/jenkins
 JENKINS_PORT=--httpPort=8090
 JENKINS_AJPPORT=--ajp13Port=-1
 JENKINS_OPTS=
 JENKINS_COMMAND_LINE="$JAVA $JAVA_ARGS $JAVA_OPTS -jar $JENKINS_WAR $JENKINS_WEBROOT $JENKINS_PORT $JENKINS_AJPPORT $JENKINS_OPTS"

Notice the location of the war file. cd to its directory, then run Jenkins from there.

   cd /usr/share/java/jenkins
   java -jar jenkins.war

You can now log into Jenkins at http://localhost:8080 (the default when launching the war directly; the packaged daemon uses the httpPort from the conf file, 8090 above)

create an automated script to start jenkins

   cd /usr/local/bin # or share, depending on what's already in your path; run $ echo $PATH to find this info

open a new file

   vim startjenkins

add the following to the file

   #!/bin/bash
   echo
   echo starting jenkins now
   java -jar /usr/share/java/jenkins/jenkins.war

save and close vim

   :wq

change ownership and permissions (chown takes user:group; substitute your own username)

   chown <yourusername>:users startjenkins
   chmod 755 startjenkins

run it this way now

   startjenkins

Automount Samba Shares on boot

Using systemd, make an entry in fstab for the network drive with the following options:

  • credentials
  • noauto
  • nofail
  • x-systemd.automount
  • a device/idle timeout

An example entry is sketched below.
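A sketch of such an entry, assuming a CIFS share //server/backup mounted at /mnt/backup with a credentials file at /etc/samba/credentials (all names here are placeholders):

//server/backup  /mnt/backup  cifs  credentials=/etc/samba/credentials,noauto,nofail,x-systemd.automount,x-systemd.device-timeout=10s,x-systemd.idle-timeout=1min  0 0

The credentials file holds username= and password= lines and should be chmod 600 so it is not world-readable.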

New ZFS Setup

[root@falcon wolfdogg]# zpool create -f -m /san san ata-ST2000DM001-9YN164_W1E07E0G ata-WDC_WD20EZRX-00DC0B0_WD-WMC1T3458346 ata-WDC_WD20EZRX-19D8PB0_WD-WCC4M0428332
[root@falcon wolfdogg]# zpool list
NAME   SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
san   5.44T   604K  5.44T     0%  1.00x  ONLINE  -
[root@falcon wolfdogg]# zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
san    544K  5.35T   136K  /san
[root@falcon wolfdogg]# zpool status
 pool: san
state: ONLINE
 scan: none requested
config:
       NAME                                        STATE     READ WRITE CKSUM
       san                                         ONLINE       0     0     0
         ata-ST2000DM001-9YN164_W1E07E0G           ONLINE       0     0     0
         ata-WDC_WD20EZRX-00DC0B0_WD-WMC1T3458346  ONLINE       0     0     0
         ata-WDC_WD20EZRX-19D8PB0_WD-WCC4M0428332  ONLINE       0     0     0
errors: No known data errors

ZFS add to linear span by id

$ cd /dev/disk/by-id/
[root@falcon by-id]$ ls -l
total 0
drwxr-xr-x 2 root root 560 Oct 26 02:45 .
drwxr-xr-x 8 root root 160 Oct 26 02:45 ..
lrwxrwxrwx 1 root root   9 Oct 26 02:45 ata-ST2000DM001-1E6164_Z1E6EQ15 -> ../../sde
lrwxrwxrwx 1 root root   9 Oct 26 02:45 ata-ST2000DM001-9YN164_W1E07E0G -> ../../sdc
lrwxrwxrwx 1 root root  10 Oct 26 02:45 ata-ST2000DM001-9YN164_W1E07E0G-part1 -> ../../sdc1
lrwxrwxrwx 1 root root  10 Oct 26 02:45 ata-ST2000DM001-9YN164_W1E07E0G-part9 -> ../../sdc9
lrwxrwxrwx 1 root root   9 Oct 26 02:45 ata-ST3250823AS_5ND0MS6K -> ../../sda
lrwxrwxrwx 1 root root  10 Oct 26 02:45 ata-ST3250823AS_5ND0MS6K-part1 -> ../../sda1
lrwxrwxrwx 1 root root  10 Oct 26 02:45 ata-ST3250823AS_5ND0MS6K-part2 -> ../../sda2
lrwxrwxrwx 1 root root  10 Oct 26 02:45 ata-ST3250823AS_5ND0MS6K-part3 -> ../../sda3
lrwxrwxrwx 1 root root  10 Oct 26 02:45 ata-ST3250823AS_5ND0MS6K-part4 -> ../../sda4
lrwxrwxrwx 1 root root  10 Oct 26 02:45 ata-ST3250823AS_5ND0MS6K-part5 -> ../../sda5
lrwxrwxrwx 1 root root   9 Oct 26 02:45 ata-WDC_WD20EZRX-00DC0B0_WD-WMC1T3458346 -> ../../sdb
lrwxrwxrwx 1 root root  10 Oct 26 02:45 ata-WDC_WD20EZRX-00DC0B0_WD-WMC1T3458346-part1 -> ../../sdb1
lrwxrwxrwx 1 root root  10 Oct 26 02:45 ata-WDC_WD20EZRX-00DC0B0_WD-WMC1T3458346-part9 -> ../../sdb9
lrwxrwxrwx 1 root root   9 Oct 26 02:45 ata-WDC_WD20EZRX-19D8PB0_WD-WCC4M0428332 -> ../../sdd
lrwxrwxrwx 1 root root  10 Oct 26 02:45 ata-WDC_WD20EZRX-19D8PB0_WD-WCC4M0428332-part1 -> ../../sdd1
lrwxrwxrwx 1 root root  10 Oct 26 02:45 ata-WDC_WD20EZRX-19D8PB0_WD-WCC4M0428332-part9 -> ../../sdd9
lrwxrwxrwx 1 root root   9 Oct 26 02:45 wwn-0x5000c50045406de0 -> ../../sdc
lrwxrwxrwx 1 root root  10 Oct 26 02:45 wwn-0x5000c50045406de0-part1 -> ../../sdc1
lrwxrwxrwx 1 root root  10 Oct 26 02:45 wwn-0x5000c50045406de0-part9 -> ../../sdc9
lrwxrwxrwx 1 root root   9 Oct 26 02:45 wwn-0x5000c500658cc2b2 -> ../../sde
lrwxrwxrwx 1 root root   9 Oct 26 02:45 wwn-0x50014ee25e8d29db -> ../../sdd
lrwxrwxrwx 1 root root  10 Oct 26 02:45 wwn-0x50014ee25e8d29db-part1 -> ../../sdd1
lrwxrwxrwx 1 root root  10 Oct 26 02:45 wwn-0x50014ee25e8d29db-part9 -> ../../sdd9
lrwxrwxrwx 1 root root   9 Oct 26 02:45 wwn-0x50014ee6034e422c -> ../../sdb
lrwxrwxrwx 1 root root  10 Oct 26 02:45 wwn-0x50014ee6034e422c-part1 -> ../../sdb1
lrwxrwxrwx 1 root root  10 Oct 26 02:45 wwn-0x50014ee6034e422c-part9 -> ../../sdb9
[root@falcon by-id]$ zpool status

 pool: san
state: ONLINE

status: Some supported features are not enabled on the pool. The pool can
       still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
       the pool may no longer be accessible by software that does not support
       the features. See zpool-features(5) for details.
 scan: scrub repaired 0 in 15h41m with 0 errors on Sat Oct 22 11:11:13 2016

config:

       NAME                                        STATE     READ WRITE CKSUM
       san                                         ONLINE       0     0     0
         ata-ST2000DM001-9YN164_W1E07E0G           ONLINE       0     0     0
         ata-WDC_WD20EZRX-00DC0B0_WD-WMC1T3458346  ONLINE       0     0     0
         ata-WDC_WD20EZRX-19D8PB0_WD-WCC4M0428332  ONLINE       0     0     0

errors: No known data errors

[root@falcon by-id]$ zpool add san wwn-0x5000c500658cc2b2 -Pn
would update 'san' to the following configuration:

       san
         /dev/disk/by-id/ata-ST2000DM001-9YN164_W1E07E0G-part1
         /dev/disk/by-id/ata-WDC_WD20EZRX-00DC0B0_WD-WMC1T3458346-part1
         /dev/disk/by-id/ata-WDC_WD20EZRX-19D8PB0_WD-WCC4M0428332-part1
         /dev/disk/by-id/wwn-0x5000c500658cc2b2

[root@falcon by-id]$ zpool add san wwn-0x5000c500658cc2b2 -P
[root@falcon by-id]$ zpool status

 pool: san
state: ONLINE

status: Some supported features are not enabled on the pool. The pool can
       still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
       the pool may no longer be accessible by software that does not support
       the features. See zpool-features(5) for details.
 scan: scrub repaired 0 in 15h41m with 0 errors on Sat Oct 22 11:11:13 2016

config:

       NAME                                        STATE     READ WRITE CKSUM
       san                                         ONLINE       0     0     0
         ata-ST2000DM001-9YN164_W1E07E0G           ONLINE       0     0     0
         ata-WDC_WD20EZRX-00DC0B0_WD-WMC1T3458346  ONLINE       0     0     0
         ata-WDC_WD20EZRX-19D8PB0_WD-WCC4M0428332  ONLINE       0     0     0
         wwn-0x5000c500658cc2b2                    ONLINE       0     0     0

errors: No known data errors


Git three stage web deployment

I'm now using Git to handle my web repos.

Currently my server environment is not hosted live. The intention is to have a secure development environment containing the sandbox and stage environments. Once I fully explore this setup I will add a third step to push stage to a live shared host using the same methods described here.


This setup uses /srv/http/sandbox for the development directory. If you insist on developing out of /home/<username>/public_html, you can either substitute all mentions of sandbox in this wiki with it, or turn your public_html folder into a symlink to /srv/http/sandbox after completing these exercises.


The following features detail the environment we are setting up using this wiki:

  • Central location for git repositories (--separate-git-dir) /srv/http/repos/git
  • Development sandbox /srv/http/sandbox
  • Staging ground for final pre-live q/a testing /srv/http/stage


Workflow:

  • Clone your repo into your sandbox area and work on your files.
  • Commit changes from the sandbox when each implementation or task is complete, then push those changes to the central sandbox repo (/srv/http/repos/git/website.com), where you will be able to view them on dev (http://dev.website.com).
  • At some point, when ready to stage, a push from the central sandbox repo to a separate central stage repo (/srv/http/repos/git/website.com.stage) will be made.
  • Hooks located in the central stage repo will auto-deploy to stage (/srv/http/stage/website) when the push is run. This deploys only the web files, NOT the hidden .git repo folder that would otherwise make your site vulnerable; that is the reason for this wiki. Otherwise it would be as easy as going to stage and cloning, for example.

Apache adjustments

Set up a new vhost scheme for the dev and stage environments.

Edit vhost

Add or edit /etc/httpd/conf/extra/httpd-vhosts.conf to something similar:

# default route
<VirtualHost *:80>
       ServerName <hostname>
       ServerAlias <hostname>
       VirtualDocumentRoot "/srv/http"
</VirtualHost>
#sandbox (dev) root route
<VirtualHost *:80>
       ServerName dev
       DocumentRoot "/srv/http/sandbox"
</VirtualHost>
#sandbox (dev) subdomains route
# http://httpd.apache.org/docs/2.0/mod/mod_vhost_alias.html#interpol
<VirtualHost *:80>
       ServerName dev.sub.com
       ServerAlias dev.*
       VirtualDocumentRoot "/srv/http/sandbox/%2+"
</VirtualHost>
#stage root route
<VirtualHost *:80>
       ServerName stage
       DocumentRoot "/srv/http/stage"
</VirtualHost>
#stage subdomains route
# http://httpd.apache.org/docs/2.0/mod/mod_vhost_alias.html#interpol
<VirtualHost *:80>
       ServerName stage.sub.com
       ServerAlias stage.*
       VirtualDocumentRoot "/srv/http/stage/%2+"
</VirtualHost>

restart apache

systemctl restart httpd
#or if you're still using initscripts
rc.d restart httpd

Create dirs

touch /srv/http/index.php
mkdir /srv/http/sandbox
touch /srv/http/sandbox/index.php
mkdir /srv/http/stage
touch /srv/http/stage/index.php

Note: for this to work you need to add your websites with the same dir name they will have when deployed, i.e. use dots where dots exist. For example, if the website is google.com the dirname must be google.com, so your dev site ends up at /srv/http/sandbox/google.com and the vhost can map to it via http://dev.google.com.

Edit hosts files

Assuming your development machine is not the same as the server, you need to add an entry for each site to your hosts file when you create the site.

I will use server IP 192.168.1.99 and hostname 'myserver' as an example, and google.com as the example website URL; use your own hostnames and website URLs in place of them.

  • linux

add the following lines to /etc/hosts

192.168.1.99 myserver
192.168.1.99 dev                 myserver
192.168.1.99 stage               myserver 
192.168.1.99 dev.google.com      myserver
192.168.1.99 stage.google.com    myserver
  • windows

navigate to the following file logged in as admin: %WINDIR%/system32/drivers/etc/hosts

or run cmd as administrator and paste the following line there

notepad %WINDIR%/system32/drivers/etc/hosts 

add your sites to the hosts file

192.168.1.99 dev
192.168.1.99 stage
192.168.1.99 dev.google.com
192.168.1.99 stage.google.com
#dev.anothersite.com
#stage.anothersite.com
#etc...
  • mac — you get the idea; edit /etc/hosts as on linux

Git usage

Install git, and make the following changes

Set up Git

  • Create a central location for the git repos. (Note: I'll use a group called 'webteam' in the examples. This group separates what apache needs permission to from what a web developer needs permission to; each web developer is a member of this group.)
$ su
$ mkdir -p /srv/http/repos/git
$ chown http:webteam /srv/http/* -R
$ chmod 775 /srv/http/* -R
  • sandbox push global configs

You will probably get an error stating that you can't push to the branch that you cloned out of; more research needs to be done on this, but it seems pretty straightforward. Set the following config variable to squelch this.

$ git config --global receive.denyCurrentBranch warn # you can set it to false instead of warn once you're sure it doesn't cause any problems
  • stage push global configs

Git version 2.0+ will start using a new push.default setting called "simple". "simple" means the push is refused if the upstream branch's name differs from the local one, which in this case it does (website.com vs. stage.website.com). If your version is less than 2.0, which to date isn't out yet, we can make it forward compatible. Set the following config variable to squelch this.

$ git config --global push.default matching

Add or enroll a new site

Once your Git environment is set up, all you have to do is start from this point each time you want to create a new website.

  • add a website (or see below if you are enrolling an existing site)
$ mkdir -p /srv/http/sandbox/<website.com> # -p in case you don't have a sandbox dir yet
  • alternatively enroll an existing website

To enroll an existing website, move it to the /srv/http/sandbox directory, being sure to rename it to conform to the actual address it will be accessible via live. (Naming is important here: e.g. if your folder is named website-com, mv it to website.com. The .com portion is not mandatory if there is no domain name intended for it; just know that you will access it in your browser exactly the way you name it, unless you muck around with the vhosts file to customize your own mapping strategy.)

$ mv /home/<username>/public_html/website-com /srv/http/sandbox/website.com
  • continue here once you have relocated your website or added a new website to the sandbox
$ cd /srv/http/sandbox/website.com
$ git init --separate-git-dir=/srv/http/repos/git/<website>
$ git add -A
$ git commit -m 'comment here'

If you're going to actively develop this site, continue below to stageify it. If you're just archiving an old site for later development, you can stop here, delete the site folder (e.g. /srv/http/sandbox/website.com), then "recreate the worktree" from the repo, as detailed below, and stageify at a later time.
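Because the repo was initialized with --separate-git-dir, the worktree's .git is a plain file pointing at the central repo; you can verify the link like this:

$ cat /srv/http/sandbox/website.com/.git
gitdir: /srv/http/repos/git/website.com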

Stageify

  • create an empty stage repo for this site (start here if you already have an existing site and repo; substitute sandbox with your site location)
$ mkdir -p /srv/http/repos/git/website.com.stage
$ cd /srv/http/repos/git/website.com.stage
$ git init --bare
  • make the site's stage dir and create the hook (while still in the stage repo)
$ mkdir /srv/http/stage/website.com
$ cat > hooks/post-receive
#!/bin/sh
GIT_WORK_TREE=/srv/http/stage/website.com git checkout -f

#press Ctrl+D to save and exit cat

$ chmod +x hooks/post-receive
  • Define the stage mirror and create a master branch tree (you MUST run both of the following commands from the sandbox repo; neither the sandbox checkout nor the stage repo will suffice)
$ cd /srv/http/repos/git/website.com # note, it is important that you're in this exact path, as opposed to website.com.stage (if your sandbox .git dir is somewhere else, cd INTO it)
$ git remote add stage.website.com /srv/http/repos/git/website.com.stage # (note, stage.website.com is just a name; it's what we will use as the remote name that we call when we stage)
$ git push stage.website.com +master:refs/heads/master
  • from here forward, the push to stage is run simply like this, from within your root code base (i.e. you don't need to be in the .git dir anymore)
$ git push stage.website.com

See workflow below for more examples on this.

  • In case of an upstream branch error when you run the push, you can do the following.

Since we have globally configured push.default to matching, an upstream branch error may occur. To squelch this error we need to indicate our upstream branch. Since we named our remote "stage.website.com", we issue the following command from our sandbox repo (/srv/http/repos/git/website.com):

$ cd /srv/http/repos/git/website.com
$ git push --set-upstream stage.website.com master

General work flow

Once you have your environment all setup, you can follow a general work flow pattern in order to take advantage of the full functionality.

New development process

  • If you plan to just archive your site, you may want to delete it from the sandbox once it's archived to the git repository using the steps above, since it's nicely compressed and packed away in your repo dir. If this is the case, go ahead and delete your sandbox web folder at this point so we can begin the flow from the furthest possible point. (Note: only do this step if you have your website committed to the git repo as outlined above, i.e. your .git folder is NOT located inside your code base dir.) Otherwise continue with "Work on files" below.
$ cd /srv/http/sandbox
$ rm website.com -rf

You might want to delete your worktree after putting a project on the back-burner, when you are not planning to work on it for long periods of time. Or you may have a crowded project directory and want your sanity back. If you deleted your worktree previously, or are restoring an old site from a repo for whatever reason, and your project is stored only inside the compressed repo, you will need to re-create the worktree from the repo before working on your files. The following methods walk you through this.

recreate worktree

Note, you won't need these steps if you still have your worktree in place and never deleted it.

method 1

(preferred method)

One way to get the worktree back is to extract the worktree out of the repo, then attach it to the repo's branch.

  • make the new site folder inside your sandbox
$ mkdir /srv/http/sandbox/website.com
  • switch to the new worktree directory and init for the first time
$ cd /srv/http/sandbox/website.com
$ git --git-dir=/srv/http/repos/git/website.com --work-tree=. init 
$ echo "gitdir: /srv/http/repos/git/website.com" > .git

alternatively you can run those last two commands on one line if it's easier for you

$ git --git-dir=/srv/http/repos/git/website.com --work-tree=. init && echo "gitdir: /srv/http/repos/git/website.com" > .git
  • check the branch you're on before pulling down the files
$ git branch
  • switch to the repo and extract the files to the new site directory you just created. Make sure you're checking out the branch you intend; normally that will be "master" unless you have branched the repo. Replace the word "master" in the following command if you are working on a different branch.
$ cd /srv/http/repos/git/website.com
$ git archive master | tar -x -C /srv/http/sandbox/website.com

note, if you get an error in the above step like

could not switch to '/some/dir': No such file or directory

then you may be trying to get the files back into a directory other than the one used when the repo was made; otherwise continue below.

If you did receive this error, you need to edit the config file in the repo to point to the new directory. To do this, open the repo's config file in your favorite editor and edit the worktree path to point to the new site directory that you created.
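For reference, a sketch of the relevant section of the repo's config file (/srv/http/repos/git/website.com/config); core.worktree is the value to change:

[core]
        repositoryformatversion = 0
        filemode = true
        bare = false
        worktree = /srv/http/sandbox/website.com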

method 2 (alt method)

Another way to get the worktree back is to clone it out from the main repo, then delete the .git dir, then create the .git pointer file. This way seems a bit messy because one needs to delete the .git dir after unnecessarily creating it, but as long as you replace it with the proper pointer file it works fine.

  • clone the site to sandbox
$ cd /srv/http/sandbox
$ git clone /srv/http/repos/git/website.com
  • switch into the clone and delete the hidden .git folder that was created inside this worktree
$ cd website.com
$ rm -rf .git
  • recreate the .git pointer file
$ echo "gitdir: /srv/http/repos/git/website.com" > .git
  • checkout master to get things back on track (is this step even necessary?)
$ git checkout master

Work on files

  • Work on files
$ cd website.com 
$ git status

edit your files.....

commit the changes

$ git status #notice files are not staged for commit
$ git add -A # or just individual files (e.g. git add file1.php file2.php) etc..
$ git status # make sure the files you want to commit are listed to be committed
$ git commit -m 'this is what i did to these files'
$ git status # should come back clean
$ git log # you can see your commit comments here

visit http://dev.website.com to test the new functionality that was just committed

If all looks well, you need to push your changes to the sandbox master branch (repo) that you cloned from. This accomplishes two things:

  • liberates your sandbox clone to be deleted at will
  • updates branch master so the files are waiting to be pushed(deployed) to stage
$ git push

Deploy to stage

Once your separate tasks have been individually committed, and you have thoroughly tested them in the sandbox environment, you can deploy to the stage environment for final q/a before going live.

  • deploy to stage
$ cd /srv/http/repos/git/website.com
$ git push stage.website.com

now visit http://stage.website.com to do your final q/a

If any mistakes are spotted on stage, i.e. everything is not working as expected, then your stage environment just paid for itself. Follow the steps below for damage control to revert the changes on stage.

Damage control

If mistakes are spotted on stage, we need to revert our changes

  • @todo: commands for reverting
  • @todo: workflow


Now that stage has been reset to the last known good state, go back to the sandbox and edit.

Why separate repository from web folder?

For me, it just makes sense. The beauty of it is that you can bulk delete all or any of your web files in the sandbox area when they start to get in the way. When you're ready to work on that site again, you can just "recreate the worktree".

It may also help to have the repos outside of the web folders if you run any type of incremental backups, e.g. rsnapshot or rsync: you can choose to back up only the repos, which are compressed, or to back up everything except the sandbox, for example. But mainly, the fact that you can delete your sandbox websites without losing the repo is the main reason.
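For instance, a minimal rsync sketch of the "everything except sandbox" idea (the destination path is a placeholder):

$ rsync -a --exclude 'sandbox/' /srv/http/ /mnt/backup/http/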

Credits

Thanks to Charles Bailey (http://stackoverflow.com/questions/505467/can-i-store-the-git-folder-outside-the-files-i-want-tracked) for getting me over the hump of getting the worktree properly out of a repo archive.

ZFS-FUSE Implementation

I'm trying to utilize the ZFS filesystem to create a flexible array of different-sized drives that will serve as a data-backup volume shared across a samba network. ZFS can be taken advantage of here because of its flexibility using storage pools. ZFS also has some of the best features catering to data integrity. One drawback is that some of this comes at the expense of file transfer rates. There are some things you can do to offset this, however, such as having small backup drives, even a USB stick or just about any other type of media, serving as a cache drive. Speed is not a consideration on this array since its main role is data integrity and safety.

The test array initially used here was an array of 3 drives: one 2 TB and two 500 GB. LVM was also explored to find ways to span drives into one large volume.

Installation

1) To get to step one on the ZFS-FUSE page https://wiki.archlinux.org/index.php/ZFS_on_FUSE I had to do a few things, namely install yaourt, which was not necessarily straightforward. I will be vague in these instructions since I have already completed these steps and they are coming from memory.

I went to the AUR, copied and pasted the PKGBUILD contents into a new PKGBUILD file in my packages directory /home/wolfdogg/packages/packages/yaourt/PKGBUILD, and ran makepkg -s from that folder. After battling a succession of errors (invalid signatures in pacman, needed to re-init pacman-key, then had to create a new group called sudo, add my user to it, and edit the /etc/sudoers file accordingly, which seemed like the best way to get my user into sudoers and eliminate the risk of accidentally doing something as root during the makepkg process), I finally ran pacman -U yaourt.

2) Then I followed the steps on the wiki at https://wiki.archlinux.org/index.php/ZFS_on_FUSE . For myself, the NFS portion got a bit confusing at https://wiki.archlinux.org/index.php/ZFS_on_FUSE#NFS_shares . In that section of the wiki (zfs set sharenfs=) I got a bit side-tracked and will come back to it at a later time.

I found the manual for Solaris ZFS here [1], and it looks like it will provide all the info needed, especially this section of it [2].

Now would be a good time to read the manuals

#man zpool
#man zfs

Inventory drives

  • Hook up any drives that you want to add to your new filesystem.
  • Take an inventory of your current drives and partitions. Note any existing arrays and partitions. See md0 (mdadm array), zfs-kstat/pool (a zfs pool), and /pool/backup (a dataset) in the example below.
  • View the drive information
# blkid -o list -c /dev/null
  • List the partitions, and inspect your mounts
# lsblk -f
sdb      8:16   0   1.8T  0 disk
└─md0    9:0    0   2.7T  0 linear
sdc      8:32   0 465.8G  0 disk
└─md0    9:0    0   2.7T  0 linear
  • Use one of the following for each disk if you want to view partition tables and/or reformat your drives:
  • If you have a MBR partition table
#fdisk -l
  • If you have GPT partition table
#gdisk -l /dev/sdb
  • View the mounts
# findmnt
TARGET                       SOURCE      FSTYPE      OPTIONS
/zfs-kstat                 kstat       fuse  rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other
└─/pool                     pool        fuse.zfs    rw,relatime,user_id=0,group_id=0,default_permissions,allow_other
  └─/pool/backup            pool/backup fuse.zfs    rw,relatime,user_id=0,group_id=0,default_permissions,allow_other

Prepare Drives

If you already know what you're doing, you can use cgdisk; here is a nice link for cgdisk reference [3]. Otherwise, follow the gdisk example walkthrough below.

# gdisk
Type device or filename

We will use /dev/sdb for this example

# /dev/sdb
Command:

o will create a new partition table

# o
Proceed? 
This option will delete all partitions and create a new protective MBR
# y
Command:

n will create a new partition

# n
Partition number

We should use 1 if it's the first

# 1

Press enter for the default first sector.
Press enter for the default last sector.

Hexcode: 
# L

L shows all codes; choose a filesystem hex code and type it in. We will go with bf00 (Solaris root) for this example

# bf00

Now let's write the partition table to disk

Command:
# w
Final checks complete
Do you want to proceed
# y

Now your disk has a new, clean partition table

Here are some steps to follow; so far it's all I have.

Create the Zpool

Create the pool (notice I'm not using raidz here; you may want to if your drives are all the same size)

# zpool create <pool_name> /dev/sdb /dev/sdc /dev/sdd /dev/sde
  • Also note, you should create two pools if you have 4 or more drives so that the redundancy ZFS is so well known for will work properly.

"pool" is the name of the current test pool, and the mount point.

RAIDZ1, RAIDZ2

  • To create a RAIDZ, you need at least 3 drives of the same size.
  • 'raidz' is an alias for 'raidz1'; use 'raidz2' if you have an extra drive for an extra parity stripe for extra redundancy.
# zpool create pool raidz /dev/sdb /dev/sdc /dev/sdd

Alternatively, create a raid span. Note, no raid type has been specified (disk, file, raidz, etc...)

# zpool create pool /dev/sdb /dev/sdc /dev/sdd

(see below to create an mdadm linear span instead (jbod))

If you do some testing and get stuck with a drive reporting unavailable after you hook it back up, sometimes I have to reboot (which I probably just don't know enough about yet). Some commands that have been helpful:

Get the list and size of the pool

# zpool list

To get the status

#zpool status

Create the ZFS Filesystem Hierarchy (dataset)

Lets create a dataset

# zfs create pool/backup
# zfs set mountpoint=/backup pool/backup

you can create a child file system inside the parent; the child file system automatically inherits the parent's properties

# zfs create pool/backup/<computer_name>
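For example, you can watch a property propagate with zfs get -r (compression is just an illustrative property to set; any inheritable property behaves the same way):

# zfs set compression=on pool/backup
# zfs get -r compression pool/backup

The -r listing shows the child datasets with SOURCE "inherited from pool/backup".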

Per the manual, ZFS automatically mounts the file system when the zfs mount -a command is invoked (without editing /etc/vfstab)

List and inspect

List and inspect your new zfs file system

# zfs list
NAME          USED  AVAIL  REFER  MOUNTPOINT
pool          106K  2.68T    21K  /pool
pool/backup    21K  2.68T    21K  /pool/backup

Now lets look at all drives to see what this looks like. View mount status and disk size

# mount -l
# df
Filesystem      1K-blocks     Used  Available Use% Mounted on
rootfs           15087420  2356784   11962328  17% /
dev               1991168        0    1991168   0% /dev
run               2027072      316    2026756   1% /run
/dev/sda3        15087420  2356784   11962328  17% /
shm               2027072        0    2027072   0% /dev/shm
tmpfs             2027072       28    2027044   1% /tmp
/dev/sda4        95953460 84172112    6904016  93% /home
/dev/sda1           99550    19445      74886  21% /boot
pool           2873622443       23 2873622420   1% /pool
pool/backup    2873622441       21 2873622420   1% /pool/backup

To learn more about your configuration options, run the following command

# zfs get all | less

Destroy Array and Pools

WARNING - Backup all your data before beginning

If you need to break down your zpool or zfs datasets follow below.

Deactivate the array using mdadm RAID manager (unmount)

# mdadm -S /dev/md0

I chose to delete the line in /etc/mdadm.conf as well. Not sure if it was needed, but it seemed that there were still bits and pieces lying around that needed removing.

# vim /etc/mdadm.conf

To destroy a dataset in the pool

#zfs destroy <filesystemvolume>

Or you can destroy the entire pool

#zpool destroy <pool>

If you can't totally destroy the pool, or are trying to create a new pool with the same name, it's possible to trace clues about what process is using it using

#fuser /pool -a 

Then run top to find that process PID

#top

Or run lsof

# lsof | grep zfs-fuse | less

Go from there....

Now move on to #Linear_RAID_.28jbod.29_filesystem_using_mdadm_.2F_lvm or #RAIDZ_Filesystem_Configuration

Linear RAID (jbod) filesystem using mdadm / lvm

I decided to explore other options until I can figure out whether the ZFS span is set up properly.

Create the array

Destroy the existing zfs pool and break down the array where applicable; use #Destroy_Array_and_Pools as a guide.

Use mdadm to create the span

#  mdadm --create /dev/md0 --level=linear --raid-devices=3 /dev/sd[bcd]

Get the status detail using mdadm

# mdadm --misc --detail /dev/md0
 /dev/md0:
 Version : 1.2
 Creation Time : Mon Jun 25 13:52:31 2012
 Raid Level : linear
 Array Size : 2930286671 (2794.54 GiB 3000.61 GB)
 Raid Devices : 3
 Total Devices : 3
 Persistence : Superblock is persistent
 Update Time : Mon Jun 25 13:52:31 2012
 State : clean
 Active Devices : 3
 Working Devices : 3
 Failed Devices : 0
 Spare Devices : 0
 Rounding : 0K
 Name : falcon:0  (local to host falcon)
 UUID : 141f34d0:2b2c0973:4a6f070b:17b772ec
 Events : 0
 Number   Major   Minor   RaidDevice State
    0       8       16        0      active sync   /dev/sdb
    1       8       32        1      active sync   /dev/sdc
    2       8       48        2      active sync   /dev/sdd
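To have the array assembled automatically on boot (a standard mdadm step, not part of the original walkthrough), record it in /etc/mdadm.conf:

# mdadm --detail --scan >> /etc/mdadm.conf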

Create volume and groups

Create the physical volume

# pvcreate /dev/md0

Display the volume

# pvdisplay
 "/dev/md0" is a new physical volume of "2.73 TiB"
 --- NEW Physical volume ---
 PV Name               /dev/md0
 VG Name
 PV Size               2.73 TiB
 Allocatable           NO
 PE Size               0
 Total PE              0
 Free PE               0
 Allocated PE          0
 PV UUID               1Hr3ay-L0mZ-33GD-4ZeM-EOtW-lkAz-M4REMJ

Create the volume group

# vgcreate VolGroupArray /dev/md0
 Volume group "VolGroupArray" successfully created
Display the volume groups
# vgdisplay
 --- Volume group ---
 VG Name               VolGroupArray
 System ID
 Format                lvm2
 Metadata Areas        1
 Metadata Sequence No  1
 VG Access             read/write
 VG Status             resizable
 MAX LV                0
 Cur LV                0
 Open LV               0
 Max PV                0
 Cur PV                1
 Act PV                1
 VG Size               2.73 TiB
 PE Size               4.00 MiB
 Total PE              715401
 Alloc PE / Size       0 / 0
 Free  PE / Size       715401 / 2.73 TiB
 VG UUID               2OAWpT-fO50-A7cW-jUjd-meQh-sWQa-7b55ZO

Create the logical volume; I used 2.725T because 2.73T failed (slightly more than the volume group's free extents)

# lvcreate  VolGroupArray -L 2.725T -n backup
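Alternatively, allocating by extents avoids guessing at the size (a suggestion on my part; -l 100%FREE consumes all remaining free extents in the volume group):

# lvcreate -l 100%FREE -n backup VolGroupArray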

Display volume

# lvdisplay
 --- Logical volume ---
 LV Path                /dev/VolGroupArray/backup
 LV Name                backup
 VG Name                VolGroupArray
 LV UUID                Evy09z-i3TS-PgaE-SrpD-C2Lq-snb2-XhvaVW
 LV Write Access        read/write
 LV Creation host, time falcon, 2012-06-25 14:33:00 -0700
 LV Status              available
 # open                 0
 LV Size                2.73 TiB
 Current LE             714343
 Segments               1
 Allocation             inherit
 Read ahead sectors     auto
 - currently set to     256
 Block device           253:0

Check the status

# cat /proc/mdstat
Personalities : [linear]
md0 : active linear sdd[2] sdc[1] sdb[0]
     2930286671 blocks super 1.2 0k rounding
unused devices: <none>

@TODO Need advice here - this portion needs testing; I got stuck here the last time I tried this.
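For completeness, the usual next step (a sketch only; untested here, hence the TODO above) would be to put a filesystem on the new logical volume and mount it:

# mkfs.ext4 /dev/VolGroupArray/backup
# mkdir -p /backup
# mount /dev/VolGroupArray/backup /backup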

Recreating a ZFS storage pool

From my understanding, one can add a drive to a vdev (virtual device, or array set) without losing data if it's a linear span, but you can't remove a drive from the vdev without first copying the files off, deleting the pool, and then recreating it. This portion will walk you through completely tearing down the array, freeing up the drives, and recreating either an mdadm jbod linear span or a RAIDZ.

WARNING - Backup all your data before beginning

List the pools

get a list of the current pools

#zpool list

check status

# zpool status

list datasets

#zfs list

Share mount to network using samba

Configure samba to give users access to the share (good for network backups from any operating system, including windows backup)

  • add the following entry to /etc/samba/smb.conf and restart the samba daemon
[backup]
       comment = backup drive
       path = /pool/backup
       valid users = user1,user2
       read only = No
       create mask = 0765
       wide links = Yes
#rc.d restart samba

Modify the permissions on /pool/backup so it is accessible over the network; I decided to add 'users' group accessibility

# chown root:users backup
# chmod g+w backup

Alternatively

# chown root:root backup/
# chmod 755 backup/
# chown root:users backup/<dataset1>
# chmod 775 backup/<dataset1>
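Note that the accounts listed in valid users also need Samba passwords before they can connect (standard Samba setup, not specific to these notes):

# smbpasswd -a user1
# smbpasswd -a user2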

ZFS RAID Maintenance

@TODO - this section is not finished

Maintenance

  • To place a disk back online (see manual for this)
#zpool online
  • To replace a disk (see manual for this)
#zpool replace
  • If your pool goes down, i.e. one of your drives goes offline and there are not enough drives to complete replication, you can try the following

- Check if the drive is initiated; you should see it in the list that's returned from running the following command

# blkid

If it's not in the list:
- Check all cable connections; the drive may not be mounted. A reboot may be needed.
- Once you get the drive to appear, run the following commands.

# zpool export <pool>
# zpool import <pool>
# zpool list

If you get an error message

"cannot import 'pool': one of more devices is currently unavailable. Destroy and re-create the pool from a backup source" try to export then import again.

@todo more information is needed before suggestions are made at this point.

Help Needed

If anybody has any suggestions, please chime in.

I haven't figured out how to address the size of the array yet. For example, when I hook up only the 2TB with the 500GB as a raidz, it lets me, and the size reports 1.36TB. When I destroyed the pool and rebuilt using all 3 drives (2TB, 500GB, 500GB), the size still reports 1.36TB.

--Wolfdogg (talk) 00:18, 25 June 2012 (UTC)

volnoti on KDE using alsamixer

The only program that worked on an HP laptop to control the volume correctly on KDE.

@todo this script is for alsamixer, and kde, more needs to be added to support other environments


Create a script /usr/local/bin/sound.sh. This script will be called by another script placed in Autostart.

insert the following contents into this script

#!/bin/bash

#this script is made for volnoti

# Configuration
STEP="2"    # Anything you like.
UNIT="dB"   # dB, %, etc.

# Set volume
SETVOL="/usr/bin/amixer -qc 0 set Master"
SETHEADPHONE="/usr/bin/amixer -qc 0 set Headphone"

case "$1" in
    "up")
          $SETVOL $STEP$UNIT+
          ;;
  "down")
          $SETVOL $STEP$UNIT-
          ;;
  "mute")
          $SETVOL toggle
          ;;
esac

# Get current volume and state
VOLUME=$(amixer get Master | grep 'Mono:' | cut -d ' ' -f 6 | sed -e 's/[^0-9]//g')
STATE=$(amixer get Master | grep 'Mono:' | grep -o "\[off\]")

# Show volume with volnoti
if [[ -n $STATE ]]; then
  volnoti-show -m
else
  volnoti-show $VOLUME
  # If headphone is being used, mute is treated a bit differently when muted.  Make sure headphones follows master mute.
  amixer -c 0 set Headphone unmute
  amixer -c 0 set Speaker unmute
  amixer -qc 0 set Speaker 100%
fi

exit 0



save this script, then set permissions

#chown root:users /usr/local/bin/sound.sh
#chmod 755 /usr/local/bin/sound.sh

xbindkeys

install xbindkeys so that you can control volume with your keyboard

pacman -S xbindkeys

Logged in as user, create an xbindkeys config file ~/.xbindkeysrc with the following information. This example binds the volume controls to the F7 (mute), F8 (vol down), and F9 (vol up) keys.

# increase volume
"sh /usr/local/bin/sound.sh up"
   m:0x0 + c:75
   F9

# Decrease volume
"sh /usr/local/bin/sound.sh down"
   m:0x0 + c:74
   F8

# Toggle mute
"sh /usr/local/bin/sound.sh mute"
   m:0x0 + c:73
   F7

#"amixer set Master playback 1+"

autostart

Logged in as user, create the KDE autostart script in ~/.kde4/Autostart. Name it anything, e.g. start-volnoti.sh

#!/bin/bash
xbindkeys
volnoti

Save this script. Next time KDE starts, it will run this script since it's located in the Autostart folder. The script calls xbindkeys and volnoti, which will wait for keypresses to control alsamixer.

Enjoy.

smbclient media stream issues using dolphin when accessing windows shares

description

I was having access problems when trying to access files via smbclient (samba as a client) on shares hosted by a win7 file server, but only when the user also exists on the windows machine. It turns out the problem is possibly either a complicated mounting issue or a deep bug in dolphin.

Before I delve into this, I also wanted to mention that the windows machine was set to not share files as "password protected sharing" http://www.sevenforums.com/tutorials/185429-password-protected-sharing-turn-off-windows-7-a.html. If you DO share with windows password protected sharing, then when you access the samba shares through dolphin it will issue a popup asking you for a password; even if you put in valid credentials you will still experience the issue, so don't spend too much time debugging by turning password protected sharing on and off, as it didn't seem to help either way.

the bug reproduced

The way I was accessing the shares was through dolphin, by clicking on the "Network" place, then clicking into the "Samba" link. There I would see a list of workgroups, click into those to access my files, and so forth. When I found a directory I wanted in my "Places", I would right-click on it and add it to my places.

When I accessed these media files through either the link I created in my Places or the existing Network link, once I navigated to a video file (.avi in this case), no matter whether I chose VLC or mplayer, the system would need to cache the file in full before it would play. This meant that instead of the video starting right away, I would sometimes have to wait 10-15 minutes before the video would start, or I would get an error (VLC is unable to open the MRL 'smb://<server>/UserFiles/PublicArchive/movie.avi'), depending on how the windows password protection was set, whether I was using vlc vs. mplayer, etc. Obviously something was wrong. I'm sure these symptoms ran much deeper than just video files; for example, I remember it happening to audio files, and I think it would probably happen to even text files or any file notably large enough to take more than 3 seconds to access across a network.

Important note: this only happens when I'm logged into KDE as a user that already exists on the windows system whose publicly shared files I'm trying to access. If I log into KDE as a user that doesn't exist there, the files do not have to cache, but instead immediately start to stream as expected.

Something really buggy is going on at this point. So I thought maybe I would go into the KDE system settings > sharing and set the default username and password, but this didn't help. I tried several things, including re-installing the network driver, re-installing kde over the top of itself, userdel on the user, and cleaning out the home directory; nothing worked.

So once again, as I have done in the past to try to solve this issue, I decided I would try to manually mount the thing and gain access differently than using the network icon in dolphin. This time I was following the wiki as usual, and I got to the part about manually mounting shares, where I stumbled on one line that mentioned the /mnt directory that I have seen so many times before. This time the cards lined up right, I guess, because I decided to click through "Root" (in places) using dolphin, then navigated my way through /mnt/smbnet and on to my files this way, where I discovered they play no problem (no caching needed; they start streaming immediately).

the fix

It appears that the "Network" link in Dolphin's 'Places' bar, at least the way it's currently set up in my version of kde4, is there to make life miserable. Don't use it if you don't want to have to fully download a file before your system has access to it. Don't access your network shares this way if they are on a samba share. Instead, navigate through Dolphin's "Root" to /mnt/smbnet/<your-workgroup>/<your-server>/<your-fileshares>, right-click on one of those folders and "Add To Places"; then you will have a proper link in your "Places" to access through /mnt.

steps taken to repair Intel IbexPeak HDMI / IDT 92HD81B1X5 internal mic

[root@osprey wolfdogg]# rmmod snd-hda-intel -f && modprobe snd-hda-intel model=F1734
[root@osprey wolfdogg]# rmmod snd-hda-intel -f && modprobe snd-hda-intel
[root@osprey wolfdogg]# cat /proc/asound/card0/codec#2 | grep Codec
cat: /proc/asound/card0/codec#2: No such file or directory
[root@osprey wolfdogg]# cat /proc/asound/card0/codec#1 | grep Codec
cat: /proc/asound/card0/codec#1: No such file or directory
[root@osprey wolfdogg]# cat /proc/asound/card0/codec
cat: /proc/asound/card0/codec: No such file or directory
[root@osprey wolfdogg]# cd /proc/asound/card
card0/ cards  
[root@osprey wolfdogg]# cd /proc/asound/card
card0/ cards  
[root@osprey wolfdogg]# cd /proc/asound/card0/
[root@osprey card0]# ll
total 0
-r--r--r-- 1 root root 0 Apr  6 00:49 codec#0
-r--r--r-- 1 root root 0 Apr  6 00:49 codec#3
-rw-r--r-- 1 root root 0 Apr  6 00:49 eld#3.0
-r--r--r-- 1 root root 0 Apr  6 00:49 id
dr-xr-xr-x 3 root root 0 Apr  6 00:49 pcm0c
dr-xr-xr-x 3 root root 0 Apr  6 00:49 pcm0p
dr-xr-xr-x 3 root root 0 Apr  6 00:49 pcm3p
[root@osprey card0]# cat /proc/asound/card0/codec#0 | grep Codec
Codec: IDT 92HD81B1X5
[root@osprey card0]# cat /proc/asound/card0/codec#3 | grep Codec
Codec: Intel IbexPeak HDMI
[root@osprey card0]# cat /proc/asound/card0/eld#3.0 | grep Codec
[root@osprey card0]# cat /proc/asound/card0/id | grep Codec
[root@osprey card0]# cat /proc/asound/card0/pcm0c | grep Codec
cat: /proc/asound/card0/pcm0c: Is a directory
[root@osprey card0]# cat /proc/asound/card0/pcm0c | grep Codec
cat: /proc/asound/card0/pcm0c: Is a directory
[root@osprey card0]# pacman -S gstreamer0.10-plugins
:: There are 5 members in group gstreamer0.10-plugins:
:: Repository extra
   1) gstreamer0.10-bad-plugins  2) gstreamer0.10-base-plugins  3) gstreamer0.10-ffmpeg
   4) gstreamer0.10-good-plugins  5) gstreamer0.10-ugly-plugins   

Enter a selection (default=all): 
warning: gstreamer0.10-bad-plugins-0.10.23-3 is up to date -- reinstalling
warning: gstreamer0.10-base-plugins-0.10.36-1 is up to date -- reinstalling
warning: gstreamer0.10-ffmpeg-0.10.13-1 is up to date -- reinstalling
resolving dependencies...
looking for inter-conflicts...

Targets (10): gstreamer0.10-ugly-0.10.19-5  libavc1394-0.5.4-1  libiec61883-1.2.0-3
             libsidplay-1.36.59-5  wavpack-4.60.1-2  gstreamer0.10-bad-plugins-0.10.23-3
             gstreamer0.10-base-plugins-0.10.36-1  gstreamer0.10-ffmpeg-0.10.13-1
             gstreamer0.10-good-plugins-0.10.31-1  gstreamer0.10-ugly-plugins-0.10.19-5

Total Download Size:    1.00 MiB
Total Installed Size:   13.08 MiB
Net Upgrade Size:       3.63 MiB

Proceed with installation? [Y/n] 
:: Retrieving packages from extra...
gstreamer0.10-base-plugi...   165.3 KiB   963K/s 00:00 [#############################] 100%
libavc1394-0.5.4-1-x86_64      32.0 KiB   759K/s 00:00 [#############################] 100%
libiec61883-1.2.0-3-x86_64     37.3 KiB   829K/s 00:00 [#############################] 100%
wavpack-4.60.1-2-x86_64       113.7 KiB   921K/s 00:00 [#############################] 100%
gstreamer0.10-good-plugi...   327.3 KiB  1124K/s 00:00 [#############################] 100%
gstreamer0.10-ugly-0.10....   160.4 KiB   908K/s 00:00 [#############################] 100%
libsidplay-1.36.59-5-x86_64   107.8 KiB   771K/s 00:00 [#############################] 100%
gstreamer0.10-ugly-plugi...    84.4 KiB   727K/s 00:00 [#############################] 100%
(10/10) checking package integrity                      [#############################] 100%
(10/10) loading package files                           [#############################] 100%
(10/10) checking for file conflicts                     [#############################] 100%
(10/10) checking available disk space                   [#############################] 100%
( 1/10) upgrading gstreamer0.10-bad-plugins             [#############################] 100%
( 2/10) upgrading gstreamer0.10-base-plugins            [#############################] 100%
( 3/10) upgrading gstreamer0.10-ffmpeg                  [#############################] 100%
( 4/10) installing libavc1394                           [#############################] 100%
( 5/10) installing libiec61883                          [#############################] 100%
( 6/10) installing wavpack                              [#############################] 100%
( 7/10) installing gstreamer0.10-good-plugins           [#############################] 100%

(gconftool-2:5520): GConf-WARNING **: Client failed to connect to the D-BUS daemon:
Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the        reply, the reply timeout expired, or the network connection was broken. 
( 8/10) installing gstreamer0.10-ugly                   [#############################] 100%
( 9/10) installing libsidplay                           [#############################] 100%
(10/10) installing gstreamer0.10-ugly-plugins           [#############################] 100%
[root@osprey card0]# rmmod snd-hda-intel -f && modprobe snd-hda-intel model=auto
[root@osprey card0]# rmmod snd-hda-intel -f && modprobe snd-hda-intel model=auto
[root@osprey card0]# rmmod snd-hda-intel -f && modprobe snd-hda-intel model=auto

Git Remote Development

Configure remote

Set up remote repos

  • Set up the bare repo and public served repo (for websites)
$ ssh user@remote
# mkdir /home/user/git/site.com.git (or /var/www/git/site.com etc.. if not shared hosting.)
# cd /home/user/git/site.com.git
# git init --bare --shared (or not shared)
  • Now choose ONLY one of the following, either a or b; don't do them both.
    • a) If you want to always edit files locally and never pull on the served location: the hook pretty much automates the file uploads upon push.
    • b) If you want the ability to also edit files directly on the served location, then you might want to keep it in its own git repository, so you will clone the bare repo out to the site's served directory. Each time you do a push from your local, you will then need to do a pull in this remote clone after each commit before you will see your changes live.

a) Post-receive hooks for automation

First read above: only do step a) or b), but not both. You have to make a choice depending on how you plan to use your remote server.

  • Make post-receive hook in bare repo which will ship files automatically into its served location
# cat > hooks/post-receive
#!/bin/sh
GIT_WORK_TREE=/home/user/public_html/site.com/ git checkout -f
(press Ctrl+D to save and exit)
# chmod +x hooks/post-receive
# mkdir /home/user/public_html/site.com (the same path as in hooks/post-receive)
# exit

b) Clone for more control

First read above: only do step a) or b), but not both. You have to make a choice depending on how you plan to use your remote server.

  • Clone the remote bare repo into the directory it will be served out of, to gain more control over editing from both locations.
# cd /home/user/public_html
# git clone /home/user/git/site.com.git
# exit

Start tracking on local

  • Start tracking on local if you haven't already
$ cd ~/projects/site.com
$ git init

Now set up connection to remote origin on your local git repo

$ cd ~/projects/site.com
$ git remote -v
  • Add your bare repo as the new origin.
  • Note, if your origin is already intact then skip the remove and add origin steps below
$ git remote rm origin (if it's still attached to GitHub or someplace else that you don't want the files going)
$ git remote add origin user@remote:/home/user/git/site.com.git
  • Push your files
$ git push origin master (this pushes to the bare repo; the hook then checks it out to the served location)

Ready to develop on local

$ cd ~/projects/site.com
$ vim index.html
$ git add -A
$ git commit -am 'first commit message'
$ git status
$ git log

After committing, run the following

$ git push
  • You're done.
  • If you chose option b, then you now need to run the following commands
$ ssh user@remote
# cd /home/user/public_html/site.com
# git pull
  • You're done.

PHP X-Debug

This topic covers installing Xdebug on a LAMP server.

Installation

 pacman -S xdebug

Configuration

  • add the following line to your php.ini on the bottom of the extensions list
    zend_extension="/lib64/php/modules/xdebug.so"
  • or use the non-64-bit one if needed
    zend_extension="/lib/php/modules/xdebug.so"
  • restart your server
   systemctl restart httpd
  • ensure xdebug is enabled in phpinfo
   php -i | grep xdebug | less

PHPStorm usage

This step details configuring xdebug for development on a machine with PhpStorm installed, separate from the LAMP server.
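A sketch of the php.ini additions on the LAMP server for remote debugging (my assumptions: xdebug 2.x option names, and 192.168.1.100 standing in for the IP of the machine running PhpStorm):

xdebug.remote_enable=1
xdebug.remote_host=192.168.1.100
xdebug.remote_port=9000
xdebug.idekey=PHPSTORM

PhpStorm then listens on port 9000 (its default for xdebug 2) and maps the server paths to your local project.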

References

xdebug docs http://xdebug.org/docs/

xdebug checker http://xdebug.org/find-binary.php

troubleshooting info http://stackoverflow.com/questions/20752260/trouble-setting-up-and-debugging-php-storm-project-from-existing-files-in-mounte

phpstorm zero configuration http://blog.jetbrains.com/phpstorm/2013/07/webinar-recording-debugging-php-with-phpstorm/

phpstorm configurations https://www.jetbrains.com/phpstorm/webhelp/configuring-xdebug.html