https://wiki.archlinux.org/api.php?action=feedcontributions&user=Ephreal&feedformat=atomArchWiki - User contributions [en]2024-03-28T19:47:21ZUser contributionsMediaWiki 1.41.0https://wiki.archlinux.org/index.php?title=Saltstack&diff=584260Saltstack2019-10-01T21:27:54Z<p>Ephreal: /* Components of Salt Stack */ Replaced systemctl commands with start and enable</p>
<hr />
<div>[[Category:System administration]]<br />
[[ja:Saltstack]]<br />
{{Style|Don't show systemctl commands.}}<br />
From [http://docs.saltstack.com/ docs.saltstack.com]:<br />
<br />
:Salt is a new approach to infrastructure management. Easy enough to get running in minutes, scalable enough to manage tens of thousands of servers, and fast enough to communicate with them in seconds.<br />
:Salt delivers a dynamic communication bus for infrastructures that can be used for orchestration, remote execution, configuration management and much more.<br />
<br />
== Installation ==<br />
<br />
[[Install]] the {{pkg|salt}} package.<br />
<br />
== Components of Salt Stack ==<br />
<br />
Salt is, at its core, a remote execution solution for running pre-defined or arbitrary commands on remote hosts. Salt uses a master/minion topology: a master server acts as a central control bus for the clients (called minions), and the minions connect back to the master.<br />
<br />
=== Salt Master ===<br />
<br />
The default configuration is suitable for the vast majority of installations. [[Start]] and [[Enable]] the '''salt-master''' service.<br />
<br />
The Salt master can also be started in the foreground in debug mode, greatly increasing the command output:<br />
# salt-master -l debug<br />
<br />
The Salt master needs to bind to two TCP ports on the system: 4505 and 4506.<br />
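<br />
To confirm that a running master is listening on these ports, one option (using ''ss'' from {{Pkg|iproute2}}) is:<br />
# ss -tlnp | grep -E ':(4505|4506)'<br />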
<br />
=== Salt Minion ===<br />
<br />
The Salt minion can operate with or without a Salt master. This article assumes that the minion will be connected to the master. For information on how to run a masterless minion, see the masterless quickstart guide: http://docs.saltstack.com/topics/tutorials/quickstart.html<br />
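<br />
For reference, a masterless minion can run execution functions locally through ''salt-call'', for example:<br />
# salt-call --local test.ping<br />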
<br />
The Salt minion only needs one piece of information to run: the network location of the master. By default the minion looks for the master at the DNS name '''salt''', so the easiest approach is to have internal DNS resolve the name salt to the Salt master's IP address. Otherwise, edit the minion configuration file and set the '''master''' option to the DNS name or IP address of the Salt master.<br />
<br />
{{hc|/etc/salt/minion|<br />
master: saltmaster.example.com}}<br />
<br />
Now that the master can be found, [[Start]] and [[Enable]] the '''salt-minion''' service.<br />
<br />
Or to run in debug mode<br />
# salt-minion -l debug<br />
<br />
=== Salt Key ===<br />
<br />
Salt authenticates minions using public key cryptography. Before a minion starts accepting commands from the master, the minion's key must be accepted. The '''salt-key''' command is used to manage all of the keys on the master. To list the keys that are on the master, run:<br />
# salt-key -L<br />
<br />
Rejected, accepted and pending keys are listed. To accept a minion's key:<br />
# salt-key -a minion.example.com<br />
<br />
Or accept all pending keys at once with:<br />
# salt-key -A<br />
<br />
=== Salt Cloud ===<br />
<br />
Salt can also be used to provision cloud servers on most major cloud providers. In order to connect to these providers, additional dependencies may be required. {{Pkg|python2-apache-libcloud}}{{Broken package link|package not found}} is required for many popular providers such as Rackspace and Amazon, and can be found in the community repositories. Further details on configuring your cloud provider can be found in the official documentation: http://docs.saltstack.com/en/latest/topics/cloud/<br />
<br />
== Salt commands ==<br />
<br />
After the minion has connected and its key has been accepted on the Salt master, you can send commands to it. Salt commands allow a vast set of functions to be executed, and specific minions or groups of minions to be targeted for execution. This makes the '''salt''' command very powerful, yet it remains usable and easy to understand.<br />
<br />
The '''salt''' command is composed of command options, a target specification, the function to execute, and arguments to the function. A simple command to start with looks like this:<br />
# salt '*' test.ping<br />
<br />
The '''*''' is the target, which specifies all minions, and '''test.ping''' tells the minions to run the '''test.ping''' function. This '''salt''' command will tell all of the minions to execute '''test.ping''' in parallel and return the results.<br />
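<br />
Targets can also be narrowed with shell globs, and arguments can be passed to the executed function. For example, assuming a minion with the ID '''web01''' exists:<br />
# salt 'web01' pkg.install vim<br />
# salt 'web*' cmd.run 'uname -a'<br />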
<br />
For more commands, see the documentation or run:<br />
# salt '*' sys.doc<br />
<br />
==Salt States==<br />
<br />
In addition to running commands, Salt can use what are known as states. A state is like a configuration file that allows a new installation to be set up in exactly the same way every time. A state can also be run on that installation weeks later to make sure the machine is still in a known configuration.<br />
<br />
===Salt Environments===<br />
States can be separated into different environments. These environments can be used for making changes in a test environment before moving to a production machine, configuring a group of servers the same way, etc. The base environment is /srv/salt by default; this directory may need to be created manually.<br />
<br />
Different environments can be set up in the master configuration file; see /etc/salt/master for more information.<br />
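<br />
As an illustrative sketch, an additional environment could be declared in the master configuration like this (the '''dev''' path is only an example):<br />
file_roots:<br />
  base:<br />
    - /srv/salt<br />
  dev:<br />
    - /srv/salt/dev<br />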
<br />
===Creating a State===<br />
A state is a text file ending in .sls located within a configured environment. The following assumes only the default base environment is set up.<br />
<br />
Create a file in /srv/salt called test.sls.<br />
# vim /srv/salt/test.sls<br />
<br />
Add the following to the file:<br />
<br />
netcat:<br />
  pkg.installed: []<br />
<br />
Now run the state:<br />
<br />
# salt '*' state.apply test<br />
<br />
Salt will search the base environment folder for anything called test.sls and apply the configuration it finds to all servers. In this case, '''netcat''' will be installed on all servers.<br />
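<br />
To preview what a state would change without actually applying it, request a dry run:<br />
# salt '*' state.apply test test=True<br />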
<br />
For more information on state file syntax and using states, see here: https://docs.saltstack.com/en/latest/topics/tutorials/starting_states.html<br />
<br />
===The top file===<br />
<br />
The top file is the main way to apply different configurations to different servers at once. The top file is called '''top.sls''' and is placed in the root of an environment. The top file configuration can be run with the following command:<br />
<br />
# salt '*' state.apply<br />
<br />
Assume there are two servers, fs01 and web01, and three states in the base environment: nettools.sls, samba.sls and apache.sls. Here is a sample top file:<br />
<br />
base:<br />
  # Applied to all servers<br />
  '*':<br />
    - nettools<br />
<br />
  # Applied only to fs01<br />
  fs01:<br />
    - samba<br />
<br />
  # Applied only to web01<br />
  web01:<br />
    - apache<br />
<br />
When '''state.apply''' is run, the top file is read and the states are applied to the correct servers, i.e. nettools on all servers, samba on fs01 and apache on web01.<br />
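<br />
To see which states the top file assigns to each minion without applying anything, the following can be used:<br />
# salt '*' state.show_top<br />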
<br />
===Scheduling Tasks===<br />
<br />
Enable the salt scheduler on the minion with<br />
<br />
# salt 'minion-name' schedule.enable<br />
<br />
and [[Install]] {{pkg|python2-dateutil}} on the master and on any minions that will be using the scheduler, then restart the salt-minion service on those machines. Remember that you can easily install {{pkg|python2-dateutil}} and restart the salt-minion service on all minions using a state or a salt '*' command.<br />
<br />
Assume samba.sls, stored in /srv/salt, needs to be run every Monday on fs01. This can be accomplished by placing the following into a state file and running it.<br />
<br />
configure_samba_daily:<br />
  schedule.present:<br />
    - function: state.sls<br />
    - job_args:<br />
      - samba<br />
    - when:<br />
      - Monday 5:00am<br />
<br />
Run<br />
<br />
# salt 'minion-name' schedule.list<br />
<br />
to verify the job was placed on the schedule.<br />
<br />
<br />
A point to note: in the configuration above, specifying state.sls as the function is what tells Salt that job_args refers to a state called samba. Do not substitute state.sls with samba.sls or any other .sls file; the function simply tells the scheduler how to interpret job_args.<br />
<br />
For more details on configuring schedules, see https://docs.saltstack.com/en/latest/ref/states/all/salt.states.schedule.html<br />
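<br />
Alternatively, roughly the same job can be sketched statically in the minion configuration on fs01 (note that the configuration-based schedule format uses '''args''' rather than '''job_args'''):<br />
schedule:<br />
  configure_samba_daily:<br />
    function: state.sls<br />
    args:<br />
      - samba<br />
    when: Monday 5:00am<br />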
<br />
==See also==<br />
* http://docs.saltstack.com/ - Official documentation</div>Ephrealhttps://wiki.archlinux.org/index.php?title=Talk:QEMU&diff=446205Talk:QEMU2016-08-10T14:25:47Z<p>Ephreal: Added in Windows 7 issue</p>
<hr />
<div>== Linear RAID ==<br />
<br />
When I was updating the article yesterday, I had tried to fit the section about linear raid (boot a VM from a partition by prepending a MBR to it) into the article better. But I'm not sure the technique described is the right one at all. It looks like it works, but wouldn't it be easier to install a bootloader directly to the partition (e.g. syslinux)? Then the VM could be booted directly from the partition simply by using it as its virtual disk.<br />
--[[User:Synchronicity|Synchronicity]] ([[User talk:Synchronicity|talk]]) 19:23, 9 May 2012 (UTC)<br />
<br />
== Creating bridge manually ==<br />
<br />
I really don't know what to do with this section. I'd say it has been superseded by [[QEMU#Creating bridge using qemu-bridge-helper]] (available since qemu-1.1, we now have qemu-1.5) - or is someone still using this method? Perhaps link to https://en.wikibooks.org/wiki/QEMU/Networking#TAP_interfaces or http://wiki.qemu.org/Documentation/Networking/NAT is sufficient. What do you think? -- [[User:Lahwaacz|Lahwaacz]] ([[User talk:Lahwaacz|talk]]) 20:42, 22 July 2013 (UTC)<br />
<br />
:Actually, I've become a happy user of this method. I've written some scripts to easily create bridge interface, TAP interface, and combined with Xyne's [http://xyne.archlinux.ca/notes/network/dhcp_with_dns.html excellent scripts] to set up NAT and launch DHCP server, I have complete solution to easily manage multiple VMs on one (or even more) bridge.<br />
:My scripts are available on github: [https://github.com/lahwaacz/archlinux-dotfiles/blob/master/Scripts/qemu-launcher.sh], [https://github.com/lahwaacz/archlinux-dotfiles/blob/master/Scripts/qemu-tap-helper.sh], [https://github.com/lahwaacz/archlinux-dotfiles/blob/master/Scripts/qemu-mac-hasher.py] but I won't probably integrate them into the wiki, I'l just leave a note when I do some more testing.<br />
:The thing is, what to do with the current content? Personally I think that links to [https://en.wikibooks.org/wiki/QEMU/Networking#TAP_interfaces], [http://wiki.qemu.org/Documentation/Networking/NAT] and my scripts are sufficient (of course others are welcome). I'd also leave the note at the end to ''disable the firewall on the bridge'', I find it extremely useful.<br />
:-- [[User:Lahwaacz|Lahwaacz]] ([[User talk:Lahwaacz|talk]]) 18:24, 5 September 2013 (UTC)<br />
<br />
:I would not remove something that works still heh.--[[User:Webdawg|Webdawg]] ([[User talk:Webdawg|talk]]) 07:31, 17 July 2016 (UTC)<br />
<br />
== Starting QEMU virtual machines with systemd ==<br />
The custom systemd service script does not work. It always fails with {{ic|Failed at step EXEC spawning /usr/bin/qemu-{type}: No such file or directory}}. To fix this, modify the ExecStart command: {{bc|1=ExecStart=/usr/bin/sh -c "/usr/bin/qemu-${type} -name %i -nographic ${args}"}}<br />
Also, {{ic|echo 'system_powerdown' &#124; nc localhost 7101}} kills the VM immediately. To fix this, change the stop script so that it simply checks each second whether the main process is still running. {{bc|1=ExecStop=/usr/bin/sh -c "${haltcmd} && while [[ $(pidof qemu-${type} | grep $MAINPID) ]]; do sleep 1; done"}}<br />
{{ic|gnu-netcat}} does not work to connect to the monitor. You need to use {{ic|openbsd-netcat}}. -- [[User:Ant32|Ant32]] ([[User talk:Ant32|talk]]) 17:48, 5 September 2013 (UTC)<br />
<br />
:The first problem related to starting the service seems rather strange - didn't you have typo error in your local {{ic|qemu@.service}} file (missing the dollar sign {{ic|$}} in {{ic|${type} }})?<br />
:The second problem is valid, systemd kills the main process when the ExecStop command exits (see {{ic|systemd.service(5)}}). If your workaround really works, it could be added to the wiki with a proper description.<br />
:-- [[User:Lahwaacz|Lahwaacz]] ([[User talk:Lahwaacz|talk]]) 21:17, 7 September 2013 (UTC)<br />
<br />
::Relevant thread on systemd-devel mailing list: [http://lists.freedesktop.org/archives/systemd-devel/2013-September/012982.html] -- [[User:Lahwaacz|Lahwaacz]] ([[User talk:Lahwaacz|talk]]) 00:00, 15 September 2013 (UTC)<br />
<br />
== Kexec Hackery When Using a Real Partition ==<br />
<br />
After banging my head against a wall long enough and figuring out what {{ic|-kernel}} and {{ic|-initrd}} were really calling, I put a note above the appropriate section and mentioned two ways to use the guest's images. (Otherwise, you'll have to worry if the host and guest images match.) The first -- mount the partition(s) -- is more appropriate for "low-volume-handling" of VMs. The second -- using kexec -- becomes more useful when you're juggling more than a few VMs.<br />
<br />
I'm only mentioning this hack because (as of now) [[Kexec]] only mentions use for rebooting into another kernel, not switching out the kernel before the system is even up. This hack comes from https://digitalocean.uservoice.com/forums/136585-digitalocean/suggestions/2814988-give-option-to-use-the-droplet-s-own-bootloader- which has two suggestions. The most recent, using systemd units by jkuan, doesn't work because jkuan tried to copy a {{ic|.target}} file into a {{ic|.service}} file and systemd wants {{ic|ExecStart}} in a {{ic|.service}} file. The second one, replacing {{ic|/usr/bin/init}} by andrew_sparks, works for me on my Arch instance at DigitalOcean.<br />
<br />
Adaptation from said post:<br />
<br />
# pacman -S kexec-tools<br />
# pacman -R systemd-sysvcompat<br />
<br />
{{hc|1=/tmp/init|2=<br />
#!/bin/sh<br />
<br />
kexec --load /boot/vmlinuz-linux --initrd=/boot/initramfs-linux.img --append="root=/dev/sda init=/usr/lib/systemd/systemd" &&<br />
mount -o ro,remount / && kexec -e<br />
exec /usr/lib/systemd/systemd<br />
}}<br />
<br />
# cd [/path/to/vm]/usr/bin<br />
# mv init init.dist<br />
# cp /tmp/init ./<br />
# chmod 755 init<br />
<br />
I'm leaving this on the Talk page as I haven't even tried it out in QEMU myself. Also, my eyes are about ready to pop out of my head, so I'm barring myself from figuring out the appropriate way to edit this in for the time being. [[User:BrainwreckedTech|BrainwreckedTech]] ([[User talk:BrainwreckedTech|talk]]) 21:23, 14 January 2014 (UTC)<br />
<br />
== Replace -net with -netdev ==<br />
<br />
The {{ic|-net}} option is deprecated and replaced by {{ic|-netdev}}. I think this article should be modified to reflect that.<br />
http://en.wikibooks.org/wiki/QEMU/Networking#cite_ref-1<br />
[[User:Axper|axper]] ([[User talk:Axper|talk]]) 18:12, 1 July 2014 (UTC)<br />
<br />
== I'm rewriting the network section ==<br />
<br />
https://wiki.archlinux.org/index.php/User:Axper/sandbox/qemu_network<br />
[[User:Axper|axper]] ([[User talk:Axper|talk]]) 20:07, 2 July 2014 (UTC)<br />
<br />
::I think a lot of networking topics could be moved outside of the QEMU page. Many virtualization applications share the same basic principles with regards to networking, such as tun/tap creating, bridges, VDE, etc. There are a few networking schemes that are QEMU-specific, for example multicast sockets and {{ic|-net socket,...}}, and these could be mentioned on the QEMU page, although these are less reliable and rarely used in comparison to tap devices. We should also of course note the QEMU-specific command line options in the QEMU page, but for general concepts and commands independent of the virtualization applications, they could go on pages dedicated to the task. The best example is VDE, which is in no way limited to QEMU, yet it still doesn't have its own page on the Arch wiki.<br />
<br />
::Incidentally, I'm planning on rewriting [[User Mode Linux]] (yes, I promise I will get around to it), which happens to share the "tap with bridge" and VDE concepts with QEMU. It would be nice if I could link to pages dedicated to those topics and only write UML-specific commands in the page, instead of duplicating a bunch of general information. I'm not very familiar with Xen, LXC, Docker or the like, but I would suspect that they also share some networking infrastructure. We could possibly even create a category just for these types of pages, for example "Virtual Networking" or "Advanced Networking". [[User:EscapedNull|EscapedNull]] ([[User talk:EscapedNull|talk]]) 13:32, 19 February 2015 (UTC)<br />
<br />
== -enable-kvm vs -machine type=pc,accel=kvm ==<br />
<br />
The section [[QEMU#Enabling_KVM]] recommends {{ic|-enable-kvm}}, while [[QEMU#Virtual_machine_runs_too_slowly]] recommends {{ic|1=-machine type=pc,accel=kvm}}. Is there any difference between the two? Is one preferred over the other? Should we just link to the former section from the latter (and possibly move both command line switches to the same section)? [[User:EscapedNull|EscapedNull]] ([[User talk:EscapedNull|talk]]) 17:23, 18 January 2015 (UTC)<br />
<br />
== virtio-gpu ==<br />
<br />
Any tutorial on using the new virtio-gpu which is introduced in qemu-2.4 and kernel 4.2? [[User:Adam900710|Adam900710]] ([[User talk:Adam900710|talk]]) 02:44, 19 August 2015 (UTC)<br />
<br />
== host only networking ==<br />
<br />
I added a quick and easy method but it was deleted. I found errors in what is here. Is it worth my time to correct them or will they be deleted? {{unsigned|16:39, 4 January 2016|Netskink}}<br />
<br />
:You are welcome to make any corrections. Nobody can tell you if they will be kept or reverted beforehand, but if you're afraid to waste your time feel free to just point them out using an [[Help:Template|article status template]] (should be less time consuming). -- [[User:Lahwaacz|Lahwaacz]] ([[User talk:Lahwaacz|talk]]) 22:28, 21 February 2016 (UTC)<br />
<br />
== <s>Qemu-bridge-helper broken QEMU 2.5.0-1</s> ==<br />
<br />
/etc/qemu/bridge.conf.sample does not exist, therefore the wiki entry is impossible to follow.<br />
<br />
I have tried to find this file on the internet with no success.<br />
<br />
--[[User:DontPanic|DontPanic]] ([[User talk:DontPanic|talk]]) 18:34, 5 April 2016 (UTC)<br />
<br />
:This appears to be {{Bug|46791}}. – [[User:Kynikos|Kynikos]] ([[User talk:Kynikos|talk]]) 00:58, 7 April 2016 (UTC)<br />
<br />
::The sample file was removed deliberately from the package in [https://git.archlinux.org/svntogit/packages.git/commit/trunk?h=packages/qemu&id=c5ecbacc08a113fbdc54a0e25babb391f36cd6db] and the wiki has been updated, closing. -- [[User:Lahwaacz|Lahwaacz]] ([[User talk:Lahwaacz|talk]]) 08:35, 4 June 2016 (UTC)<br />
<br />
== Windows 7 specific issues ==<br />
<br />
I have noticed that for me, any attempts at installing Windows 7 using qemu with virt-manager as a frontend stalls on "Starting Windows." This is immediately after booting the computer for the first time. In Virt-manager, I am able to change the Display from QXL to Cirrus to fix the issue. I'm not sure if this applies to this page in particular, but if so, might be worth adding to the "Troubleshooting" section -- [[User:Ephreal|Ephreal]] ([[User talk:Ephreal|talk]]) 14:25, 10 August 2016 (UTC)</div>Ephrealhttps://wiki.archlinux.org/index.php?title=User_talk:Ephreal&diff=371493User talk:Ephreal2015-04-28T03:20:24Z<p>Ephreal: Created page with "Please leave feedback on my edits here. I'd like to know specifically what I can do to improve my contributions and how useful people think my edits are. ~~~~"</p>
<hr />
<div>Please leave feedback on my edits here. I'd like to know specifically what I can do to improve my contributions and how useful people think my edits are.<br />
<br />
[[User:Ephreal|Ephreal]] ([[User talk:Ephreal|talk]]) 03:20, 28 April 2015 (UTC)</div>Ephrealhttps://wiki.archlinux.org/index.php?title=Xen&diff=371491Xen2015-04-28T03:02:33Z<p>Ephreal: Added header "Creating bridge with Netctl" and created header and section "Creating bridge with Network Manager." Having used gnome's Network manager and found it finicky, I believed it a good idea to add it here for users who wish to use gnome.</p>
<hr />
<div>[[Category:Hypervisors]]<br />
[[Category:Kernel]]<br />
[[de:Xen]]<br />
[[es:Xen]]<br />
[[ja:Xen]]<br />
[[ru:Xen]]<br />
{{Related articles start}}<br />
{{Related|:Category:Hypervisors}}<br />
{{Related|Moving an existing install into (or out of) a virtual machine}}<br />
{{Related articles end}}<br />
<br />
From [http://wiki.xen.org/wiki/Xen_Overview Xen Overview]:<br />
<br />
:''Xen is an open-source type-1 or baremetal hypervisor, which makes it possible to run many instances of an operating system or indeed different operating systems in parallel on a single machine (or host). Xen is the only type-1 hypervisor that is available as open source. Xen is used as the basis for a number of different commercial and open source applications, such as: server virtualization, Infrastructure as a Service (IaaS), desktop virtualization, security applications, embedded and hardware appliances.''<br />
<br />
{{Warning|Do not run other virtualization software such as [[VirtualBox]] when running Xen hypervisor, it might hang your system. See this [https://www.virtualbox.org/ticket/12146 bug report (wontfix)].}}<br />
<br />
== Introduction ==<br />
<br />
The Xen hypervisor is a thin layer of software which emulates a computer architecture allowing multiple operating systems to run simultaneously. The hypervisor is started by the boot loader of the computer it is installed on. Once the hypervisor is loaded, it starts the [http://wiki.xen.org/wiki/Dom0 dom0] (short for "domain 0", sometimes called the host or privileged domain) which in our case runs Arch Linux. Once the ''dom0'' has started, one or more [http://wiki.xen.org/wiki/DomU domU] (short for user domains, sometimes called VMs or guests) can be started and controlled from the ''dom0''. Xen supports both paravirtualized (PV) and hardware virtualized (HVM) ''domU''. See [http://wiki.xen.org/wiki/Xen_Overview Xen.org] for a full overview.<br />
<br />
== System requirements ==<br />
The Xen hypervisor requires kernel level support which is included in recent Linux kernels and is built into the {{Pkg|linux}} and {{Pkg|linux-lts}} Arch kernel packages. To run HVM ''domU'', the physical hardware must have either Intel VT-x or AMD-V (SVM) virtualization support. In order to verify this, run the following command when the Xen hypervisor is not running:<br />
$ grep -E "(vmx|svm)" --color=always /proc/cpuinfo<br />
If the above command does not produce output, then hardware virtualization support is unavailable and your hardware is unable to run HVM ''domU'' (or you are already running the Xen hypervisor). If you believe the CPU supports one of these features, you should access the host system's BIOS configuration menu during the boot process and check whether options related to virtualization support have been disabled. If such an option exists and is disabled, then enable it, boot the system and repeat the above command. The Xen hypervisor also supports PCI passthrough, where PCI devices can be passed directly to the ''domU'' even in the absence of ''dom0'' support for the device. In order to use PCI passthrough, the CPU must support IOMMU/VT-d.<br />
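<br />
For PCI passthrough, one way to check whether the IOMMU is active (after enabling VT-d/AMD-Vi in the firmware) is to look for DMAR/IOMMU messages in the kernel log:<br />
# dmesg | grep -e DMAR -e IOMMU<br />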
<br />
== Configuring dom0 ==<br />
The Xen hypervisor relies on a full install of the base operating system. Before attempting to install the Xen hypervisor, the host machine should have a fully operational and up-to-date install of Arch Linux. This installation can be a minimal install with only the base package and does not require a [[Desktop environment]] or even [[Xorg]]. If you are building a new host from scratch, see the [[Installation guide]] for instructions on installing Arch Linux. The following configuration steps are required to convert a standard installation into a working ''dom0'' running on top of the Xen hypervisor:<br />
<br />
# Installation of the Xen hypervisor<br />
# Modification of the bootloader to boot the Xen hypervisor<br />
# Creation of a network bridge<br />
# Installation of Xen systemd services<br />
<br />
=== Installation of the Xen hypervisor ===<br />
To install the Xen hypervisor, install either the current stable {{AUR|xen}} or the bleeding edge unstable {{AUR|xen-git}} package, both available in the [[Arch User Repository]]. Both packages provide the Xen hypervisor, current xl interface and all configuration and support files, including systemd services. The [[multilib]] repository needs to be enabled to install Xen. Install the {{AUR|xen-docs}} package from the [[Arch User Repository]] for the man pages and documentation.<br />
<br />
==== With UEFI support ====<br />
It is possible to boot the Xen hypervisor through the bare UEFI system on a modern computer, but this requires you to first recompile binutils to add support for x86_64-pep emulation. Using the Arch way of doing things, you would use the [[Arch Build System]] and add {{ic|1=--enable-targets=x86_64-pep}} to the build options of the binutils PKGBUILD file:<br />
--disable-werror '''--enable-targets=x86_64-pep'''<br />
<br />
{{Note|1=<br />
{{Accuracy|This Note is not very meaningful without a link to a bug report.}}<br />
<br />
This will not work on the newest version of binutils; you will need to downgrade to an older version from the SVN repository:<br />
{{bc|<nowiki><br />
$ svn checkout --depth empty svn://svn.archlinux.org/packages<br />
$ cd packages<br />
$ svn update -r 215066 binutils<br />
</nowiki>}}<br />
<br />
Then compile and install. See [https://nims11.wordpress.com/2013/02/17/downgrading-packages-in-arch-linux-the-worst-case-scenario/] for details of the procedure. <br />
}}<br />
<br />
The next time binutils gets updated on your system it will be overwritten with the official version again. However, you only need this change to (re-)compile the UEFI-aware Xen hypervisor; it is not needed at either boot or run time.<br />
<br />
Now, when you compile Xen with your x86_64-pep aware binutils, a UEFI kernel will be built and installed by default. It is located at {{ic|/usr/lib/efi/xen-?.?.?.efi}}, where the "?" characters represent the version digits. The other files that also begin with "xen" are simply symlinks back to the real file and can be ignored. However, the EFI binary needs to be manually copied to {{ic|/boot}}, e.g.:<br />
<br />
# cp /usr/lib/efi/xen-4.4.0.efi /boot<br />
<br />
=== Modification of the bootloader ===<br />
{{Expansion|Lots of other boot loaders could/should be covered, at least the most common like Gummiboot.}}<br />
<br />
{{Warning|Never assume your system will boot after changes to the boot system. This might be the most common mistake made by new and experienced users alike. Make sure you have an alternative way to boot your system, such as a USB stick or other live media, '''BEFORE''' you make changes to your boot system.}}<br />
<br />
The boot loader must be modified to load a special Xen kernel ({{ic|xen.gz}} or in the case of UEFI {{ic|xen.efi}}) which is then used to boot the normal kernel. To do this a new bootloader entry is needed.<br />
<br />
==== UEFI ====<br />
There are several ways UEFI can be involved in booting Xen, but this section covers the simplest way to get Xen to boot with the help of the EFI stub.<br />
<br />
Make sure that you have compiled Xen with UEFI support enabled according to [[Xen#With UEFI support]].<br />
<br />
It is possible to boot a kernel from UEFI just by placing it on the EFI partition, but since Xen at least needs to know what kernel should be booted as dom0, a minimal configuration file is required. Create or edit a {{ic|/boot/xen.cfg}} file according to system requirements, for example:<br />
<br />
{{hc|/boot/xen.cfg|<nowiki><br />
[global]<br />
default=xen<br />
<br />
[xen]<br />
options=console=vga loglvl=all noreboot<br />
kernel=vmlinuz-linux root=/dev/sda2 rw ignore_loglevel #earlyprintk=xen<br />
ramdisk=initramfs-linux.img<br />
</nowiki>}}<br />
<br />
It might be necessary to use [[UEFI#efibootmgr|efibootmgr]] to set boot order and other parameters. If booting fails, drop to the built-in [[UEFI#Launching UEFI Shell|UEFI shell]] and try to launch manually. For example:<br />
Shell> fs0:<br />
FS0:\> xen-4.4.0.efi<br />
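<br />
As a sketch of adding a permanent UEFI boot entry for the Xen binary copied to the root of the EFI system partition (the disk and partition numbers are examples and must be adapted):<br />
# efibootmgr --create --disk /dev/sda --part 1 --label "Xen" --loader '\xen-4.4.0.efi'<br />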
<br />
==== GRUB ====<br />
For [[GRUB]] users, the Xen package provides the {{ic|/etc/grub.d/09_xen}} generator file. The file {{ic|/etc/xen/grub.conf}} can be edited to customize the Xen boot commands. For example, to allocate 512 MiB of RAM to ''dom0'' at boot, modify {{ic|/etc/xen/grub.conf}} by replacing the line:<br />
#XEN_HYPERVISOR_CMDLINE="xsave=1"<br />
<br />
with<br />
XEN_HYPERVISOR_CMDLINE="dom0_mem=512M xsave=1"<br />
<br />
After customizing the options, update the bootloader configuration with the following command:<br />
# grub-mkconfig -o /boot/grub/grub.cfg<br />
<br />
More information on using the GRUB bootloader is available at [[GRUB]].<br />
<br />
==== Syslinux ====<br />
For [[syslinux]] users, add a stanza like this to your {{ic|/boot/syslinux/syslinux.cfg}}:<br />
LABEL xen<br />
MENU LABEL Xen<br />
KERNEL mboot.c32<br />
APPEND ../xen-X.Y.Z.gz --- ../vmlinuz-linux console=tty0 root=/dev/sdaX ro --- ../initramfs-linux.img<br />
<br />
where {{ic|X.Y.Z}} is your xen version and {{ic|/dev/sdaX}} is your [[fstab#Identifying_filesystems|root partition]].<br />
<br />
This also requires {{ic|mboot.c32}} to be in the same directory as {{ic|syslinux.cfg}}. If you do not have {{ic|mboot.c32}} in {{ic|/boot/syslinux}}, copy it from:<br />
# cp /usr/lib/syslinux/bios/mboot.c32 /boot/syslinux<br />
<br />
=== Creation of a network bridge ===<br />
<br />
{{Expansion|Only netctl out of 5 [[:Category:Network managers|network managers]] is explained}}<br />
{{Merge|Bridge with netctl}}<br />
<br />
Xen requires that network communications between ''domU'' and the ''dom0'' (and beyond) be set up manually. The use of both DHCP and static addressing is possible, and the choice should be determined by the network topology. Complex setups are possible; see the [http://wiki.xen.org/wiki/Xen_Networking Networking] article on the Xen wiki for details and {{ic|/etc/xen/scripts}} for scripts for various networking configurations. A basic bridged network, in which a virtual switch is created in ''dom0'' that every ''domU'' is attached to, can be set up by modifying the example configuration files provided by [[Netctl]] in {{ic|/etc/netctl/examples}}. By default, Xen expects a bridge to exist named {{ic|xenbr0}}.<br />
<br />
=== Creating a bridge with netctl ===<br />
<br />
To set this up with netctl, do the following:<br />
<br />
# cd /etc/netctl<br />
# cp examples/bridge xenbridge-dhcp<br />
<br />
Make the following changes to {{ic|/etc/netctl/xenbridge-dhcp}}:<br />
Description="Xen bridge connection"<br />
Interface=xenbr0<br />
Connection=bridge<br />
BindsToInterfaces=(eth0) # Use the name of the external interface found with the 'ip link' command<br />
IP=dhcp<br />
assuming your existing network connection is called {{ic|eth0}}. <br />
<br />
Start the network bridge with:<br />
# netctl start xenbridge-dhcp<br />
<br />
When the prompt returns, check that all is well:<br />
{{hc|# brctl show|<br />
bridge name bridge id STP enabled interfaces<br />
xenbr0 8000.001a9206c0c0 no eth0<br />
}}<br />
<br />
If the bridge is working it can be set to start automatically after rebooting with:<br />
# netctl enable xenbridge-dhcp<br />
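<br />
If static addressing is preferred over DHCP, a profile along the following lines can be used instead; this is only a sketch based on the netctl {{ic|examples/bridge}} template, and the addresses shown are placeholders that must be adapted to your network:<br />
 Description="Xen bridge with a static address"<br />
 Interface=xenbr0<br />
 Connection=bridge<br />
 BindsToInterfaces=(eth0)<br />
 IP=static<br />
 Address=('192.168.1.10/24')<br />
 Gateway='192.168.1.1'<br />
 DNS=('192.168.1.1')<br />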
<br />
=== Creating bridge with Network Manager ===<br />
<br />
GNOME's Network Manager can sometimes be troublesome. If the bridge creation steps outlined in [[Network bridge]] are unclear or do not work, the following steps may help.<br />
<br />
Open the Network Settings and disable the interface you wish to use in your bridge (e.g. {{ic|enp5s0}}): set it to off and uncheck "connect automatically".<br />
<br />
Create a new bridge connection profile by clicking on the "+" symbol in the bottom left of the network settings. Optionally, run:<br />
# nm-connection-editor<br />
<br />
to bring up the window immediately. Once the window opens, select Bridge.<br />
<br />
Click "Add" next to the "Bridged Connections" and select the interface you wished to use in your bridge (ex. Ethernet). Select the device mac address that corresponds to the interface you intend to use and save the settings<br />
<br />
If your bridge is going to receive an IP address via DHCP, leave the IPv4/IPv6 sections as they are. If DHCP is not running for this particular connection, make sure to assign the bridge an IP address; all connections will fail otherwise. If you forget to add the IP address when you first create the bridge, it can always be edited later.<br />
<br />
Now, as root, run: <br />
# nmcli con show<br />
<br />
You should see a connection that matches the name of the bridge you just created. Highlight and copy the UUID of that connection, and then run (again as root):<br />
# nmcli con up <UUID OF CONNECTION><br />
<br />
A new connection should appear under the network settings. It may take 30 seconds to a minute. To confirm that it is up and running, run:<br />
# brctl show<br />
<br />
to show a list of active bridges.<br />
<br />
Reboot. If everything works properly after a reboot (i.e. the bridge starts automatically), then you are all set.<br />
<br />
Optionally, in your network settings, remove the connection profile on your bridge interface that does ''not'' connect to the bridge. This just keeps things from being confusing later on.<br />
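<br />
Alternatively, the following nmcli commands may achieve roughly the same result from the command line; the interface name {{ic|enp5s0}} below is an example and should be replaced with your actual device:<br />
 # nmcli connection add type bridge ifname xenbr0 con-name xenbr0<br />
 # nmcli connection add type bridge-slave ifname enp5s0 master xenbr0<br />
 # nmcli connection up xenbr0<br />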
<br />
== Installation of Xen systemd services ==<br />
The Xen ''dom0'' requires the {{ic|xenstored}}, {{ic|xenconsoled}} and {{ic|xendomains}} [[systemd#Using units|services]] to be started and possibly enabled.<br />
<br />
== Confirming successful installation ==<br />
Reboot your ''dom0'' host and ensure that the Xen kernel boots correctly and that all settings survive a reboot. A properly set up ''dom0'' should report the following when you run {{ic|xl list}} as root:<br />
{{hc|# xl list|<br />
Name ID Mem VCPUs State Time(s)<br />
Domain-0 0 511 2 r----- 41652.9}}<br />
Of course, the Mem, VCPUs and Time columns will be different depending on machine configuration and uptime. The important thing is that ''dom0'' is listed.<br />
<br />
In addition to the required steps above, see [http://wiki.xen.org/wiki/Xen_Best_Practices best practices for running Xen], which includes information on allocating a fixed amount of memory and how to dedicate (pin) a CPU core for ''dom0'' use. It also may be beneficial to create a xenfs filesystem mount point by adding the following line to {{ic|/etc/fstab}}:<br />
none /proc/xen xenfs defaults 0 0<br />
<br />
== Using Xen ==<br />
Xen supports both paravirtualized (PV) and hardware virtualized (HVM) ''domU''. In the following sections the steps for creating HVM and PV ''domU'' running Arch Linux are described. In general, the steps for creating an HVM ''domU'' are independent of the ''domU'' OS and HVM ''domU'' support a wide range of operating systems including Microsoft Windows. To use HVM ''domU'' the ''dom0'' hardware must have virtualization support. Paravirtualized ''domU'' do not require virtualization support, but instead require modifications to the guest operating system making the installation procedure different for each operating system (see the [http://wiki.xen.org/wiki/Category:Guest_Install Guest Install] page of the Xen wiki for links to instructions). Some operating systems (e.g., Microsoft Windows) cannot be installed as a PV ''domU''. In general, HVM ''domU'' often run slower than PV ''domU'' since HVMs run on emulated hardware. While there are some common steps involved in setting up PV and HVM ''domU'', the processes are substantially different. In both cases, for each ''domU'', a "hard disk" will need to be created and a configuration file needs to be written. Additionally, for installation each ''domU'' will need access to a copy of the installation ISO stored on the ''dom0'' (see the [https://www.archlinux.org/download/ Download Page] to obtain the Arch Linux ISO).<br />
<br />
=== Create a domU "hard disk" ===<br />
Xen supports a number of different types of "hard disks" including [[LVM|Logical Volumes]], [[Partitioning|raw partitions]], and image files. To create a [[Wikipedia: Sparse file|sparse file]] that will grow to a maximum of 10 GiB, called {{ic|domU.img}}, use:<br />
$ truncate -s 10G domU.img<br />
If file IO speed is of greater importance than domain portability, using [[LVM|Logical Volumes]] or [[Partitioning|raw partitions]] may be a better choice.<br />
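<br />
For example, assuming a volume group named {{ic|vg0}} already exists (the names here are only illustrative), a 10 GiB logical volume for the ''domU'' could be created with:<br />
 # lvcreate -L 10G -n domU_disk vg0<br />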
<br />
Xen may present any partition or disk available to the host machine to a domain as either a partition or disk. This means that, for example, an LVM partition on the host can appear as a hard drive (and hold multiple partitions) to a domain. Note that making sub-partitions on a partition will make accessing those partitions on the host machine more difficult. See the kpartx man page for information on how to map out partitions within a partition.<br />
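<br />
As a sketch, assuming the ''domU'' disk is the logical volume {{ic|/dev/vg0/domU_disk}} (an example name) and contains its own partition table, the partitions inside it can be mapped on the host with:<br />
 # kpartx -av /dev/vg0/domU_disk<br />
and the mappings can be removed again with:<br />
 # kpartx -d /dev/vg0/domU_disk<br />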
<br />
=== Create a domU configuration ===<br />
Each ''domU'' requires a separate configuration file that is used to create the virtual machine. Full details about the configuration files can be found at the [http://wiki.xenproject.org/wiki/XenConfigurationFileOptions Xen Wiki] or the {{ic|xl.cfg}} man page. Both HVM and PV ''domU'' share some components of the configuration file. These include:<br />
<br />
name = "domU"<br />
memory = 256<br />
disk = [ "file:/path/to/ISO,sdb,r", "phy:/path/to/partition,sda1,w" ]<br />
vif = [ 'mac=00:16:3e:XX:XX:XX,bridge=xenbr0' ]<br />
<br />
The {{ic|1=name=}} is the name by which the xl tools manage the ''domU'' and needs to be unique across all ''domU''. The {{ic|1=disk=}} includes information about both the installation media ({{ic|file:}}) and the partition created for the ''domU'' ({{ic|phy:}}). If an image file is being used instead of a physical partition, the {{ic|phy:}} needs to be changed to {{ic|file:}}. The {{ic|1=vif=}} defines a network controller. The {{ic|00:16:3e}} MAC block is reserved for Xen domains, so the last three octets of the {{ic|1=mac=}} must be randomly filled in (hex values 0-9 and a-f only).<br />
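<br />
One possible way to generate such a random suffix is the following bash one-liner (only an illustration; any method that yields three random hex octets will do):<br />
 $ printf '00:16:3e:%02x:%02x:%02x\n' $((RANDOM%256)) $((RANDOM%256)) $((RANDOM%256))<br />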
<br />
=== Managing a domU ===<br />
If a ''domU'' should be started on boot, create a symlink to the configuration file in {{ic|/etc/xen/auto}} and ensure the {{ic|xendomains}} service is set up correctly. Some useful commands for managing ''domU'' are:<br />
# xl top<br />
# xl list<br />
# xl console domUname<br />
# xl shutdown domUname<br />
# xl destroy domUname<br />
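<br />
For example, assuming a configuration file named {{ic|/etc/xen/archdomu.cfg}} (the name is only an example), it can be marked for automatic start with:<br />
 # mkdir -p /etc/xen/auto<br />
 # ln -s /etc/xen/archdomu.cfg /etc/xen/auto/archdomu.cfg<br />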
<br />
== Configuring a hardware virtualized (HVM) Arch domU ==<br />
To use HVM ''domU'', install the {{Pkg|mesa-libgl}} and {{Pkg|bluez-libs}} packages.<br />
<br />
A minimal configuration file for a HVM Arch ''domU'' is:<br />
<br />
name = 'HVM_domU'<br />
builder = 'hvm'<br />
memory = 256<br />
vcpus = 2<br />
disk = [ 'phy:/dev/mapper/vg0-hvm_arch,xvda,w', 'file:/path/to/ISO,hdc:cdrom,r' ]<br />
vif = [ 'mac=00:16:3e:00:00:00,bridge=xenbr0' ]<br />
vnc = 1<br />
vnclisten = '0.0.0.0'<br />
vncdisplay = 1<br />
<br />
Since HVM machines do not have a console, they can only be connected to via a [[Vncserver|vncviewer]]. The configuration file above allows unauthenticated remote access to the ''domU'' vncserver and is not suitable for unsecured networks. The vncserver will be available on port {{ic|590X}} of the ''dom0'', where X is the value of {{ic|vncdisplay}}. The ''domU'' can be created with:<br />
<br />
# xl create /path/to/config/file<br />
<br />
and its status can be checked with<br />
<br />
# xl list<br />
<br />
Once the ''domU'' is created, connect to it via the vncserver and install Arch Linux as described in the [[Installation guide]].<br />
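<br />
For example, with {{ic|1=vncdisplay = 1}} as above and a hypothetical ''dom0'' address of 192.168.1.10, a VNC client could connect with:<br />
 $ vncviewer 192.168.1.10:1<br />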
<br />
== Configuring a paravirtualized (PV) Arch domU ==<br />
A minimal configuration file for a PV Arch ''domU'' is:<br />
name = "PV_domU"<br />
kernel = "/mnt/arch/boot/x86_64/vmlinuz"<br />
ramdisk = "/mnt/arch/boot/x86_64/archiso.img"<br />
extra = "archisobasedir=arch archisolabel=ARCH_201301"<br />
memory = 256<br />
disk = [ "phy:/path/to/partition,sda1,w", "file:/path/to/ISO,sdb,r" ]<br />
vif = [ 'mac=00:16:3e:XX:XX:XX,bridge=xenbr0' ]<br />
This file needs to be tweaked for your specific use. Most importantly, the {{ic|1=archisolabel=ARCH_201301}} line must be edited to use the release year/month of the ISO being used. If you want to install 32-bit Arch, change the kernel and ramdisk paths from {{ic|x86_64}} to {{ic|i686}}.<br />
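<br />
The volume label of the ISO can be checked, for example, with:<br />
 $ blkid -s LABEL -o value /path/to/ISO<br />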
<br />
Before creating the ''domU'', the installation ISO must be loop-mounted. To do this, ensure the directory {{ic|/mnt}} exists and is empty, then run the following command (being sure to fill in the correct ISO path):<br />
# mount -o loop /path/to/iso /mnt<br />
<br />
Once the ISO is mounted, the ''domU'' can be created with:<br />
<br />
# xl create -c /path/to/config/file<br />
<br />
The "-c" option will enter the ''domU'''s console when successfully created. Then you can install Arch Linux as described in the [[Installation guide]], but with the following deviations. The block devices listed in the disks line of the cfg file will show up as {{ic|/dev/xvd*}}. Use these devices when partitioning the ''domU''. After installation and before the ''domU'' is rebooted, the {{ic|xen-blkfront}}, {{ic|xen-fbfront}}, {{ic|xen-netfront}}, {{ic|xen-kbdfront}} modules must be added to [[Mkinitcpio]]. Without these modules, the ''domU'' will not boot correctly. For booting, it is not necessary to install Grub. Xen has a Python-based grub emulator, so all that is needed to boot is a {{ic|grub.cfg}} file: (It may be necessary to create the {{ic|/boot/grub}} directory)<br />
{{hc|/boot/grub/grub.cfg|<nowiki>menuentry 'Arch GNU/Linux, with Linux core repo kernel' --class arch --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-core repo kernel-true-__UUID__' {<br />
insmod gzio<br />
insmod part_msdos<br />
insmod ext2<br />
set root='hd0,msdos1'<br />
if [ x$feature_platform_search_hint = xy ]; then<br />
search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1 __UUID__<br />
else<br />
search --no-floppy --fs-uuid --set=root __UUID__<br />
fi<br />
echo 'Loading Linux core repo kernel ...'<br />
linux /boot/vmlinuz-linux root=UUID=__UUID__ ro<br />
echo 'Loading initial ramdisk ...'<br />
initrd /boot/initramfs-linux.img<br />
}</nowiki>}}<br />
This file must be edited to match the UUID of the root partition. From within the ''domU'', run the following command:<br />
# blkid<br />
Replace all instances of {{ic|__UUID__}} with the real UUID of the root partition (the one that mounts as {{ic|/}}):<br />
# sed -i 's/__UUID__/12345678-1234-1234-1234-123456789abcd/g' /boot/grub/grub.cfg<br />
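<br />
As mentioned above, the Xen front-end modules must also be added to the initramfs before rebooting. A minimal sketch, assuming the default {{ic|linux}} preset (the exact {{ic|MODULES}} syntax depends on the mkinitcpio version):<br />
{{hc|/etc/mkinitcpio.conf|<nowiki>MODULES=(xen-blkfront xen-fbfront xen-netfront xen-kbdfront)</nowiki>}}<br />
Then regenerate the initramfs:<br />
 # mkinitcpio -p linux<br />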
<br />
Shut down the ''domU'' with the {{ic|poweroff}} command. The console will be returned to the hypervisor when the domain is fully shut down, and the domain will no longer appear in the xl domains list. Now the ISO file may be unmounted:<br />
# umount /mnt<br />
The ''domU'' cfg file should now be edited. Delete the {{ic|1=kernel =}}, {{ic|1=ramdisk =}}, and {{ic|1=extra =}} lines and replace them with the following line:<br />
bootloader = "pygrub"<br />
Also remove the ISO disk from the {{ic|1=disk =}} line.<br />
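<br />
After these edits, the configuration file should look roughly like this (paths and MAC address as chosen earlier):<br />
 name = "PV_domU"<br />
 bootloader = "pygrub"<br />
 memory = 256<br />
 disk = [ "phy:/path/to/partition,sda1,w" ]<br />
 vif = [ 'mac=00:16:3e:XX:XX:XX,bridge=xenbr0' ]<br />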
<br />
The Arch ''domU'' is now set up. It may be started with the same command as before:<br />
# xl create -c /etc/xen/archdomu.cfg<br />
<br />
== Common Errors ==<br />
<br />
=== "xl list" complains about libxl ===<br />
Either you have not booted into the Xen system, or the Xen modules listed in the {{ic|xencommons}} script are not installed.<br />
<br />
=== "xl create" fails ===<br />
Check that the guest's kernel is located correctly and check the {{ic|pv-xxx.cfg}} file for spelling mistakes (such as using {{ic|initrd}} instead of {{ic|ramdisk}}).<br />
<br />
=== Arch Linux guest hangs with a ctrl-d message ===<br />
Press {{ic|ctrl-d}} until you get back to a prompt, then rebuild the ''domU'''s initramfs with the Xen modules as described in [[#Configuring a paravirtualized (PV) Arch domU]].<br />
<br />
=== Error message "failed to execute '/usr/lib/udev/socket:/org/xen/xend/udev_event' 'socket:/org/xen/xend/udev_event': No such file or directory" ===<br />
This is caused by {{ic|/etc/udev/rules.d/xend.rules}}. Xend is deprecated and not used, so it is safe to remove that file.<br />
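<br />
For example:<br />
 # rm /etc/udev/rules.d/xend.rules<br />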
<br />
==Resources==<br />
* [http://www.xen.org/ The homepage at xen.org]<br />
* [http://wiki.xen.org/wiki/Main_Page The wiki at xen.org ]</div>Ephreal