<div>[[Category:Networking]]<br />
[[ja:InfiniBand]]<br />
This page explains how to set up, diagnose, and benchmark [[Wikipedia:InfiniBand|InfiniBand]] networks.<br />
<br />
== Introduction ==<br />
<br />
=== Overview ===<br />
<br />
InfiniBand (abbreviated IB) is an alternative to Ethernet and Fibre Channel that provides high bandwidth and low latency. IB can transfer data directly from a storage device on one machine to userspace on another, bypassing the overhead of system calls. Unlike Ethernet, where the networking protocols run on the CPU, IB adapters handle the networking protocols themselves. This leaves the operating systems and CPUs free while high-bandwidth transfers take place, which can be a real problem with 10Gb+ Ethernet.<br />
<br />
IB hardware is made by Mellanox (which merged with Voltaire, and is heavily backed by Oracle) and Intel (which acquired QLogic's IB division in 2012). IB is most often used by supercomputers, clusters, and data centers. IBM, HP, and Cray are also members of the InfiniBand Steering Committee. Facebook, Twitter, eBay, YouTube, and PayPal are examples of IB users.<br />
<br />
IB software is developed under the [https://www.openfabrics.org/ OpenFabrics Open Source Alliance].<br />
<br />
=== Affordable used equipment ===<br />
<br />
Because large businesses benefit greatly from jumping to newer versions, and because of the maximum length limitations of passive IB cabling, the high cost of active IB cabling, and the more technically involved setup compared to Ethernet, the used IB market is heavily saturated. This makes used IB equipment affordable for home or small business internal networks.<br />
<br />
=== Bandwidth ===<br />
<br />
==== Signal transfer rates ====<br />
<br />
IB transfer rates initially corresponded to the maximum supported by PCI Express (abbreviated PCIe); later, as PCIe made less progress, transfer rates came to correspond to other I/O technologies, and the number of lanes per port was increased instead. IB launched using SDR (Single Data Rate) with a signaling rate of 2.5Gb/s per lane (corresponding to PCI Express v1.0), and has since added: DDR (Double Data Rate) at 5Gb/s (PCI Express v2.0); QDR (Quad Data Rate) at 10Gb/s (matching the throughput of PCI Express 3.0 via PCIe 3.0's improved encoding rather than its signaling rate); and FDR (Fourteen Data Rate) at 14.0625Gb/s (matching 16GFC Fibre Channel). IB now delivers EDR (Enhanced Data Rate) at 25Gb/s (matching 25Gb Ethernet), with HDR (High Data Rate) at 50Gb/s planned around 2017.<br />
<br />
==== Effective throughput ====<br />
<br />
Because SDR, DDR, and QDR versions use 8b/10b encoding (8 bits of data take 10 bits of signaling), effective throughput for these is lowered to 80%: SDR at 2Gb/s/link; DDR at 4Gb/s/link; and QDR at 8Gb/s/link. Starting with FDR, IB uses 64b/66b encoding, allowing a higher effective-throughput-to-signaling-rate ratio of 96.97%: FDR at 13.64Gb/s/link; EDR at 24.24Gb/s/link; and HDR at 48.48Gb/s/link.<br />
<br />
IB devices are capable of sending data over multiple links, though commercial products standardized around 4 links per cable. <br />
<br />
When using the common 4X link devices, this gives total effective throughputs of: SDR of 8Gb/s; DDR of 16Gb/s; QDR of 32Gb/s; FDR of 54.54Gb/s; EDR of 96.97Gb/s; and HDR of 193.94Gb/s.<br />
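The throughput figures above can be reproduced with a short calculation: signaling rate per lane, times encoding efficiency, times the number of lanes. This is an illustrative sketch, not part of any IB tooling; the rates and encodings are the ones listed in the sections above.<br />

```python
# Effective IB throughput = signaling rate per lane * encoding efficiency * lanes.
# Signaling rates (Gb/s per lane) and encodings as given above.
RATES = {
    "SDR": (2.5, 8 / 10),       # 8b/10b encoding
    "DDR": (5.0, 8 / 10),
    "QDR": (10.0, 8 / 10),
    "FDR": (14.0625, 64 / 66),  # 64b/66b encoding
    "EDR": (25.0, 64 / 66),
    "HDR": (50.0, 64 / 66),
}

def effective_throughput(rate_name: str, lanes: int = 4) -> float:
    """Total effective throughput in Gb/s for a rate and link width."""
    signal, efficiency = RATES[rate_name]
    return signal * efficiency * lanes

for name in RATES:
    print(f"{name} 4X: {effective_throughput(name):.2f} Gb/s")
```

For example, QDR over the common 4X link gives 10 × 0.8 × 4 = 32 Gb/s, matching the table of effective throughputs above.<br />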
<br />
=== Latency ===<br />
<br />
IB's latency is incredibly small: SDR (5us); DDR (2.5us); QDR (1.3us); FDR (0.7us); EDR (0.5us); and HDR (< 0.5us). For comparison, 10Gb Ethernet is more like 7.22us, ten times FDR's latency.<br />
<br />
=== Backwards compatibility ===<br />
<br />
IB devices are almost always backwards compatible. Connections should be established at the lowest common denominator. A DDR adapter meant for a PCI Express 8x slot should work in a PCI Express 4x slot (with half the bandwidth).<br />
<br />
=== Cables ===<br />
<br />
IB passive copper cables can be up to 7 meters using up to QDR, and 3 meters using FDR.<br />
<br />
IB active fiber (optical) cables can be up to 300 meters using up to FDR (only 100 meters on FDR10).<br />
<br />
Mellanox MetroX devices exist which allow up to 80 kilometer connections. Latency increases by about 5us per kilometer.<br />
<br />
An IB cable can be used to directly link two computers without a switch; IB cross-over cables do not exist.<br />
<br />
== Terminology ==<br />
<br />
=== Hardware ===<br />
<br />
Adapters, switches, routers, and bridges/gateways must be specifically made for IB.<br />
<br />
; HCA (Host Channel Adapter): Like an Ethernet NIC (Network Interface Card). Connects the IB cable to the PCI Express bus, at the full speed of the bus if the proper generation of HCA is used. It is an end node on an IB network; it executes transport-level functions and supports the IB verbs interface.<br />
; Switch: Like an Ethernet switch. Moves packets from one link to another on the same IB subnet.<br />
; Router: Like an Ethernet router. Moves packets between different IB subnets.<br />
; Bridge/Gateway: A standalone piece of hardware, or a computer performing this function. Bridges IB and Ethernet networks.<br />
<br />
=== GUID ===<br />
<br />
Like Ethernet MAC addresses, but a device has multiple GUIDs. Assigned by the hardware manufacturer, and they remain the same through reboots. GUIDs are 64-bit addresses (24-bit manufacturer prefix and 40-bit device identifier), given to adapters, switches, routers, and bridges/gateways.<br />
<br />
; Node GUID: Identifies the HCA, Switch, or Router<br />
; Port GUID: Identifies a port on a HCA, Switch, or Router (even a HCA often has multiple ports)<br />
; System GUID: Allows treating multiple GUIDs as one entity<br />
; LID (Local IDentifier): A 16-bit address, assigned by the Subnet Manager when the device is picked up. Used for routing packets. Not persistent through reboots.<br />
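The 24-bit/40-bit split of a GUID can be seen with a couple of bit operations. This is only an illustration; the example GUID is the Node GUID from the ibstat output later on this page.<br />

```python
# A GUID is 64 bits: a 24-bit manufacturer prefix followed by a
# 40-bit device identifier.
def split_guid(guid: int) -> tuple[int, int]:
    prefix = guid >> 40               # top 24 bits: manufacturer prefix
    device = guid & ((1 << 40) - 1)   # bottom 40 bits: device identifier
    return prefix, device

prefix, device = split_guid(0x0002C90300002F78)
print(f"manufacturer prefix: {prefix:#08x}")   # 0x0002c9 (a Mellanox prefix)
print(f"device identifier:   {device:#012x}")  # 0x0300002f78
```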
<br />
=== Network Management ===<br />
<br />
; SM (Subnet Manager): Actively manages an IB subnet. Can be implemented as a software program on a computer connected to the IB network, built in to an IB switch, or as a specialized IB device. Initializes and configures everything else on the subnet, including assigning LIDs (Local IDentifiers). Establishes traffic paths through the subnet. Isolates faults. Prevents unauthorized Subnet Managers. You can have multiple switches all on one subnet, under one Subnet Manager. You can have redundant Subnet Managers on one subnet, but only one can be active at a time.<br />
; MAD (MAnagement Datagram): Standard message format for subnet manager to and from IB device communication, carried by a UD (Unreliable Datagram).<br />
; UD (Unreliable Datagram): A connectionless transport mode with no delivery guarantee; used, among other things, to carry MADs.<br />
<br />
== Installation ==<br />
<br />
First install {{Pkg|rdma-core}} which contains all core libraries and daemons.<br />
<br />
=== Upgrade firmware ===<br />
<br />
Running the most recent firmware can give significant performance increases, and fix connectivity issues.<br />
<br />
{{Warning|Be careful or the device may be bricked!}}<br />
<br />
==== For Mellanox ====<br />
<br />
* Install {{AUR|mstflint}}<br />
* Determine your adapter's PCI device ID (in this example, "05:00.0" is the adapter's PCI device ID)<br />
{{hc|$ lspci {{!}} grep Mellanox|'''05:00.0''' InfiniBand: Mellanox Technologies MT25418 [ConnectX VPI PCIe 2.0 2.5GT/s - IB DDR / 10GigE] (rev a0)}}<br />
* Determine what firmware version your adapter has, and your adapter's PSID (more specific than just a model number - specific to a compatible set of revisions)<br />
{{hc|# mstflint -d <adapter PCI device ID> query|...<br />
FW Version: '''2.7.1000'''<br />
...<br />
PSID: '''MT_04A0110002'''}}<br />
* Check latest firmware version<br />
** Visit [https://www.mellanox.com/page/firmware_download Mellanox's firmware download page] (this guide incorporates this link's "firmware burning instructions", using its mstflint option)<br />
** Choose the category of device you have<br />
** Locate your device's PSID on their list, that mstflint gave you<br />
** Examine the Firmware Image filename to see if it is more recent than your adapter's FW Version, e.g. {{ic|fw-25408-2_9_1000-MHGH28-XTC_A1.bin.zip}} is version {{ic|2.9.1000}}<br />
* If there is a more recent version, download new firmware and burn it to your adapter<br />
$ unzip <''firmware .bin.zip file name''><br />
# mstflint -d <''adapter PCI device ID''> -i <''firmware .bin file name''> burn<br />
<br />
==== For Intel/QLogic ====<br />
<br />
Search for the model number (or a substring) over at [https://downloadcenter.intel.com/ Intel Download Center] and follow the instructions. The downloaded software will probably need to be run from RHEL/CentOS or SUSE/OpenSUSE.<br />
<br />
=== Kernel modules ===<br />
<br />
Edit {{ic|/etc/rdma/modules/rdma.conf}} and {{ic|/etc/rdma/modules/infiniband.conf}} to your liking, then load the kernel modules listed in these files (such as {{ic|ib_ipoib}}), or simply reboot the system. If the kernel modules are not loaded correctly, [[start]] and [[enable]] both {{ic|rdma-load-modules@rdma.service}} and {{ic|rdma-load-modules@infiniband.service}}, although there should be no need for this and rebooting will be fine.<br />
<br />
{{note|1=[https://bugzilla.redhat.com/show_bug.cgi?id=965829 Due to how the kernel stacks are handled], changes to {{ic|/etc/rdma/modules/rdma.conf}} only take effect once per boot, when {{ic|rdma-load-modules@*.service}} is started for the first time. Restarting {{ic|rdma-load-modules@*.service}} has no effect.}}<br />
<br />
=== Subnet manager ===<br />
<br />
Each IB network requires at least one subnet manager. Without one, devices may show a link, but will never change state from {{ic|Initializing}} to {{ic|Active}}. A subnet manager periodically (typically every 5 or 30 seconds) checks the network for new adapters and adds them to the routing tables. If you have an IB switch with an embedded subnet manager, you can use that, or you can keep it disabled and use a software subnet manager instead. Dedicated IB subnet manager devices also exist.<br />
<br />
=== Enable port ===<br />
<br />
If the port is in the physical state {{ic|Sleep}} (can be verified with {{ic|ibstat}}) then it first needs to be enabled by running {{ic|ibportstate --Direct 0 1 enable}} for it to wake up. This may need to be automated at boot if the ports at both ends of the link are sleeping.<br />
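One way to automate this at boot is a small oneshot systemd unit. The unit name and ordering below are only a sketch of the idea, not a file shipped by any package; adjust the {{ic|ibportstate}} arguments to your port.<br />

```ini
# /etc/systemd/system/ib-port-enable.service  (hypothetical unit name)
[Unit]
Description=Wake a sleeping InfiniBand port
After=rdma-load-modules@infiniband.service

[Service]
Type=oneshot
ExecStart=/usr/bin/ibportstate --Direct 0 1 enable

[Install]
WantedBy=multi-user.target
```

After creating the file, [[enable]] the unit so it runs on every boot.<br />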
<br />
==== Software subnet manager ====<br />
<br />
On one system:<br />
<br />
* Install {{AUR|opensm}}<br />
* Correct the systemd unit {{ic|/usr/lib/systemd/system/opensm.service}} as described below.<br />
* [[Start]] and [[enable]] {{ic|opensm.service}}<br />
<br />
opensm's current service file is not compatible with RDMA's systemd configuration, so edit the following two lines in {{ic|/usr/lib/systemd/system/opensm.service}} (the commented entries are the original contents):<br />
<br />
{{hc|# /usr/lib/systemd/system/opensm.service|Requires{{=}}rdma-load-modules@rdma.service # Requires{{=}}rdma.service<br />
After{{=}}rdma-load-modules@rdma.service # After{{=}}rdma.service}}<br />
<br />
All of your connected IB ports should now be in a (port) state of {{ic|Active}}, and a physical state of {{ic|LinkUp}}. You can check this by running [[#ibstat - View a computer's IB GUIDs|ibstat]].<br />
{{hc|$ ibstat|... (look at the ports shown you expect to be connected)<br />
State: Active<br />
Physical state: LinkUp<br />
...}}<br />
Or by examining the {{ic|/sys}} filesystem:<br />
{{hc|$ cat /sys/class/infiniband/''kernel_module''/ports/''port_number''/phys_state|5: LinkUp}}<br />
{{hc|$ cat /sys/class/infiniband/''kernel_module''/ports/''port_number''/state|4: ACTIVE}}<br />
<br />
== TCP/IP (IPoIB) ==<br />
<br />
You can create a virtual Ethernet adapter that runs on the HCA. This is intended so that programs designed to work with TCP/IP, but not IB, can (indirectly) use IB networks. Performance is negatively affected because all traffic goes through the normal TCP stack, requiring system calls, memory copies, and network protocols to run on the CPU rather than on the HCA.<br />
<br />
An IB interface appears when the {{ic|ib_ipoib}} module is loaded. The simplest way to arrange this is to add the line {{ic|ib_ipoib}} to {{ic|/etc/rdma/modules/infiniband.conf}} and reboot the system. After booting with the module loaded, links with names like {{ic|ibp16s0}} should show up in the output of {{ic|ip link}}.<br />
<br />
Detailed configuration is possible for the IB interface (e.g. naming it {{ic|ib0}} and assigning IP addresses [[Network configuration|like a traditional Ethernet adapter]]).<br />
<br />
=== Connection mode ===<br />
<br />
IPoIB can run in datagram (default) or connected mode. Connected mode [[#Finetuning connection mode and MTU|allows you to set a higher MTU]], but increases TCP latency for short messages by about 5% compared to datagram mode.<br />
<br />
To see the current mode used:<br />
<br />
$ cat /sys/class/net/''interface''/mode<br />
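The same sysfs file is writable, so the mode can also be changed by hand; the [[#Finetuning connection mode and MTU|ipoibmodemtu]] service automates this. A sketch, assuming an interface named {{ic|ib0}}, run as root:<br />

```shell
# Switch an IPoIB interface to connected mode and raise its MTU.
# "ib0" is an assumed interface name; adjust to match `ip link`.
# An MTU of 65520 is only valid in connected mode, so set the mode first.
echo connected > /sys/class/net/ib0/mode
ip link set ib0 mtu 65520
```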
<br />
=== MTU ===<br />
<br />
In datagram mode, UD (Unreliable Datagram) transport is used, which typically forces the MTU to 2044 bytes. More precisely, the MTU is the IB L2 MTU minus 4 bytes for the IPoIB encapsulation header, which usually works out to 2044 bytes.<br />
<br />
In connected mode, RC (Reliable Connected) transport is used, which allows an MTU up to the maximum IP packet size, 65520 bytes.<br />
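The 2044-byte datagram figure follows directly from that relation; a quick sanity check, assuming the common 2048-byte IB L2 MTU:<br />

```python
# IPoIB datagram-mode MTU: the IB L2 MTU (commonly 2048 bytes)
# minus the 4-byte IPoIB encapsulation header.
IB_L2_MTU = 2048      # common IB link-layer MTU, in bytes
IPOIB_HEADER = 4      # IPoIB encapsulation header, in bytes
datagram_mtu = IB_L2_MTU - IPOIB_HEADER
print(datagram_mtu)   # 2044, as reported by `ip link` in datagram mode
```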
<br />
To see your MTU:<br />
<br />
$ ip link show ''interface''<br />
<br />
=== Finetuning connection mode and MTU ===<br />
<br />
You only need {{ic|ipoibmodemtu}} if you want to change the default connection mode and/or MTU.<br />
<br />
* [[#TCP/IP (IPoIB)|Install and set up TCP/IP over IB (IPoIB)]]<br />
* Install {{AUR|ipoibmodemtu}}<br />
* Configure {{ic|ipoibmodemtu}} through {{ic|/etc/ipoibmodemtu.conf}}, which contains instructions on how to do so<br />
** It defaults to setting a single IB port {{ic|ib0}} to {{ic|connected}} mode and MTU {{ic|65520}}<br />
* [[Start]] and [[enable]] {{ic|ipoibmodemtu.service}}<br />
<br />
Different setups will see different results. Some people see a gigantic (double+) speed increase by using {{ic|connected}} mode and MTU {{ic|65520}}, and a few see about the same or even worse speeds. Use [[#qperf - Measure performance over RDMA or TCP/IP|qperf]] and [[#iperf - Measure performance over TCP/IP|iperf]] to finetune your system.<br />
<br />
Using the [[#qperf - Measure performance over RDMA or TCP/IP|qperf]] examples given in this article, here are example results from an SDR network (8 theoretical Gb/s) with various finetuning:<br />
{| class="wikitable"<br />
! Mode !! MTU !! MB/s !! us latency<br />
|-<br />
| datagram || 2044 || 707 || 19.4<br />
|-<br />
| connected || 2044 || 353 || 18.9<br />
|-<br />
| connected || 65520 || 726 || 19.6<br />
|}<br />
<br />
{{Tip|Use the same connection and MTU settings for the entire subnet. Mixing and matching does not work optimally.}}<br />
<br />
== Soft RoCE (RXE) ==<br />
<br />
Soft RoCE is a software implementation of RoCE that allows using InfiniBand over any Ethernet adapter.<br />
<br />
* Install {{pkg|iproute2}}<br />
* Run {{ic|rdma link add rxe_eth0 type rxe netdev ethN}} to configure an RXE instance on Ethernet device {{ic|ethN}}.<br />
<br />
You should now have an {{ic|rxe_eth0}} device:<br />
{{hc|# rdma link|<br />
link rxe_eth0/1 state ACTIVE physical_state LINK_UP netdev enp1s0}}<br />
<br />
== Remote data storage ==<br />
<br />
You can share physical or virtual devices from a target (host/server) to an initiator (guest/client) system over an IB network, using iSCSI, iSCSI with iSER, or SRP. These methods differ from traditional file sharing (e.g. [[Samba]] or [[NFS]]) because the initiator system views the shared device as its own block-level device, rather than as a traditionally mounted network shared folder, e.g. {{ic|fdisk /dev/''block_device_id''}}, {{ic|mkfs.btrfs /dev/''block_device_id_with_partition_number''}}<br />
<br />
The disadvantage is that only one system can use each shared device at a time; trying to mount a shared device on the target or another initiator system will fail (an initiator system can, of course, still run traditional file sharing on top).<br />
<br />
The advantages are faster bandwidth, more control, and even the ability to have an initiator's root filesystem physically located remotely (remote booting).<br />
<br />
=== targetcli ===<br />
<br />
{{ic|targetcli}} acts like a shell that presents its complex (and not worth creating by hand) {{ic|/etc/target/saveconfig.json}} as a pseudo-filesystem.<br />
<br />
==== Installing and using ====<br />
<br />
On the target system:<br />
* Install {{AUR|targetcli-fb}}<br />
* [[Start]] and [[enable]] {{ic|target.service}}<br />
<br />
In {{ic|targetcli}}:<br />
* In any pseudo-directory, you can run {{ic|help}} to see the commands available ''in that pseudo-directory'' or {{ic|help ''command''}} (like {{ic|help create}}) for more detailed help<br />
* Tab-completion is also available for many commands<br />
* Run {{ic|ls}} to see the entire pseudo-filesystem at and below the current pseudo-directory<br />
<br />
==== Create backstores ====<br />
<br />
Enter the configuration shell:<br />
<br />
# targetcli<br />
<br />
Within {{ic|targetcli}}, setup a backstore for each device or virtual device to share:<br />
* To share an actual block device, run: {{ic|cd /backstores/block}}; and {{ic|create ''name'' ''dev''}}<br />
* To share a file as a virtual block device, run: {{ic|cd /backstores/fileio}}; and {{ic|create ''name'' ''file''}}<br />
* To share a physical SCSI device as a pass-through, run: {{ic|cd /backstores/pscsi}}; and {{ic|create ''name'' ''dev''}}<br />
* To share a RAM disk, run: {{ic|cd /backstores/ramdisk}}; and {{ic|create ''name'' ''size''}}<br />
* Where ''name'' is for the backstore's name<br />
* Where ''dev'' is the block device to share (i.e. {{ic|/dev/sda}}, {{ic|/dev/sda4}}, {{ic|/dev/disk/by-id/''XXX''}}, or a LVM logical volume {{ic|/dev/vg0/lv1}})<br />
* Where ''file'' is the file to share (i.e. {{ic|/path/to/file}})<br />
* Where ''size'' is the size of the RAM disk to create (i.e. 512MB, 20GB)<br />
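As a concrete illustration, creating and saving a file-backed backstore inside {{ic|targetcli}} might look like the following session (the backstore name {{ic|vdisk0}}, the path, and the size are invented for the example):<br />

```
# targetcli
/> cd /backstores/fileio
/backstores/fileio> create vdisk0 /srv/iscsi/vdisk0.img 10G
/backstores/fileio> cd /
/> saveconfig
/> exit
```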
<br />
=== iSCSI ===<br />
<br />
iSCSI allows storage devices and virtual storage devices to be used over a network. For IB networks, the storage can either work over IPoIB or iSER.<br />
<br />
There is a lot of overlap with the [[iSCSI Target]], [[iSCSI Initiator]], and [[iSCSI Boot]] articles, but the necessities will be discussed since much needs to be customized for usage over IB.<br />
<br />
==== Over IPoIB ====<br />
<br />
Perform the target system instructions first, which will direct you when to temporarily switch over to the initiator system instructions.<br />
<br />
* On the target and initiator systems, [[#TCP/IP (IPoIB)|install TCP/IP over IB]]<br />
<br />
* On the target system, for each device or virtual device you want to share, in {{ic|targetcli}}:<br />
** [[#Create backstores|Create a backstore]]<br />
** For each backstore, create an IQN (iSCSI Qualified Name) (the name other systems' configurations will see the storage as)<br />
*** Run: {{ic|cd /iscsi}}; and {{ic|create}}. It will give you a ''randomly_generated_target_name'', i.e. {{ic|iqn.2003-01.org.linux-iscsi.hostname.x8664:sn.3d74b8d4020a}}<br />
*** Set up the TPG (Target Portal Group), automatically created in the last step as tpg1<br />
**** Create a lun (Logical Unit Number)<br />
***** Run: {{ic|cd ''randomly_generated_target_name''/tpg1/luns}}; and {{ic|create ''storage_object''}}. Where {{ic|''storage_object''}} is a full path to an existing storage object, i.e. {{ic|/backstores/block/''name''}}<br />
**** Create an acl (Access Control List)<br />
***** Run: {{ic|cd ../acls}}; and {{ic|create ''wwn''}}, where {{ic|''wwn''}} is the initiator system's IQN (iSCSI Qualified Name), aka its WWN (World Wide Name)<br />
****** Get the {{ic|''wwn''}} by running {{ic|cat /etc/iscsi/initiatorname.iscsi}} on the initiator system ('''not''' this target system), after installing {{pkg|open-iscsi}} on it<br />
** Save and exit by running: {{ic|cd /}}; {{ic|saveconfig}}; and {{ic|exit}}<br />
<br />
* On the initiator system:<br />
** Install {{pkg|open-iscsi}}<br />
** At this point, you can obtain this initiator system's IQN (iSCSI Qualified Name), aka its wwn (World Wide Name), for setting up the target system's {{ic|luns}}:<br />
*** {{ic|pacman}} should have displayed {{ic|>>> Setting Initiatorname ''wwn''}}<br />
*** Otherwise, run: {{ic|cat /etc/iscsi/initiatorname.iscsi}} to see {{ic|1=InitiatorName=''wwn''}}<br />
** [[Start]] and [[enable]] {{ic|iscsid.service}}<br />
** To automatically login to discovered targets at boot, before discovering targets, edit {{ic|/etc/iscsi/iscsid.conf}} to set {{ic|1=node.startup = automatic}}<br />
** Discover online targets. Run {{ic|iscsiadm -m discovery -t sendtargets -p ''portal''}} as root, where ''portal'' is an IP (v4 or v6) address or hostname<br />
*** If using a hostname, make sure it routes to the IB IP address rather than Ethernet - it may be beneficial to just use the IB IP address<br />
** To automatically login to discovered targets at boot, [[Start]] and [[enable]] {{ic|iscsi.service}}<br />
** To manually login to discovered targets, run {{ic|iscsiadm -m node -L all}} as root.<br />
** View which block device ID was given to each target logged into. Run {{ic|iscsiadm -m session -P 3 {{!}} grep Attached}} as root. The block device ID will be the last line in the tree for each target ({{ic|-P}} is the print command, its option is the verbosity level, and only level 3 lists the block device IDs)<br />
<br />
==== Over iSER ====<br />
<br />
iSER (iSCSI Extensions for RDMA) takes advantage of IB's RDMA protocols, rather than using TCP/IP. It eliminates TCP/IP overhead, and provides higher bandwidth, zero copy time, lower latency, and lower CPU utilization.<br />
<br />
Follow the [[#Over IPoIB|iSCSI Over IPoIB]] instructions, with the following changes:<br />
<br />
* If you wish, instead of [[#TCP/IP (IPoIB)|installing IPoIB]], you can just [[#Kernel modules|install RDMA for loading kernel modules]]<br />
* On the target system, after everything else is setup, while still in {{ic|targetcli}}, enable iSER on the target:<br />
** Run {{ic|cd /iscsi/''iqn''/tpg1/portals/0.0.0.0:3260}} for each ''iqn'' that you want to use iSER rather than IPoIB<br />
*** Where ''iqn'' is the randomly generated target name, i.e. {{ic|iqn.2003-01.org.linux-iscsi.hostname.x8664:sn.3d74b8d4020a}}<br />
** Run {{ic|enable_iser true}}<br />
** Save and exit by running: {{ic|cd /}}; {{ic|saveconfig}}; and {{ic|exit}}<br />
* On the initiator system, when running {{ic|iscsiadm}} to discover online targets, use the additional argument {{ic|-I iser}}, and when you login to them, you should see: {{ic|Logging in to [iface: iser...}}<br />
<br />
==== Adding to /etc/fstab ====<br />
<br />
Automatic login must have been enabled the last time you discovered targets.<br />
<br />
Add your mount entry to {{ic|/etc/fstab}} as if it were a local block device, except add a {{ic|_netdev}} option to avoid attempting to mount it before network initialization.<br />
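An entry might look like the following (the UUID and mount point are invented for the example; use the values for your own filesystem):<br />

```
# /etc/fstab — example entry for an iSCSI-backed filesystem
UUID=01234567-89ab-cdef-0123-456789abcdef  /mnt/iscsi  ext4  _netdev  0 0
```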
<br />
== Network segmentation ==<br />
<br />
{{Expansion|Explain in more detail with examples}}<br />
<br />
An IB subnet can be partitioned for different customers or applications, giving security and quality of service guarantees. Each partition is identified by a PKEY (Partition Key).<br />
<br />
== SDP (Sockets Direct Protocol) ==<br />
<br />
Use {{ic|librdmacm}} (successor to rsockets and libsdp) and {{ic|LD_PRELOAD}} to intercept non-IB programs' socket calls and transparently (to the program) send them over IB via RDMA. This dramatically speeds up programs built for TCP/IP, far more than can be achieved with IPoIB. It avoids the need to change the program's source code to work with IB, and can even be used with closed-source programs. It does not work for programs that statically link in socket libraries.<br />
<br />
== Diagnosing and benchmarking ==<br />
<br />
All IB specific tools are included in {{Pkg|rdma-core}} and {{AUR|ibutils}}.<br />
<br />
=== ibstat - View a computer's IB GUIDs ===<br />
<br />
ibstat will show you detailed information about each IB adapter in the computer it is run on, including: model number; number of ports; firmware and hardware version; node, system image, and port GUIDs; and port state, physical state, rate, base lid, lmc, SM lid, capability mask, and link layer.<br />
<br />
{{hc|$ ibstat|CA 'mlx4_0'<br />
CA type: MT25418<br />
Number of ports: 2<br />
Firmware version: 2.9.1000<br />
Hardware version: a0<br />
Node GUID: 0x0002c90300002f78<br />
System image GUID: 0x0002c90300002f7b<br />
Port 1:<br />
State: Active<br />
Physical state: LinkUp<br />
Rate: 20<br />
Base lid: 3<br />
LMC: 0<br />
SM lid: 3<br />
Capability mask: 0x0251086a<br />
Port GUID: 0x0002c90300002f79<br />
Link layer: InfiniBand<br />
Port 2:<br />
State: Down<br />
Physical state: Polling<br />
Rate: 10<br />
Base lid: 0<br />
LMC: 0<br />
SM lid: 0<br />
Capability mask: 0x02510868<br />
Port GUID: 0x0002c90300002f7a<br />
Link layer: InfiniBand}}<br />
<br />
This example shows a Mellanox Technologies (MT) adapter. Its PCI device ID (25418) is reported, rather than its model or part number. It shows a state of "Active", which means it is properly connected to a subnet manager. It shows a physical state of "LinkUp", which means it has an electrical connection via cable, but is not necessarily properly connected to a subnet manager. It shows a total rate of 20 Gb/s (which for this card comes from a 5.0 Gb/s signaling rate and 4 virtual lanes). It shows that the subnet manager assigned the port a lid of 3.<br />
<br />
=== ibhosts - View all hosts on IB network ===<br />
<br />
ibhosts will show you the Node GUIDs, number of ports, and device names, for each host on the IB network.<br />
<br />
{{hc|# ibhosts|<br />
Ca : 0x0002c90300002778 ports 2 "MT25408 ConnectX Mellanox Technologies"<br />
Ca : 0x0002c90300002f78 ports 2 "hostname mlx4_0"<br />
}}<br />
<br />
=== ibswitches - View all switches on IB network ===<br />
<br />
ibswitches will show you the Node GUIDs, number of ports, and device names, for each switch on the IB network. If you are running with direct connections only, it will show nothing.<br />
<br />
# ibswitches<br />
<br />
=== iblinkinfo - View link information on IB network ===<br />
<br />
iblinkinfo will show you the device names, Port GUIDs, number of virtual lanes, [[#Signal transfer rates|signal transfer rates]], state, physical state, and what it is connected to.<br />
<br />
{{hc|# iblinkinfo|2=<br />
CA: MT25408 ConnectX Mellanox Technologies:<br />
0x0002c90300002779 4 1[ ] ==( 4X 5.0 Gbps Active/ LinkUp)==> 3 1[ ] "kvm mlx4_0" ( )<br />
CA: hostname mlx4_0:<br />
0x0002c90300002f79 3 1[ ] ==( 4X 5.0 Gbps Active/ LinkUp)==> 4 1[ ] "MT25408 ConnectX Mellanox Technologies" ( )<br />
}}<br />
<br />
This example shows two adapters directly connected without a switch, using a 5.0 Gb/s [[#Signal transfer rates|signal transfer rate]] and 4 virtual lanes (4X).<br />
<br />
=== ibping - Ping another IB device ===<br />
<br />
ibping will attempt to ping another IB GUID. ibping must be run in server mode on one computer.<br />
<br />
# ibping -S<br />
<br />
And in client mode on another. ibping pings a specific port, so it cannot take a CA name or a Node or System GUID; it requires {{ic|-G}} with a Port GUID, or {{ic|-L}} with a Lid.<br />
<br />
{{hc|# ibping -G 0x0002c90300002779<br />
-or-<br />
# ibping -L 1|2=<br />
Pong from hostname.(none) (Lid 1): time 0.053 ms<br />
Pong from hostname.(none) (Lid 1): time 0.074 ms<br />
^C<br />
--- hostname.(none) (Lid 4) ibping statistics ---<br />
2 packets transmitted, 2 received, 0% packet loss, time 1630 ms<br />
rtt min/avg/max = 0.053/0.063/0.074 ms}}<br />
<br />
If you are running IPoIB, you can use regular {{ic|ping}} which pings through the TCP/IP stack. ibping uses IB interfaces, and does not use the TCP/IP stack.<br />
<br />
=== ibdiagnet - Show diagnostic information for entire subnet ===<br />
<br />
ibdiagnet will show you potential problems on your subnet. You can run it without options. {{ic|-lw <1x{{!}}4x{{!}}12x>}} specifies the expected link width (number of virtual lanes) for your computer's adapter, so that it can check whether the link is running as intended. {{ic|-ls <2.5{{!}}5{{!}}10>}} specifies the expected link speed (signaling rate) in the same way, but does not yet support values above 10 for FDR+ devices. {{ic|-c <count>}} overrides the default number of packets to send (10).<br />
<br />
{{hc|# ibdiagnet -lw 4x -ls 5 -c 1000|<nowiki><br />
Loading IBDIAGNET from: /usr/lib/ibdiagnet1.5.7<br />
-W- Topology file is not specified.<br />
Reports regarding cluster links will use direct routes.<br />
Loading IBDM from: /usr/lib/ibdm1.5.7<br />
-I- Using port 1 as the local port.<br />
-I- Discovering ... 2 nodes (0 Switches & 2 CA-s) discovered.<br />
<br />
-I---------------------------------------------------<br />
-I- Bad Guids/LIDs Info<br />
-I---------------------------------------------------<br />
-I- No bad Guids were found<br />
<br />
-I---------------------------------------------------<br />
-I- Links With Logical State = INIT<br />
-I---------------------------------------------------<br />
-I- No bad Links (with logical state = INIT) were found<br />
<br />
-I---------------------------------------------------<br />
-I- General Device Info<br />
-I---------------------------------------------------<br />
<br />
-I---------------------------------------------------<br />
-I- PM Counters Info<br />
-I---------------------------------------------------<br />
-I- No illegal PM counters values were found<br />
<br />
-I---------------------------------------------------<br />
-I- Links With links width != 4x (as set by -lw option)<br />
-I---------------------------------------------------<br />
-I- No unmatched Links (with width != 4x) were found<br />
<br />
-I---------------------------------------------------<br />
-I- Links With links speed != 5 (as set by -ls option)<br />
-I---------------------------------------------------<br />
-I- No unmatched Links (with speed != 5) were found<br />
<br />
-I---------------------------------------------------<br />
-I- Fabric Partitions Report (see ibdiagnet.pkey for a full hosts list)<br />
-I---------------------------------------------------<br />
-I- PKey:0x7fff Hosts:2 full:2 limited:0<br />
<br />
-I---------------------------------------------------<br />
-I- IPoIB Subnets Check<br />
-I---------------------------------------------------<br />
-I- Subnet: IPv4 PKey:0x7fff QKey:0x00000b1b MTU:2048Byte rate:10Gbps SL:0x00<br />
-W- Suboptimal rate for group. Lowest member rate:20Gbps > group-rate:10Gbps<br />
<br />
-I---------------------------------------------------<br />
-I- Bad Links Info<br />
-I- No bad link were found<br />
-I---------------------------------------------------<br />
----------------------------------------------------------------<br />
-I- Stages Status Report:<br />
STAGE Errors Warnings<br />
Bad GUIDs/LIDs Check 0 0 <br />
Link State Active Check 0 0 <br />
General Devices Info Report 0 0 <br />
Performance Counters Report 0 0 <br />
Specific Link Width Check 0 0 <br />
Specific Link Speed Check 0 0 <br />
Partitions Check 0 0 <br />
IPoIB Subnets Check 0 1 <br />
<br />
Please see /tmp/ibdiagnet.log for complete log<br />
----------------------------------------------------------------<br />
<br />
-I- Done. Run time was 0 seconds.<br />
</nowiki>}}<br />
<br />
=== qperf - Measure performance over RDMA or TCP/IP ===<br />
<br />
qperf can measure bandwidth and latency over RDMA (SDP, UDP, UD, and UC) or TCP/IP (including IPoIB).<br />
<br />
qperf must be run in server mode on one computer.<br />
<br />
$ qperf<br />
<br />
And in client mode on another. SERVERNODE can be a hostname or, for IPoIB, a TCP/IP address. There are many tests; some of the most useful are below.<br />
<br />
$ qperf SERVERNODE [OPTIONS] TESTS<br />
<br />
==== TCP/IP over IPoIB ====<br />
<br />
{{hc|$ qperf 192.168.2.2 tcp_bw tcp_lat|2=<br />
tcp_bw:<br />
bw = 701 MB/sec<br />
tcp_lat:<br />
latency = 19.8 us<br />
}}<br />
<br />
=== iperf - Measure performance over TCP/IP ===<br />
<br />
iperf is not an IB-aware program and is meant to test over TCP/IP or UDP. Even though [[#qperf - Measure performance over RDMA or TCP/IP|qperf]] can test your IB TCP/IP performance using IPoIB, iperf is another program you can use.<br />
<br />
iperf must be run in server mode on one computer.<br />
<br />
$ iperf3 -s<br />
<br />
And in client mode on another.<br />
<br />
{{hc|$ iperf3 -c 192.168.2.2|<br />
[ 4] local 192.168.2.1 port 20139 connected to 192.168.2.2 port 5201<br />
[ ID] Interval Transfer Bandwidth<br />
[ 4] 0.00-1.00 sec 639 MBytes 5.36 Gbits/sec <br />
...<br />
[ 4] 9.00-10.00 sec 638 MBytes 5.35 Gbits/sec <br />
- - - - - - - - - - - - - - - - - - - - - - - - -<br />
[ ID] Interval Transfer Bandwidth<br />
[ 4] 0.00-10.00 sec 6.23 GBytes 5.35 Gbits/sec sender<br />
[ 4] 0.00-10.00 sec 6.23 GBytes 5.35 Gbits/sec receiver<br />
<br />
iperf Done.}}<br />
<br />
iperf shows Transfer in base 2 (GiB) and Bandwidth in base 10 (Gbit/s). So, this example shows 6.23 GB (base 2) transferred in 10 seconds, which is 6.69 GB in base 10 (6.23 * 2^30 / 10^9). That works out to the 5.35 Gb/s shown by iperf (6.69 * 8 / 10), or about 669 MB/s in base 10, which is roughly the speed that qperf reported.<br />
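<br />
The conversion can be double-checked in consistent decimal units; the values below are taken from the example run, and awk is used purely for the floating-point arithmetic:<br />

```shell
# Values from the example run above.
gib=6.23   # Transfer as reported by iperf, in GiB (base 2)
secs=10    # test duration in seconds

# decimal megabytes per second: GiB * 2^30 bytes / seconds / 10^6
awk -v g="$gib" -v s="$secs" 'BEGIN { printf "%.0f MB/s\n", g * 2^30 / s / 10^6 }'
# prints: 669 MB/s

# decimal gigabits per second: bytes * 8 bits / seconds / 10^9
awk -v g="$gib" -v s="$secs" 'BEGIN { printf "%.2f Gbit/s\n", g * 2^30 * 8 / s / 10^9 }'
# prints: 5.35 Gbit/s
```

The MB/s figure is in the same units as qperf's report, so the two benchmarks agree reasonably well.<br />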
<br />
== Common problems / FAQ ==<br />
<br />
=== Connection problems ===<br />
<br />
==== Link, physical state and port state ====<br />
<br />
* See if the IB hardware modules are recognized by the system. If you have an Intel adapter, grep for Intel instead, and expect to look through a few extra lines if you have other Intel hardware:<br />
{{hc|# dmesg {{!}} grep -Ei "Mellanox{{!}}InfiniBand{{!}}QLogic{{!}}Voltaire"|<br />
[ 6.287556] mlx4_core: Mellanox ConnectX core driver v2.2-1 (Feb, 2014)<br />
[ 8.686257] <mlx4_ib> mlx4_ib_add: mlx4_ib: Mellanox ConnectX InfiniBand driver v2.2-1 (Feb 2014)<br />
}}<br />
{{hc|$ ls -l /sys/class/infiniband|mlx4_0 -> ../../devices/pci0000:00/0000:00:03.0/0000:05:00.0/infiniband/mlx4_0}}<br />
If nothing is shown, your kernel does not recognize your adapter. This example shows roughly what you will see with a Mellanox ConnectX adapter, which is driven by the mlx4 kernel modules and appears as mlx4_0.<br />
<br />
* Check the port and physical states. Either run [[#ibstat - View a computer's IB GUIDs|ibstat]] or examine {{ic|/sys}}.<br />
{{hc|$ ibstat|<br />
(look at the port shown that you expect to be connected)}}<br />
or<br />
{{hc|$ cat /sys/class/infiniband/<kernel module>/ports/<port number>/phys_state|<br />
5: LinkUp}}<br />
{{hc|$ cat /sys/class/infiniband/<kernel module>/ports/<port number>/state|<br />
4: ACTIVE}}<br />
The physical state should be "LinkUp". If it is not, your cable is likely not plugged in, not connected to anything on the other end, or defective. The (port) state should be "Active". If it is "Initializing" or "INIT", your [[#Subnet manager|subnet manager]] does not exist, is not running, or has not added the port to the network's routing tables.<br />
<br />
* Can you successfully [[#ibping - Ping another IB device|ibping]], which uses IB directly rather than IPoIB? If you are running IPoIB, can you successfully {{ic|ping}}?<br />
<br />
* Consider [[#Upgrade firmware|upgrading firmware]].<br />
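<br />
The {{ic|/sys}} checks above can be wrapped in a small helper that walks the sysfs tree. This is only a sketch; the function name and the optional root argument (useful for testing against a mock tree) are our own:<br />

```shell
# show_ib_ports: print the physical and logical state of every IB port
# found under the given sysfs root (defaults to /sys/class/infiniband).
show_ib_ports() {
    root="${1:-/sys/class/infiniband}"
    for port in "$root"/*/ports/*; do
        [ -e "$port/state" ] || continue
        dev=${port%/ports/*}              # device directory, e.g. .../mlx4_0
        printf '%s port %s: phys_state=%s state=%s\n' \
            "${dev##*/}" "${port##*/}" \
            "$(cat "$port/phys_state")" "$(cat "$port/state")"
    done
}

show_ib_ports   # e.g. "mlx4_0 port 1: phys_state=5: LinkUp state=4: ACTIVE"
```

On a healthy, routed fabric every port should report {{ic|LinkUp}} and {{ic|ACTIVE}}.<br />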
<br />
==== getaddrinfo failed: Name or service not known ====<br />
<br />
* Run [[#ibhosts - View all hosts on IB network|ibhosts]] to see the CA names at the end of each line in quotes.<br />
<br />
=== Speed problems ===<br />
<br />
* Start by double-checking your expectations.<br />
How have you determined that you have a speed problem? Are you using [[#qperf - Measure performance over RDMA or TCP/IP|qperf]] or [[#iperf - Measure performance over TCP/IP|iperf]], which both transmit data to and from memory rather than hard drives? Or are you benchmarking actual file transfers, which rely on your hard drives? Unless you are running RAID to boost speed, a single hard drive (or sometimes even multiple ones) will bottleneck your IB transfer speeds, even with the fastest SSDs available as of mid 2015. Are you using RDMA, or TCP/IP via IPoIB? If the latter, [[#TCP/IP (IPoIB)|there is a performance hit]] for using IPoIB instead of RDMA.<br />
<br />
* Check your link speeds. Run [[#ibstat - View a computer's IB GUIDs|ibstat]], [[#iblinkinfo - View link information on IB network|iblinkinfo]], or examine {{ic|/sys}}.<br />
{{hc|$ ibstat|<br />
(look at the Rate shown on the port you are using)}}<br />
or<br />
{{hc|# iblinkinfo|<br />
(look at the middle part formatted like "4X 5.0 Gbps")}}<br />
or<br />
{{hc|$ cat /sys/class/infiniband/<kernel module>/ports/<port number>/rate|<br />
20 Gb/sec (4X DDR)}}<br />
Does this match your expected [[#Bandwidth|bandwidth and number of virtual lanes]]?<br />
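<br />
As a quick sanity check, the rate reported in {{ic|/sys}} should equal the link width times the per-lane signaling rate. A minimal sketch, assuming the 4X DDR example above (awk does the arithmetic):<br />

```shell
lanes=4       # link width: "4X"
signal=5.0    # DDR signaling rate per lane, in Gb/s
# expected rate = link width * per-lane signaling rate
awk -v l="$lanes" -v s="$signal" 'BEGIN { printf "%g Gb/sec\n", l * s }'
# prints: 20 Gb/sec
```

If the computed figure disagrees with what {{ic|/sys}} reports, the link has likely negotiated a lower width or speed.<br />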
<br />
* Check diagnostic information for the entire subnet. Run [[#ibdiagnet - Show diagnostic information for entire subnet]]. Make sure to use {{ic|-ls}} with [[#Bandwidth|the proper signaling rate, which is likely the advertised speed of your card divided by 4]].<br />
 # ibdiagnet -lw <expected number of virtual lanes> -ls <expected signaling rate> -c 1000<br />
<br />
* Consider [[#Upgrade firmware|upgrading firmware]].</div>JudgeManganesehttps://wiki.archlinux.org/index.php?title=Steam&diff=575796Steam2019-06-17T06:53:24Z<p>JudgeManganese: /* Proton Steam-Play */ Add necessary last-mile information for getting started with and effectively using Steam Play</p>
<hr />
<div>[[Category:Gaming]]<br />
[[ja:Steam]]<br />
[[ru:Steam]]<br />
[[zh-hans:Steam]]<br />
{{Related articles start}}<br />
{{Related|Steam/Troubleshooting}}<br />
{{Related|Steam/Game-specific troubleshooting}}<br />
{{Related|Gaming}}<br />
{{Related|Gamepad}}<br />
{{Related|List of games}}<br />
{{Related articles end}}<br />
[http://store.steampowered.com/about/ Steam] is a popular game distribution platform by Valve.<br />
<br />
{{Warning|Steam native is currently broken on Arch Linux. For reference, see {{Bug|62095}}}}<br />
<br />
{{Note|Steam for Linux only supports Ubuntu LTS.[https://support.steampowered.com/kb_article.php?ref&#61;1504-QHXN-8366] Thus, do not turn to Valve for support for issues with Steam on Arch Linux.}}<br />
<br />
== Installation ==<br />
<br />
Enable the [[multilib]] repository and [[install]] the {{Pkg|steam}} package.<br />
<br />
The following requirements must be fulfilled in order to run Steam on Arch Linux:<br />
<br />
* A 32-bit [[Xorg#Driver installation|OpenGL graphics driver]] must be installed.<br />
* The [[Locale#Generating locales|en_US.UTF-8]] locale must be generated, to prevent an invalid pointer error.<br />
* The GUI heavily uses the Arial font; see [[Microsoft fonts]]. Alternatively, use {{Pkg|ttf-liberation}} or [[Steam/Troubleshooting#Text is corrupt or missing|fonts provided by Steam]].<br />
* [[Install]] {{Pkg|wqy-zenhei}} to add support for Asian languages.<br />
<br />
=== SteamCMD ===<br />
<br />
Install {{AUR|steamcmd}} for the command-line version of [https://developer.valvesoftware.com/wiki/SteamCMD Steam].<br />
<br />
=== Alternative Flatpak installation ===<br />
<br />
Steam can also be installed with [[Flatpak]] as {{ic|com.valvesoftware.Steam}} from [https://flathub.org/ Flathub]. The easiest way to install it for the current user is by using the Flathub repo and flatpak command:<br />
<br />
flatpak --user remote-add --if-not-exists flathub https://dl.flathub.org/repo/flathub.flatpakrepo<br />
flatpak --user install flathub com.valvesoftware.Steam<br />
flatpak run com.valvesoftware.Steam<br />
<br />
The Flatpak application currently does not support themes. Also, you currently cannot run games via {{ic|optirun}}/{{ic|primusrun}}; see [https://github.com/flatpak/flatpak/issues/869 Issue#869] for more details.<br />
<br />
By default, Steam will not be able to access your home directory. You can run the following command to allow it, so that it behaves as it does on Ubuntu or SteamOS:<br />
<br />
flatpak override com.valvesoftware.Steam --filesystem=$HOME<br />
<br />
==== Asian Font Problems with Flatpak ====<br />
<br />
If you are having problems getting Asian fonts to show in game, it is because org.freedesktop.Platform does not include them. First try mounting your local fonts:<br />
<br />
flatpak run --filesystems=~/.local/share/fonts --filesystem=~/.config/fontconfig com.valvesoftware.Steam<br />
<br />
If that does not work, consider this hack: make the fonts available by copying the font files directly into org.freedesktop.Platform's directories, e.g.<br />
<br />
# replace ? with your version and hash<br />
/var/lib/flatpak/runtime/org.freedesktop.Platform/x86_64/?/?/files/etc/fonts/conf.avail<br />
/var/lib/flatpak/runtime/org.freedesktop.Platform/x86_64/?/?/files/etc/fonts/conf.d <br />
/var/lib/flatpak/runtime/org.freedesktop.Platform/x86_64/?/?/files/share/fonts<br />
<br />
== Directory structure ==<br />
<br />
The default Steam install location is {{ic|~/.local/share/Steam}}. If Steam cannot find it, it will prompt you to reinstall it or select the new location. This article uses the {{ic|~/.steam/root}} symlink to refer to the install location.<br />
<br />
=== Library folders ===<br />
<br />
Every Steam application has a unique AppID, which you can find out by looking at its [http://store.steampowered.com/ Steam Store] page path.<br />
<br />
Steam installs games into a directory under {{ic|''LIBRARY''/steamapps/common/}}. {{ic|''LIBRARY''}} normally is <br />
{{ic|~/.steam/root}} but you can also have multiple library folders (''Steam > Settings > Downloads > Steam Library Folders'').<br />
<br />
In order for Steam to recognize a game it needs to have an<br />
{{ic|appmanifest_''AppId''.acf}} file in {{ic|''LIBRARY''/steamapps/}}. The appmanifest file uses the <br />
[https://developer.valvesoftware.com/wiki/KeyValues KeyValues] format and its {{ic|installdir}} property<br />
determines the game directory name.<br />
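<br />
As an illustration of the KeyValues format, the {{ic|installdir}} value can be pulled out of an appmanifest with a short sed filter. The helper name and the example path below are hypothetical:<br />

```shell
# get_installdir: print the value of the "installdir" key from an
# appmanifest_AppId.acf file in KeyValues format.
get_installdir() {
    sed -n 's/^[[:space:]]*"installdir"[[:space:]]*"\(.*\)"/\1/p' "$1"
}

# hypothetical example path; AppID 440 is Team Fortress 2
manifest="$HOME/.steam/root/steamapps/appmanifest_440.acf"
if [ -r "$manifest" ]; then
    get_installdir "$manifest"
fi
```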
<br />
== Usage ==<br />
<br />
steam [ -options ] [ steam:// URL ]<br />
<br />
For the available command-line options see the [https://developer.valvesoftware.com/wiki/Command_Line_Options#Steam_.28Windows.29 Command Line Options article on the Valve Developer Wiki].<br />
<br />
Steam also accepts an optional Steam URL, see the [https://developer.valvesoftware.com/wiki/Steam_browser_protocol Steam browser protocol].<br />
<br />
== Launch options ==<br />
<br />
When you launch a Steam game, Steam executes its '''launch command''' in a [[Bash]] shell.<br />
To let you alter the launch command Steam provides '''launch options''',<br />
which can be set for a game by right-clicking on it in your library, selecting Properties and clicking on ''Set Launch Options''.<br />
<br />
By default Steam simply appends your option string to the launch command. To set environment variables or<br />
pass the launch command as an argument to another command you can use the {{ic|%command%}} substitute.<br />
<br />
=== Examples ===<br />
<br />
* only arguments: {{ic|-foo}}<br />
* environment variables: {{ic|1=FOO=bar BAZ=bar %command% -baz}}<br />
* completely different command: {{ic|othercommand # %command%}}<br />
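<br />
The substitution behavior described above can be sketched as a small shell function. This is purely illustrative (the function is our own, not Steam's actual implementation):<br />

```shell
# expand_launch_options: mimic how Steam combines the launch options
# string with the game's launch command.
expand_launch_options() {
    opts="$1" cmd="$2"
    case "$opts" in
        *%command%*)   # token present: substitute it in place
            printf '%s\n' "$opts" | sed "s|%command%|$cmd|" ;;
        *)             # no token: options are appended to the command
            printf '%s %s\n' "$cmd" "$opts" ;;
    esac
}

expand_launch_options '-foo' '/path/to/game'
# /path/to/game -foo
expand_launch_options 'FOO=bar BAZ=bar %command% -baz' '/path/to/game'
# FOO=bar BAZ=bar /path/to/game -baz
```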
<br />
== Tips and tricks ==<br />
<br />
=== Proton Steam-Play ===<br />
<br />
Valve developed a compatibility tool for Steam Play based on Wine and additional components. It allows you to launch many Windows games (see [https://www.protondb.com/ compatibility list]).<br />
<br />
It is open source and available on [https://github.com/ValveSoftware/Proton/ GitHub]. Steam will install its own versions of Proton when Steam Play is enabled.<br />
<br />
Proton needs to be enabled in the Steam client: {{ic|Steam > Settings > Steam Play}}. In that dialog, you can enable Steam Play both for games that have been whitelisted by Valve and for those that have not.<br />
<br />
If needed, to force enable Proton or a specific version of Proton for a game, right click on the game, click {{ic|Properties > General > Force the use of a specific Steam Play compatibility tool}}, and select the desired version. Doing so can also be used to force games that have a Linux port to use the Windows version.<br />
<br />
You can also install Proton from the AUR with {{AUR|proton}} or {{AUR|proton-git}}, but extra setup is required for them to work with Steam. See the Proton GitHub page for details on how Steam recognizes Proton installs.<br />
<br />
=== Big Picture Mode without a window manager ===<br />
<br />
To start Steam in Big Picture Mode from a [[Display manager]], you can either:<br />
<br />
* Install {{AUR|steamos-compositor}}<br />
* Alternatively, install {{AUR|steamos-compositor-plus}}, which hides the annoying color flashing on startup of Proton games and adds a fix for games that start in the background<br />
* Manually add a Steam entry (''but you lose the steam compositor advantages: mainly you '''can't''' control Big Picture mode with keyboard or gamepad''):<br />
<br />
create a {{ic|/usr/share/xsessions/steam-big-picture.desktop}} file with the following contents: <br />
<br />
{{hc|/usr/share/xsessions/steam-big-picture.desktop|<nowiki><br />
[Desktop Entry]<br />
Name=Steam Big Picture Mode<br />
Comment=Start Steam in Big Picture Mode<br />
Exec=/usr/bin/steam -bigpicture<br />
TryExec=/usr/bin/steam<br />
Icon=<br />
Type=Application</nowiki>}}<br />
<br />
=== Steam skins ===<br />
<br />
The Steam interface can be customized using skins. Skins can overwrite interface-specific files in {{ic|~/.steam/root}}.<br />
<br />
To install a skin:<br />
<br />
# Place its directory in {{ic|~/.steam/root/skins}}.<br />
# Open ''Steam > Settings > Interface'' and select it.<br />
# Restart Steam.<br />
<br />
An extensive list of skins can be found in [http://forums.steampowered.com/forums/showthread.php?t=1161035 this Steam forums post].<br />
<br />
{{Note|Using an outdated skin may cause visual errors.}}<br />
<br />
==== Creating skins ====<br />
<br />
Nearly all Steam styles are defined in {{ic|~/.steam/root/resource/styles/steam.styles}} (the file is over 3,500 lines long). For a skin to be recognized it needs its own {{ic|resource/styles/steam.styles}}.<br />
When a Steam update changes the official {{ic|steam.styles}} your skin may become outdated, potentially resulting in visual errors.<br />
<br />
See {{ic|~/.steam/root/skins/skins_readme.txt}} for a primer on how to create skins.<br />
<br />
=== Changing the Steam notification position ===<br />
<br />
The default Steam notification position is bottom right.<br />
<br />
You can change the Steam notification position by altering {{ic|Notifications.PanelPosition}} in<br />
<br />
* {{ic|resource/styles/steam.styles}} for desktop notifications, and<br />
* {{ic|resource/styles/gameoverlay.styles}} for in-game notifications<br />
<br />
Both files are overwritten by Steam on startup and {{ic|steam.styles}} is only read on startup.<br />
<br />
{{Note|Some games do not respect the setting in {{ic|gameoverlay.styles}} e.g. XCOM: Enemy Unknown.}}<br />
<br />
==== Use a skin ====<br />
<br />
You can create a skin to change the notification position to your liking. For example to change the position to top right:<br />
<br />
$ cd ~/.steam/root/skins<br />
$ mkdir -p Top-Right/resource<br />
$ cp -r ~/.steam/root/resource/styles Top-Right/resource<br />
$ sed -i '/Notifications.PanelPosition/ s/"[A-Za-z]*"/"TopRight"/' Top-Right/resource/styles/*<br />
<br />
==== Live patching ====<br />
<br />
{{ic|gameoverlay.styles}} can be overwritten while Steam is running, allowing you to have game-specific notification positions.<br />
<br />
{{hc|~/.steam/notifpos.sh|<br />
sed -i "/Notifications.PanelPosition/ s/\"[A-Za-z]*\"/\"$1\"/" ~/.steam/root/resource/styles/gameoverlay.styles<br />
}}<br />
<br />
And the [[#Launch options]] should be something like:<br />
<br />
~/.steam/notifpos.sh TopLeft && %command%<br />
<br />
=== Steam Remote Play ===<br />
<br />
{{Note|Steam In-Home Streaming [https://store.steampowered.com/news/51761/ has become Steam Remote Play].}}<br />
<br />
Steam has built-in support for [http://store.steampowered.com/streaming/ remote play].<br />
<br />
See [https://steamcommunity.com/sharedfiles/filedetails/?id=680514371 this Steam Community guide] on how to set up a headless streaming server on Linux.<br />
<br />
==== Different subnets ====<br />
<br />
{{Remove|This workaround/hack seems to be no longer relevant since it became a feature of [https://store.steampowered.com/news/51761/ Steam Remote Play].}}<br />
<br />
The Steam client will not be able to detect the host if the two are on different subnets, which is a common case when using a VPN to your home network. Even if client and host can ping each other, the Steam client will still not detect the host, so you need to force the connection. To do so, start Steam with the following command:<br />
<br />
$ steam -console<br />
<br />
Wait until Steam starts. Once it has loaded, you will find an extra tab named "Console". Open it and paste the command below, with the correct host IP address:<br />
<br />
connect_remote <host_ip>:27036<br />
<br />
You will see a notification that you can now stream games from the host machine.<br />
<br />
{{Note|If the above does not work, Windows is likely blocking all incoming traffic from different subnets, which means any connections coming from the VPN tunnel will be dropped. This can also be confirmed by ping requests from your VPN client to the Windows machine going unanswered. To work around this, configure (or disable) all Windows firewalls (including those of any installed antivirus software).}}<br />
<br />
{{Tip|See [[Gaming#Remote gaming]] for alternatives if the above solution does not work.}}<br />
<br />
=== Steam Controller ===<br />
<br />
Normally a Steam Controller requires the use of the Steam overlay. In native Linux games run outside of Steam, however, the overlay may not be practical. To handle this, the Steam client maintains a "desktop configuration" while it is running. With your Steam Controller, set up this desktop configuration as a generic Xbox controller. As long as the Steam client is running, you can then use your Steam Controller in other games, such as GOG games, as an Xbox controller. Make sure to select your controller type to map to in "general controller settings".<br />
<br />
== Troubleshooting ==<br />
<br />
See [[Steam/Troubleshooting]].<br />
<br />
== See also ==<br />
<br />
* [https://wiki.gentoo.org/wiki/Steam Gentoo Wiki article]<br />
* [https://pcgamingwiki.com/wiki/The_Big_List_of_DRM-Free_Games_on_Steam The Big List of DRM-Free Games on Steam] at PCGamingWiki<br />
* [http://steam.wikia.com/wiki/List_of_DRM-free_games List of DRM-free games] at Wikia<br />
* [http://store.steampowered.com/browse/linux Steam Linux store]<br />
* [https://github.com/ValveSoftware/Proton/ Proton], a compatibility tool for Steam Play based on Wine and additional components.</div>