From ArchWiki


/etc/security/limits.conf allows setting resource limits for users logged in via PAM. This is a useful way of preventing, for example, fork-bombs from using up all system resources.

Note: The file does not affect system services. For systemd services, the limits are controlled by /etc/systemd/system.conf, /etc/systemd/user.conf, and per-unit drop-in files such as /etc/systemd/system/unit.d/override.conf. See the systemd-system.conf(5) man page for details.
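As a sketch, a drop-in file for a hypothetical service foo.service could raise its open-file limit like this (the unit name and value are only examples):

/etc/systemd/system/foo.service.d/override.conf

[Service]
LimitNOFILE=8192

After creating or editing a drop-in, run systemctl daemon-reload and restart the unit for the new limit to take effect.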


The default file comes well-commented, but extra information can be gleaned by checking the limits.conf(5) man page.



Corefiles are useful for debugging, but a nuisance during normal use. A good setup is a soft limit of 0 and a hard limit of unlimited: core files are disabled by default, but you can temporarily raise the limit for the current shell with ulimit -c unlimited when you need them for debugging.

*           soft    core       0           # Prevent corefiles from being generated by default.
*           hard    core       unlimited   # Allow corefiles to be temporarily enabled.
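To illustrate the workflow described above, the following shell session checks and temporarily raises the core-file limit; the change applies only to the current shell and its children:

```shell
# Show the soft core-file size limit for the current shell
# (0 with the limits.conf entries above)
ulimit -Sc

# Temporarily allow core files of any size in this shell only;
# this only succeeds when the hard limit is "unlimited", as above
ulimit -c unlimited
ulimit -Sc
```

Opening a new shell restores the defaults from limits.conf.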


You should disallow everyone except root from running processes at the minimum niceness (-20), so that root can always out-prioritize other processes and fix an unresponsive system.

*           hard    nice       -19         # Prevent non-root users from running processes at minimal niceness.
root        hard    nice       -20         # Allow root to run a process at minimal niceness to fix an unresponsive system.
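For reference, niceness is set at launch with nice(1); with the limits above, only root can reach -20 (the sleep command here is just a placeholder):

```shell
# Print the niceness new processes inherit from this shell (usually 0)
nice

# Start a command at the highest priority; requires root, and the
# "hard nice -19" entry above keeps other users from reaching -20
nice -n -20 sleep 1
```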


This limits the number of file descriptors any single process owned by the specified domain can have open at one time. You may need to raise this value to 8192 or so for certain games to work, and some database applications such as MongoDB or Apache Kafka recommend setting nofile to 64000 or 128000[1].

*           hard    nofile     65535
*           soft    nofile     8192        # Required for certain games to run.
Warning: Setting this value too high or to unlimited may break some tools like fakeroot.
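To verify what a session actually received, the ulimit builtin reports both the soft and hard open-file limits:

```shell
# Show the soft limit on open file descriptors for this session
ulimit -Sn

# Show the hard limit, the ceiling up to which the soft limit may be raised
ulimit -Hn
```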


Having an nproc limit is important, because it caps how many processes a fork-bomb can spawn. However, setting it too low can make your system unstable or even unusable, as new processes cannot be created.

A value of 300 is too low for even the most minimal window manager running more than a few desktop applications and daemons, but it is often fine for a server without X. (300 is, in fact, the value the University of Georgia's Computer Science department used as the undergraduate process limit on its Linux servers in 2017.)

Here is an example nproc limit for all users on a system:

*           hard    nproc      2048        # Prevent fork-bombs from taking out the system.

Note that this value of 2048 is just an example; you may need to set yours higher, or you may be able to get away with a lower one.

Whatever you set nproc to, make sure root is allowed to create as many processes as it needs; otherwise you might render the system inoperable by setting the normal nproc limit too low. Note that this line has to come after the global hard limit, and that the value below (65536) is arbitrary.

root        hard    nproc      65536       # Ensure root can always launch enough processes.
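To see what limit a session is running under, and to try out a stricter value before committing it to limits.conf, you can experiment in a throwaway subshell (the value 1024 is only an example):

```shell
# Show the per-user process limit (nproc) for the current session
ulimit -u

# Try a stricter soft limit in a subshell; -S leaves the hard limit
# untouched, so the change cannot lock you out of the parent shell
bash -c 'ulimit -S -u 1024; ulimit -S -u'
```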


The default niceness should generally be 0, but you can set individual users and groups to have different default priorities using this parameter.

*           soft    priority   0           # Set the default priority to neutral niceness.
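As an illustration, a hypothetical group of batch users could be given a higher default niceness, so their jobs yield CPU time to interactive sessions (the group name and value here are only examples):

@batch      soft    priority   5           # Start batch users' processes at niceness 5.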