

From kernel doc:

tcp_rfc1337 - BOOLEAN
	If set, the TCP stack behaves conforming to RFC1337. If unset,
	we are not conforming to RFC, but prevent TCP TIME_WAIT
	assassination.
	Default: 0

So, isn't 0 the safe value? Our wiki says otherwise. -- Lahwaacz (talk) 08:56, 17 September 2013 (UTC)

With the setting at 0 the system will 'assassinate' a socket in TIME_WAIT prematurely upon receiving a RST. While this might sound like a good idea (it frees up a socket sooner), it opens the door to TCP sequence problems/SYN replay. Those problems are described in RFC 1337, and setting the value to 1 is one way to deal with them (letting TIME_WAIT sockets idle out even if a reset is received, so that the sequence number cannot be reused in the meantime). The wiki is correct in my view. The kernel doc is wrong here - "prevent" should read "enable". --Indigo (talk) 21:12, 17 September 2013 (UTC)
Since this discussion is still open: an interesting attack against the kernel's implementation of the related RFC 5961 was published yesterday as CVE-2016-5696. I have not looked into it enough to form an opinion on whether leaving this setting at the default 0, or setting it to 1, makes any difference there, but it is exactly the kind of sequencing attack I was referring to three years back. --Indigo (talk) 08:38, 11 August 2016 (UTC)
Any news about this? Has anyone performed more research and analysis on this? Timofonic (talk) 17:51, 31 July 2017 (UTC)
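For readers following this thread: the behaviour under discussion is toggled with a single sysctl. A minimal persistent fragment might look like the following (the filename is only an example, not an existing convention of the article):

```
# /etc/sysctl.d/99-tcp-rfc1337.conf (example filename)
# 1 = conform to RFC 1337: ignore RSTs received for sockets in TIME_WAIT,
# so the TIME_WAIT state cannot be 'assassinated' prematurely.
net.ipv4.tcp_rfc1337 = 1
```

It can be applied without a reboot with sysctl --system (reload all configuration files) or sysctl -w net.ipv4.tcp_rfc1337=1.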

Virtual memory

The official documentation states that these two variables "Contain[s], as a percentage of total available memory that contains free pages and reclaimable pages,..." and that "The total available memory is not equal to total system memory.". However, the comment underneath talks about them as if they were a percentage of system memory, which makes it quite confusing; e.g. I have 6 GiB of system memory but only 1-2 GiB available.

Also the defaults seem to have changed, I have dirty_ratio=50 and dirty_background_ratio=20.

-- DoctorJellyface (talk) 08:27, 8 August 2016 (UTC)

Yes, I agree. When I changed the section a little with [1], I left the comment in. The reason was that, while it simplifies things in its current form, expanding it to first show the difference between system memory and available memory and only then calculate the percentages makes it cumbersome and complicated to follow. If you have an idea how to do it, please go ahead. --Indigo (talk) 09:07, 8 August 2016 (UTC)
I would like this to be explained as an "introduction" to both concepts, to avoid misconfiguration. I think I somewhat understand it, but I still have questions (available memory after booting, pre- or post-systemd? available memory while using the system? etc.). Even though there may be documents explaining this, it would make the article friendlier to read. Of course, there can be links to more specific documents for those who want to know more. Timofonic (talk) 18:14, 31 July 2017 (UTC)
The problem is that the kernel docs don't explain what "available memory" really means. Assuming that it changes similarly to what free shows, taking the system memory instead is still useful to prepare for the worst case. -- Lahwaacz (talk) 09:11, 8 August 2016 (UTC)
Yes, and it is the worst case also because "available" grows disproportionately with installed RAM: some slices, like system memory reserved for the BIOS or GPU, will not change regardless of the total installed RAM. I've had my go with [2]. --Indigo (talk) 07:54, 9 August 2016 (UTC)
I'm still not sure about certain parameters: default examples are provided, but not all of them explain why those numbers are used or how they can be calculated for different use cases. I may be wrong, but this article should provide a more comprehensive and pedagogical explanation of each concept than the Linux kernel documentation (which I assume is aimed more at developers), explaining the best "default" values and how to tune them depending on system usage. From my limited perspective, each parameter could be considered for different types of systems: desktop (average use; low latency for interactive operations; low latency for interactive operations combined with intensive workloads such as compiling with GCC/LLVM, tons of windows/tabs in a web browser, big apps written in interpreted/bytecode languages such as Python/Java/Mono, etc.) and server (a server with some interactivity, e.g. providing HTPC features). I have no more ideas, just lots of questions, sorry. I hope someone with more knowledge can discuss this and provide at least some better-explained information. Thanks for your efforts; the Arch community is a great place to be. Timofonic (talk) 18:14, 31 July 2017 (UTC)
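To make the worst-case point above concrete, here is a small sketch showing how far apart the two readings of the percentage can be. The memory figures are hypothetical, loosely matching the 6 GiB system mentioned earlier in this thread:

```shell
# Dirty-page threshold under two readings of vm.dirty_ratio=20.
mem_total_kib=6291456        # 6 GiB of installed RAM (hypothetical)
mem_available_kib=1572864    # ~1.5 GiB "available", as free(1) might report (hypothetical)
dirty_ratio=20

# If the percentage applied to total system memory:
echo "vs MemTotal:     $(( mem_total_kib * dirty_ratio / 100 / 1024 )) MiB"   # 1228 MiB
# If it applied only to "available" memory:
echo "vs MemAvailable: $(( mem_available_kib * dirty_ratio / 100 / 1024 )) MiB"   # 307 MiB
```

Sizing against total system memory, as the article now does, is therefore the conservative choice: the real threshold can only be smaller than that estimate.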

added vfs_cache_pressure parameter

let me know if it's OK --nTia89 (talk) 18:15, 26 December 2016 (UTC)

Fine with me. Cheers. --Indigo (talk) 15:05, 27 December 2016 (UTC)
That's okay, thanks a lot for adding it. But why did you choose 60 as the value? What is the logic behind it? Might it need to change for certain usage situations of the system? And if so, how can one be sure to change it in a correct way? Timofonic (talk) 18:18, 31 July 2017 (UTC)
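For anyone reading along, a sketch of how the parameter in question is set persistently (the filename is only an example, and the value 60 simply mirrors the one questioned above, not a recommendation):

```
# /etc/sysctl.d/99-vfs.conf (example filename)
# The kernel default is 100. Values below 100 make the kernel prefer to
# retain dentry/inode caches; values above 100 make it reclaim them sooner.
vm.vfs_cache_pressure = 60
```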

Troubleshooting: Small periodic system freezes

This is something that happens occasionally on my system, especially when a considerable number of tabs are open across different windows (50-70 tabs over 4-5 windows, for example).

- Dirty bytes: Why use the 4M value? Is there an explanation for it? Can it be fine-tuned? What does it mean?
- Change kernel.io_delay_type: There is a list of the different types, but zero explanation of them. What does each one mean? How does each change the behaviour of the system? How do I find which one is best for my system?

Sorry for asking so much; I'm trying to understand certain concepts that are still difficult for me. I'm sorry if there are already good sources about them; I was unable to locate any. Thanks for your patience.

Timofonic (talk) 18:27, 31 July 2017 (UTC)
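For context on the first question above: setting absolute byte limits trades throughput for latency by forcing writeback to start after a small fixed amount of dirty data, instead of a percentage of memory, so a burst of I/O cannot build up a huge backlog that freezes the system while it flushes. A sketch of the kind of setting being discussed (the filename is only an example, and whether a value as low as 4 MiB suits a given workload has to be tested):

```
# /etc/sysctl.d/99-dirty.conf (example filename)
# Setting the *_bytes variants overrides dirty_ratio/dirty_background_ratio.
vm.dirty_background_bytes = 4194304   # 4 MiB: start background writeback early
vm.dirty_bytes = 4194304              # 4 MiB: throttle writers beyond this point
```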