Talk:Raspberry Pi


Overclocking section

"The overclocked setting for CPU clock applies only when the governor throttles up the CPU, i.e. under load. To query the current frequency of the CPU:"

This is not entirely true. I had tried to post more information and it was edited. This is an overview wiki and it needs to be very concise, I get that. However, the problem is that the Pi kernel now defaults to the 'powersave' governor on boot-up as a failsafe against an overclocking failure. After spending an afternoon playing with minimum and maximum frequency settings, I am convinced that the powersave governor is so safe that it ignores any user-set frequency range, and the default factory Pi setting might be hard-coded in (maybe a packager could verify that).
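For anyone following along, the active governor and its frequency limits can be read from the standard cpufreq sysfs interface. This is generic Linux, not Pi-specific, and only a sketch (the paths exist only on systems with cpufreq support, hence the guard):

```shell
# Read the active cpufreq governor and its frequency limits for cpu0.
# These sysfs paths are the standard Linux cpufreq interface.
policy=/sys/devices/system/cpu/cpu0/cpufreq
if [ -d "$policy" ]; then
    echo "governor: $(cat "$policy/scaling_governor")"
    echo "min/max:  $(cat "$policy/scaling_min_freq")/$(cat "$policy/scaling_max_freq") kHz"
else
    echo "no cpufreq interface on this system"
fi
```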

Things I noted in my tests with the powersave governor:

  • I set the minimum to the Modest setting in /boot/config and, even with a 100% load from several dd commands running at a nice of -19, the overclocking never kicked in, even with the processor fully loaded.
  • There is not really a better way to switch governor modules at boot time than using userspace systemd tools (and defaulting to a sane frequency is the most desirable behavior for an experimental platform). (See this-> [1])
  • The comment in the config.txt file on my Pi is incorrect: '## Some over clocking settings, govenor already set to ondemand'
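The load test described above can be sketched roughly as follows. This is a hypothetical reconstruction, not the exact commands used; nice -19 needs root (without root, GNU nice warns and runs the workers at normal priority), and the sysfs read is guarded so it degrades gracefully on systems without cpufreq:

```shell
# Spin up a few dd workers at high priority, then sample the current
# CPU frequency to see whether the governor throttles up under load.
pids=""
for i in 1 2 3 4; do
    nice -n -19 dd if=/dev/zero of=/dev/null bs=1M 2>/dev/null &
    pids="$pids $!"
done
sleep 5
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq 2>/dev/null \
    || echo "cpufreq interface not available"
kill $pids 2>/dev/null
```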

This should possibly be noted in the Troubleshooting section? (e.g., cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq never reflects the overclock setting even under high load. Make sure the powersave governor is not loaded.)

I will file a bug report to have the incorrect comment replaced in config.txt.

—This unsigned comment is by Adsicks (talk) 20:44, 20 August 2014‎. Please sign your posts with ~~~~!

Assuming that the powersave governor does not throttle up the CPU, the note is still accurate. But I can confirm that Arch Linux ARM still uses ondemand as the default governor. -- Lahwaacz (talk) 20:58, 20 August 2014 (UTC)
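For the record, one common way to pin the default governor on Arch is the cpupower package, whose systemd unit applies /etc/default/cpupower at boot. This is a sketch of the usual setup (run as root; package and unit names are from Arch's cpupower package), not a claim about what Arch Linux ARM ships by default:

```shell
# Install cpupower and have its systemd unit set the governor at boot.
pacman -S cpupower
# In /etc/default/cpupower, set:  governor='ondemand'
systemctl enable --now cpupower.service
# Verify the active policy:
cpupower frequency-info --policy
```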

Benchmarking comments

Alad - I believe these additions, inspired by this forum post, offer the layman some guidance when dealing with slow performance, not knowing why, and wanting to remedy it. I carefully selected the wording and metrics to avoid promoting any particular brand and to provide not objective but quantitative results that users can use as a point of comparison against their own systems. How would you propose changing the content so that it does not give "an objective measure"? Graysky (talk) 22:55, 9 November 2015 (UTC)

I looked over the section again, and the first part (up until ".. for illustrative purposes") does look good - encouraging users to compare their own results, rather than try to make something conclusive for everyone like the old SSD benchmarking article. I'm still unconvinced on the part after. -- Alad (talk) 08:00, 10 November 2015 (UTC)
Well, one hallmark of a correctly designed scientific experiment is results relative to a control. Since there is no reference hardware available to users, I used my own experience with a "slow" and a "fast" card to serve as surrogates for people. I don't know how else to provide a positive control, and I respectfully disagree with your assessment of that section. Graysky (talk) 08:51, 10 November 2015 (UTC)
Why is the section on the Raspberry Pi page on ArchWiki? It is neither specific to Rpi nor Arch.
What is the purpose of the section? If people have a fast enough card from the beginning, they don't care about the section, and if their card is slow, they don't need to be told how to verify that it is slow.
As to the scientific accuracy, why do you compare only Class 10 vs. UHS-I? Some more extensive benchmarks show that even a Class 6 or 4 card can be faster on random write than Class 10 [2], and similarly some Class 10 cards can be faster than UHS-I [3]. Your experiment only shows the results for your particular cards, in a totally unreproducible way, without even mentioning the vendor and model names. From what I can tell, you're comparing an aged low-end Class 10 card with a brand-new high-end UHS-I model. Edit: Also, the conditions of the experiment are subjective, because the performance depends on the filesystem used, the kernel version and its configuration (namely the virtual memory settings). Optimizing performance is not solely about the hardware.
-- Lahwaacz (talk) 14:03, 10 November 2015 (UTC)
The answer to your first question is in the text I wrote. To your second point: yes, there is disagreement on the web on this topic, as is the case for many topics. Yes, my results are consistent with my hardware; I also explained this in the text. I disagree with your assessment that what I wrote is "totally unreproducible" for the reasons you cited. I would challenge you to take two cards of different vintages and show similar differences in the random writes and in a discrete pacman installation, as I have done. I suspect you would draw similar conclusions. You aren't looking for exactly the same values I show, but you are looking for similar fold differences. Finally, providing additional details can be done, but I want to keep the section brief. Perhaps breaking it out into its own page, distilling the main point down to a few sentences, and linking it would be more appropriate for the article. Graysky (talk) 09:45, 11 November 2015 (UTC)
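As a portable stand-in for the iozone figures discussed here, a synced dd write gives a rough throughput number anyone can compare against. This is only an illustration, not the benchmark used in the article, and it measures sequential writes only; iozone's 4k random-write results remain the more telling metric:

```shell
# Time a synced 8 MiB write in 4 KiB records in the current directory.
# conv=fsync forces the data out to the card before dd reports its
# transfer rate, so a slow card shows up in the throughput figure.
dd if=/dev/zero of=ddtest.bin bs=4k count=2048 conv=fsync
rm -f ddtest.bin
```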
See what reproducibility is about. Without an exact description of the subject of the experiment (i.e. the cards' vendor and model names), it cannot be reproducible. Even then the conclusion would be subjective, because 2 cards is not a statistically significant sample. The point of citing the external benchmarks above was that you don't necessarily need a UHS-I card to achieve better performance than Class 10, as your text was suggesting. Therefore, even the relative difference in the performance of your own 2 cards is meaningless. -- Lahwaacz (talk) 13:45, 11 November 2015 (UTC)
I guess I don't really care about the edits enough to further the discussion here. I will consider some transition between the iozone bit I wrote and the link you provided when I get a minute. I do feel that capturing the delay during system updates is key, since that is what alerted me and at least one other user (see the Arch ARM forum post I linked) to the problem. It stands to reason that others might have these RPi2/RPi delays on file writes and think that somehow the distro is to blame, or that the I/O bus is saturated, when in fact neither is the cause. Graysky (talk) 21:38, 12 November 2015 (UTC)
The key to the interruptions during system updates is the virtual memory options. If you e.g. increase vm.dirty_ratio, you will get less frequent interruptions, but obviously they will be longer. This configuration is completely general, only desktop users don't care anymore since the HDDs and SSDs are so damn fast these days... -- Lahwaacz (talk) 12:29, 14 November 2015 (UTC)
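The writeback tunables referred to here can be inspected under /proc/sys/vm and set persistently through sysctl.d. The override values in the comments are purely illustrative, not recommendations:

```shell
# Current writeback thresholds: the percentage of reclaimable memory that
# may be dirty before writeback is forced (dirty_ratio) or started in the
# background (dirty_background_ratio).
cat /proc/sys/vm/dirty_ratio
cat /proc/sys/vm/dirty_background_ratio
# Persistent override (illustrative values), as root:
#   echo 'vm.dirty_ratio = 20' > /etc/sysctl.d/99-writeback.conf
#   sysctl --system
```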