On Tuesday 24. November 2015 19.36.14 Paul Sokolovsky wrote:
Just as x86-32, ARMv7 has physical address extension http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ddi0438i/CHDCGIBF.html , so it can address more than 4Gb of physical memory. That still leaves 4Gb of virtual memory per process, and thanks god - bloating memory size doesn't mean growing its speed, so the more memory, the slower it all works.
4GB or 4Gb? I guess you mean the former. Again, I haven't kept up with this, so it's useful to know. I remember that the i386 architecture had support for larger physical address spaces (PAE), but I guess it was more convenient to move towards the amd64 variant in the end.
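As a minimal illustration (just a sketch): a program compiled as a 32-bit binary - e.g. with gcc -m32, or natively on an armv7 userland - reports 4-byte pointers and therefore a 4GB ceiling on its virtual address space, even when PAE/LPAE lets the kernel reach far more physical RAM:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* Pointer width caps the virtual address space of a single process,
     * regardless of how much physical memory the kernel can see. */
    printf("pointer size: %zu bytes\n", sizeof(void *));
    printf("largest virtual address: %ju\n", (uintmax_t)UINTPTR_MAX);
    return 0;
}

On a 32-bit build the second line prints 4294967295, i.e. the 4GB boundary you mention; a 64-bit build prints something astronomically larger.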
Generally, it's pretty depressing to read this memory FUD on mailing list of "sustainable computing" project. What mere people would need more memory for? Watching movies? Almost nobody puts more than 1Gb because *it's not really needed*. And for sh%tty software, no matter if you have 1, 2, or 8GB - it will devour it and sh%t it all around, making the system overall work slower and slower with more memory. (I'm currently sitting on 16Gb box with constant 100% cpu load - it's Firefox collecting garbage in its 6Gb javascript heap - forever and ever).
FUD? Ouch! Thanks for classifying some pretty innocent remarks in such a negative way. For your information, my primary machine for the last ten years has only ever had 1GB - and I was making do with 128MB for years before that - and at times I have felt (or have been made to feel) behind the times by people who think everybody went amd64 and that nobody develops on 32-bit Intel any more.
Yes, software is bloated and we can often do what we need with less. Another interest of mine is old microcomputers where people used to do stuff with a few KB, just as you've discovered...
For comparison, my latest discovery is relational database engines which can execute queries in a few *kilobytes* of RAM - https://github.com/graemedouglas/LittleD and Contiki Antelope http://dunkels.com/adam/tsiftes11database.pdf
It's worth bearing in mind that PostgreSQL was (and maybe still is) delivered with a conservative configuration that was aimed at systems of the mid- to late-1990s. Contrary to what people would have you believe, multi-GB query caches are usually a luxury, not a necessity.
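To make that concrete (the values below are purely illustrative, not the actual shipped defaults): most of PostgreSQL's memory appetite is governed by a handful of postgresql.conf settings, and the values it has shipped with have historically been tiny by modern standards, something along these lines:

  shared_buffers = 32MB         # the server's own page cache
  work_mem = 4MB                # per sort/hash operation before spilling to disk
  effective_cache_size = 128MB  # a planner hint about the OS cache, not an allocation

You can turn those up on a big box, but the engine itself runs fine doing useful work within a few tens of MB.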
Anyway, my point about memory still stands: you can get by with 256MB for sure, but people want to run stuff that happens to use more than that. It's not their fault that distributions are steadily bloating themselves, and these users don't necessarily have the time or expertise to either seek out or rework more efficient distributions. Moreover, some of the leaner distributions are less capable, to the point where it really is worth evaluating what a bit of extra memory can deliver.
Finally, there are also genuine reasons for wanting more RAM: not for programs but for manipulating data efficiently. And on sustainability, after a while it will become rather more difficult to source lower-capacity components, and they may well be less energy-efficient than their replacements, too.
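To put a rough (purely illustrative) number on the data side: a 20-megapixel photo held uncompressed at 4 bytes per pixel is 20,000,000 x 4 = 80MB, and an editor keeping a working copy plus an undo buffer is already near 240MB for one image - before the application, the desktop and the OS have taken their share.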
Some progress is actually worth having, you know.
Paul