Don't forget free CULTURE.
Too much focus on illusions of grander security, minimizing already-minimal black swans, makes us forget that we also want a clean mental environment, and to have as well as give meaningful challenges whose outcomes express the spirits of the souls experiencing the prerequisite difficulty.
We want diversity AND isolation to have a creative society. That requires redundancy, which requires excluding exclusive property. Libre property.
I want to clarify my illusion comment.
One cannot guarantee security unless they study their computer's entire operation, including how each program affects each other program, individually as well as in combination. On the security front, open source helps against unintentional glitches, only some designed defects, as well as bloatware.
Linux's amplifying complexity hampers security.
Any excess instructions on the machine amplify complexity, hampering security.
Open source enables free culture, thereby enabling one to remix or customize one's mental environment.
On 12/8/18, Jean Flamelle eaterjolly@gmail.com wrote:
Don't forget free CULTURE.
Too much focus on illusions of grander security, minimizing already-minimal black swans, makes us forget that we also want a clean mental environment, and to have as well as give meaningful challenges whose outcomes express the spirits of the souls experiencing the prerequisite difficulty.
We want diversity AND isolation to have a creative society. That requires redundancy, which requires excluding exclusive property. Libre property.
On 12/9/18, Jean Flamelle eaterjolly@gmail.com wrote:
I want to clarify my illusion comment.
One cannot guarantee security unless they study their computer's entire operation including how each other program. Open source helps
how each program affects each other program, individually as well as in combination.*
On Sunday 9. December 2018 17.24.14 Jean Flamelle wrote:
Linux's amplifying complexity hampers security.
Of course, one could look more closely at microkernel-based systems for a possible remedy. Sadly, ever since the famous Torvalds versus Tanenbaum discussion, plenty of people cling to the remarks of the former as he sought to ridicule the work of the latter, oblivious to the fact that...
- Microkernel performance was always a tradeoff (acknowledged by the DMERT work done by Bell Labs in the 1970s and in other contemporary work).
- Performance has improved substantially over the years and in some cases wasn't that bad to begin with, either.
- Billions of devices have shipped with microkernels.
Some people also probably cling to the idea that Torvalds "won" his debate. Now that MINIX 3 runs in every Intel CPU supporting Management Engine functionality, it is clear who actually won, at least in terms of the "bottoms on seats" measure of success that the Linux kernel developers tend to emphasise over things like GPL compliance by vendors (some of those vendors being Linux Foundation members, of course).
Paul
P.S. I wasn't going to dignify this thread with a reply because of the inflammatory terminology used in the subject, suggesting some kind of irrational fervour. In fact, demanding and ensuring complete control over the technologies we use is a pragmatic and entirely justified strategy for anyone who cares about their data, their computing capabilities, and even their quality of life.
In fact, demanding and ensuring complete control over the technologies we use is a pragmatic and entirely justified strategy for anyone who cares about their data, their computing capabilities, and even their quality of life.
I think it goes much further than that: the control Apple/Google/Facebook/... have over a large proportion of devices can give them sufficient leverage to make it possible for them to control you as well, even if you don't use their services (e.g. by them arranging to have people elected who in turn introduce legislation or executive orders that impact you).
You might argue we're not there yet, but the Cambridge Analytica affair makes it pretty obvious that it's a real possibility.
Stefan
On Sun, 09 Dec 2018 22:38:32 -0500 Stefan Monnier monnier@iro.umontreal.ca wrote:
In fact, demanding and ensuring complete control over the technologies we use is a pragmatic and entirely justified strategy for anyone who cares about their data, their computing capabilities, and even their quality of life.
I think it goes much further than that: the control Apple/Google/Facebook/... have over a large proportion of devices can give them sufficient leverage to make it possible for them to control you as well, even if you don't use their services (e.g. by them arranging to have people elected who in turn introduce legislation or executive orders that impact you).
You might argue we're not there yet, but the Cambridge Analytica affair makes it pretty obvious that it's a real possibility.
Stefan
Everyone remembers CA, but I remember when the local news (ABC, NBC, someone) reported on Obama doing the same thing. I have not watched through all the reporting on youtube to locate the segment (and I did not have internet access back then, so no chance of Conservative talk show/news room poisoning), so I did a quick search:
https://nypost.com/2018/03/20/obamas-former-media-director-said-facebook-was...
It is wholly cited.
https://www.politifact.com/truth-o-meter/statements/2018/mar/22/meghan-mccai...
As is this one.
Together they paint a picture a lot larger than CA and the Republican party, a lot larger than Obama and the Democratic party.
Here are some more:
https://yro.slashdot.org/story/18/07/02/2057256/google-allows-outside-app-de...
https://www.wired.com/story/our-minds-have-been-hijacked-by-our-phones-trist...
https://www.theguardian.com/technology/2017/oct/05/smartphone-addiction-sili...
A poor excuse:
https://www.cnn.com/2018/03/26/opinions/data-company-spying-opinion-schneier...
Just how far it has gone:
https://tech.slashdot.org/story/18/03/23/1811230/my-cow-game-extracted-your-...
When will the world wake up? It's not just A, it's A and B through Z!
Sincerely, David
On Mon, Dec 10, 2018 at 12:45:42AM +0100, Paul Boddie wrote:
Of course, one could look more closely at microkernel-based systems for a possible remedy. Sadly, ever since the famous Torvalds versus Tanenbaum discussion, plenty of people cling to the remarks of the former as he sought to ridicule the work of the latter, oblivious to the fact that...
- Microkernel performance was always a tradeoff (acknowledged by the DMERT work done by Bell Labs in the 1970s and in other contemporary work).
- Performance has improved substantially over the years and in some cases wasn't that bad to begin with, either.
- Billions of devices have shipped with microkernels.
Some people also probably cling to the idea that Torvalds "won" his debate. Now that MINIX 3 runs in every Intel CPU supporting Management Engine functionality, it is clear who actually won, at least in terms of the "bottoms on seats" measure of success that the Linux kernel developers tend to emphasise over things like GPL compliance by vendors (some of those vendors being Linux Foundation members, of course).
Just curious -- what microkernel systems are available to run on modern home computers just in case one is tired of Linux and wanting to try something else?
-- hendrik
On December 10, 2018 8:48:56 AM EST, Hendrik Boom hendrik@topoi.pooq.com wrote:
On Mon, Dec 10, 2018 at 12:45:42AM +0100, Paul Boddie wrote:
Of course, one could look more closely at microkernel-based systems for a possible remedy. Sadly, ever since the famous Torvalds versus Tanenbaum discussion, plenty of people cling to the remarks of the former as he sought to ridicule the work of the latter, oblivious to the fact that...
- Microkernel performance was always a tradeoff (acknowledged by the DMERT work done by Bell Labs in the 1970s and in other contemporary work).
- Performance has improved substantially over the years and in some cases wasn't that bad to begin with, either.
- Billions of devices have shipped with microkernels.
Some people also probably cling to the idea that Torvalds "won" his debate. Now that MINIX 3 runs in every Intel CPU supporting Management Engine functionality, it is clear who actually won, at least in terms of the "bottoms on seats" measure of success that the Linux kernel developers tend to emphasise over things like GPL compliance by vendors (some of those vendors being Linux Foundation members, of course).
Just curious -- what microkernel systems are available to run on modern home computers just in case one is tired of Linux and wanting to try something else?
MINIX and GNU Hurd both exist and work. Hardware support isn't great, however, so they might not work on the specific machine you have.
-- hendrik
On Monday 10. December 2018 09.02.57 Adam Van Ymeren wrote:
On December 10, 2018 8:48:56 AM EST, Hendrik Boom hendrik@topoi.pooq.com wrote:
Just curious -- what microkernel systems are available to run on modern home computers just in case one is tired of Linux and wanting to try something else?
MINIX and GNU Hurd both exist and work. Hardware support isn't great, however, so they might not work on the specific machine you have.
Dave may have mentioned this previously on this list, but here are his articles about trying out the Hurd:
"Exploring Alternative Operating Systems – GNU/Hurd" http://www.boddie.org.uk/david/www-repo/Personal/Updates/2016/2016-08-08.htm...
"Back to the Hurd" http://www.boddie.org.uk/david/www-repo/Personal/Updates/2017/2017-06-30.htm...
He also looked at Inferno, which isn't a microkernel-based system as far as I know, but it has some interesting characteristics:
"Exploring Alternative Operating Systems – Inferno" http://www.boddie.org.uk/david/www-repo/Personal/Updates/2016/2016-09-01.htm...
Another interesting general-purpose system is HelenOS, which is apparently microkernel-based and should work on modern home computers.
My mini-rant a few days ago discussed a tentative revival of the Hurd using L4-related technologies. Although the Hurd eventually got going by using the Mach microkernel (also seen in OSF/1, IBM Workplace OS, NeXTSTEP, Mac OS X, and so on), there have always been things in it that have been regarded as less than satisfactory. As far as I know now, there are certain limitations that conspire to produce pathological system behaviour, and this appears to have been difficult to remedy:
https://www.sceen.net/the-end-of-the-thundering-hurd/
The principal reason given for no longer pursuing things like Hurd-on-L4 seems to be that "we already have GNU/Linux" and that device drivers would need writing, alongside the suggestion that people could instead be working on other, supposedly more urgent things. The latter suggestion should be ignored: if someone wants to work on operating system fundamentals, they aren't necessarily going to find working on PDF reader software satisfying, especially if no-one is likely to be paying them for their volunteering, anyway.
I'm not going to claim that writing device drivers, driver frameworks, filesystems, and all the other things is easy. However, they do seem to get written over and over again, in alternative operating systems (of which there are many if you start looking) and in the Linux world itself. Moreover, it should be a more approachable activity in the microkernel world precisely because drivers are normal programs, not glued-in parts of the kernel framework that are subject to continual churn.
My current perception of the weaknesses of various microkernel technologies is that their developers mostly seem to be chasing niches, not general-purpose computing. That may be making some people good money, but it doesn't necessarily help the average computer user. Indeed, with things like Intel Management Engine, it can actually be harmful to users instead.
Paul
On Mon, Dec 10, 2018 at 1:49 PM Hendrik Boom hendrik@topoi.pooq.com wrote:
On Mon, Dec 10, 2018 at 12:45:42AM +0100, Paul Boddie wrote:
Of course, one could look more closely at microkernel-based systems for a possible remedy. Sadly, ever since the famous Torvalds versus Tanenbaum discussion, plenty of people cling to the remarks of the former as he sought to ridicule the work of the latter, oblivious to the fact that...
- Microkernel performance was always a tradeoff (acknowledged by the DMERT work done by Bell Labs in the 1970s and in other contemporary work).
- Performance has improved substantially over the years and in some cases wasn't that bad to begin with, either.
- Billions of devices have shipped with microkernels.
Some people also probably cling to the idea that Torvalds "won" his debate. Now that MINIX 3 runs in every Intel CPU supporting Management Engine functionality, it is clear who actually won, at least in terms of the "bottoms on seats" measure of success that the Linux kernel developers tend to emphasise over things like GPL compliance by vendors (some of those vendors being Linux Foundation members, of course).
Just curious -- what microkernel systems are available to run on modern home computers just in case one is tired of Linux and wanting to try something else?
seL4. one research group actually created a complete minimally-compliant POSIX subsystem on top of seL4, absolutely nothing to do with any operating system "per se", and then successfully ported an entire qt-based webkit browser *and all its dependencies* to run on it.
the "filesystem" was entirely flat. no subdirectories. so when i say "minimally compliant" it really really was "minimally compliant".
l.
So, as I (poorly) understand it, the idea of a "microkernel" is that each process/thread/application (I'm not quite sure which) gets its own kernel, sort of, and that this kernel is somewhat modular in that it only provides what functionality the application needs from it.
If I'm understanding that correctly -- which I very easily might not, I only have a somewhat abstract understanding of kernels to begin with, at best -- it seems to me that things like memory management suddenly become cooperative efforts, and that could very easily lead to what is typically non-technically referred to as a massive clusterf***.
Wouldn't it be easier/better, if you're going to rewrite code, to look at how the code is written now and find ways to make it more compact (or less sloppy, perhaps, as the case may be) while still providing the same functionality?
I recognize that we've come a quite long way from things like an Atari 2600, but when you consider the system resources of /that/ machine -- 4k ROM, 128 *bytes* of memory, a rather nastily-tempered, strict, and uncooperative graphics controller, and a CPU running at ~1MHz with no interrupt capability whatsoever -- and what all was done with it by coding tightly (and the occasional dirty trick or three) -- it seems to me, admittedly as a non-programmer, that there's a lot that could be done to streamline the behavior of modern operating systems and the applications that run within them.
For example, I'm typing this on a 32bit Win7 based HP Mini netbook with an Atom N450 CPU and 2gb RAM. It seems to me that playing Pandora Internet Radio in one browser window, with another browser window of nine tabs (and three of those are static JPEG images retrieved from a search engine, not proper webpages or anything), and with the file manager having one window open and another image displayed in an OS-resident image viewer -- that the described load ought not to very nearly lock the machine up entirely. And yet, it does -- which, it seems to me, indicates that the gentlefolk they're hiring over there in Redmond these days, simply do not understand how to code.
But, then, neither do I...
On Mon, Dec 10, 2018 at 09:11:49PM -0500, Christopher Havel wrote:
So, as I (poorly) understand it, the idea of a "microkernel" is that each process/thread/application (I'm not quite sure which) gets its own kernel, sort of, and that this kernel is somewhat modular in that it only provides what functionality the application needs from it.
If I'm understanding that correctly -- which I very easily might not, I only have a somewhat abstract understanding of kernels to begin with, at best -- it seems to me that things like memory management suddenly become cooperative efforts, and that could very easily lead to what is typically non-technically referred to as a massive clusterf***.
The idea of a microkernel is that it's the only thing running in a hardware-privileged mode, and that it does only what it absolutely has to do for the system to work. A big part of that is to manage the privileges given to other processes in the machine. File systems, networking, etc, etc, all operate outside the microkernel, and are protected against incursions from each other.
It's a security-based architecture. To a fair extent the system is protected against its own bugs.
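To make that concrete, here is a rough, purely illustrative C sketch; the message layout and function names are invented and belong to no real microkernel's API. The point is only that a filesystem can be an ordinary, unprivileged process to which the kernel does nothing more than deliver messages:

#include <stdio.h>
#include <string.h>

/* Invented message format.  In a real microkernel, the kernel's only job
 * is to copy a message like this from the client's address space to the
 * server's and schedule the server; it never interprets the contents. */
struct msg {
    int  op;            /* e.g. 1 = read */
    char path[64];
    char data[128];
};

/* The "filesystem" is just a normal, unprivileged process.  If it
 * crashes, the kernel and the other servers keep running. */
void fs_server_handle(struct msg *m)
{
    if (m->op == 1)
        snprintf(m->data, sizeof m->data, "contents of %s (stub)", m->path);
}

/* Client side: what would be a system call on a monolithic kernel
 * becomes "build a message, send it, wait for the reply". */
int main(void)
{
    struct msg m = { .op = 1 };
    strcpy(m.path, "/etc/motd");

    fs_server_handle(&m);   /* stand-in for the kernel's IPC send/receive */

    printf("read returned: %s\n", m.data);
    return 0;
}

The IPC is faked with a direct function call so the sketch stays self-contained and runnable; in a real system the two sides would be separate processes and the kernel would only move the bytes between them.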
Wouldn't it be easier/better, if you're going to rewrite code, to look at how the code is written now and find ways to make it more compact (or less sloppy, perhaps, as the case may be) while still providing the same functionality?
That would be good too, but is independent of whether you want to protect the system against its own errors. Making code less sloppy will of course also reduce the number of bugs there are to defend against.
I recognize that we've come a quite long way from things like an Atari 2600, but when you consider the system resources of /that/ machine -- 4k ROM, 128 *bytes* of memory, a rather nastily-tempered, strict, and uncooperative graphics controller, and a CPU running at ~1MHz with no interrupt capability whatsoever -- and what all was done with it by coding tightly (and the occasional dirty trick or three) -- it seems to me, admittedly as a non-programmer, that there's a lot that could be done to streamline the behavior of modern operating systems and the applications that run within them.
For example, I'm typing this on a 32bit Win7 based HP Mini netbook with an Atom N450 CPU and 2gb RAM. It seems to me that playing Pandora Internet Radio in one browser window, with another browser window of nine tabs (and three of those are static JPEG images retrieved from a search engine, not proper webpages or anything), and with the file manager having one window open and another image displayed in an OS-resident image viewer -- that the described load ought not to very nearly lock the machine up entirely. And yet, it does -- which, it seems to me, indicates that the gentlefolk they're hiring over there in Redmond these days, simply do not understand how to code.
I agree, it shouldn't lock up.
-- hendrik
On Tue, Dec 11, 2018 at 2:12 AM Christopher Havel laserhawk64@gmail.com wrote:
So, as I (poorly) understand it, the idea of a "microkernel" is that each process/thread/application (I'm not quite sure which) gets its own kernel, sort of, and that this kernel is somewhat modular in that it only provides what functionality the application needs from it.
not quite: a microkernel provides the absolute minimum functionality to manage tasks and communication _between_ tasks. as a result they're typically extremely small, and usually are the only place where assembly code is needed.
L4ka ports of the linux kernel basically turn the entire linux "kernel" effectively into a user-space-like "application". you now have *three* levels: L4ka microkernel, linux "kernel", GNU/Linux OS.
the GNU/Hurd's microkernel inter-process communication is actually sufficiently abstracted such that processes may actually be run *off-machine*, by dropping in a full-on network-based RPC (remote procedure call) mechanism. hypothetically this would allow full migration of processes from one machine to another, without any interruption in the running of the user applications as they migrate.
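As a toy illustration of that kind of location transparency (the names and structures below are invented, and have nothing to do with the Hurd's or Mach's actual interfaces), the client here is written against a transport-neutral send routine and neither knows nor cares whether the request is handled in-process or would be forwarded over a network:

#include <stdio.h>
#include <string.h>

struct request { char text[64]; };

/* Local delivery: hand the request straight to an in-process handler. */
static int send_local(struct request *r)
{
    printf("local server got: %s\n", r->text);
    return 0;
}

/* Remote delivery: a real stub would marshal the request and write it to
 * a socket; here it only reports what it would do, to stay self-contained. */
static int send_remote(struct request *r)
{
    printf("(would forward %zu bytes over the network): %s\n", sizeof *r, r->text);
    return 0;
}

int main(void)
{
    /* The client only ever calls through this transport-neutral pointer... */
    int (*send)(struct request *) = send_local;
    struct request r;
    strcpy(r.text, "open /etc/motd");
    send(&r);

    /* ...so "migrating" the server elsewhere is just a matter of swapping
     * in a different stub, without touching the client code. */
    send = send_remote;
    send(&r);
    return 0;
}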
l.
On Tue, 11 Dec 2018 03:05:45 +0000 Luke Kenneth Casson Leighton lkcl@lkcl.net wrote:
On Tue, Dec 11, 2018 at 2:12 AM Christopher Havel laserhawk64@gmail.com wrote:
So, as I (poorly) understand it, the idea of a "microkernel" is that each process/thread/application (I'm not quite sure which) gets its own kernel, sort of, and that this kernel is somewhat modular in that it only provides what functionality the application needs from it.
not quite: a microkernel provides the absolute minimum functionality to manage tasks and communication _between_ tasks. as a result they're typically extremely small, and usually are the only place where assembly code is needed.
L4ka ports of the linux kernel basically turn the entire linux "kernel" effectively into a user-space-like "application". you now have *three* levels: L4ka microkernel, linux "kernel", GNU/Linux OS.
the GNU/Hurd's microkernel inter-process communication is actually sufficiently abstracted such that processes may actually be run *off-machine*, by dropping in a full-on network-based RPC (remote procedure call) mechanism. hypothetically this would allow full migration of processes from one machine to another, without any interruption in the running of the user applications as they migrate.
l.
That sounds really cool. No more FF Pocket or cloud; just use MicroLinux.
Sincerely, David
On Tue, Dec 11, 2018 at 2:12 AM Christopher Havel laserhawk64@gmail.com wrote:
For example, I'm typing this on a 32bit Win7 based HP Mini netbook with an Atom N450 CPU and 2gb RAM. It seems to me that playing Pandora Internet Radio in one browser window, with another browser window of nine tabs (and three of those are static JPEG images retrieved from a search engine, not proper webpages or anything), and with the file manager having one window open and another image displayed in an OS-resident image viewer -- that the described load ought not to very nearly lock the machine up entirely. And yet, it does -- which, it seems to me, indicates that the gentlefolk they're hiring over there in Redmond these days, simply do not understand how to code.
they do... they're "hampered" by some design decisions that relate to strategically significant "convenience", at the MSRPC (more specifically the DCOM) level.
DCOM is a way to transparently treat remote objects as if they were local. however it requires that the entire function call be serialised (marshalled) and unserialised (unmarshalled).
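For a concrete picture of what "marshalled" means, here is a hand-rolled generic sketch in C (not DCOM's actual NDR wire format; the function names are invented): the arguments of a call like add(a, b) are flattened into a byte buffer that can cross a process or network boundary, and rebuilt on the far side before the real call is made.

#include <stdio.h>
#include <string.h>
#include <stdint.h>

/* Marshal: flatten the arguments into a flat buffer.  The layout here is
 * invented (two 32-bit ints in native byte order), purely for illustration. */
static size_t marshal_add(uint8_t *buf, int32_t a, int32_t b)
{
    memcpy(buf,     &a, sizeof a);
    memcpy(buf + 4, &b, sizeof b);
    return 8;
}

/* Unmarshal on the receiving side: rebuild the arguments, make the real call. */
static int32_t unmarshal_and_call(const uint8_t *buf)
{
    int32_t a, b;
    memcpy(&a, buf,     sizeof a);
    memcpy(&b, buf + 4, sizeof b);
    return a + b;                    /* the "remote" add() */
}

int main(void)
{
    uint8_t buf[8];
    marshal_add(buf, 2, 40);                        /* caller's side */
    printf("%d\n", (int)unmarshal_and_call(buf));   /* server's side: prints 42 */
    return 0;
}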
for *local* systems (local procedure calls, over a transport named "ncalrpc"), someone came up with the brilliant and simple idea of using shared memory. instead of doing a full serialisation, the data structures would be in shared memory, at a globally-fixed address.
the problem is: the *amount* of shared memory required effectively consumes a whopping 50% of available memory. on 32-bit systems they subdivided the memory into 2 halves, which in turn meant that *all* 32-bit applications were hard-limited to a maximum of 2GB of physical RAM, where the other 2GB absolutely had to be hard-reserved onto *real* RAM.
as you only have 2GB of RAM, and are running a modern web browser, a massive chunk of that physical 2GB will be reserved for global fixed-address shared memory, which in turn leaves pretty much absolutely nothing left as far as running web browsers is concerned.
thus, that netbook will be absolutely thrashing its nuts off, on swap-space.
bottom line: what the f*** are you doing running windows!!
:)
l.
@ All - thank you for a better understanding of microkernels. I learned more than a few things there.
@ Luke, re Win - it is one of two Win boxes I maintain. The other is a Dell XPS 15Z with the more odious Windows 10, which I need because I use a graphics application called CorelDRAW.
I have attempted the use of Inkscape - but, "bless their hearts" (as one often says, in my geographic location) they have no idea how to make a human-usable UI. The one time I tried it, it gave me fits and left me with far more questions than answers...
I maintain the Win7 netbook because of a promise I made to a close friend - he spent his childhood in front of various computers with the Commodore logo on them - and he recently gave me that collection, with the request that I image the rather extensive disk library that it came with, so that if he ever wanted to fire up an emulator and muck around like a kid again, he could. Normally this requires a real DOS computer and specialized software and cabling - as Commodore's disk drives used a different encoding scheme, at the magnetic level, from what PC drives use - but I recently acquired a device called a "ZoomFloppy", which enables the use of more modern equipment to do the PC side of the job - you still need a Commodore drive, mind you, but you are no longer mired in the world of the early 1990s (at the latest!) otherwise, which dramatically reduces the number of potential points of failure.
...as for why I'm using the netbook for anything else - that comes down to three things. One, I like sitting in my front room right now better than spending all day at the desk in the bedroom - which I desperately need to clean off. Two, one of my DIY laptops recently died spectacularly, and I'm awaiting parts for a rebuild. Three, the netbook offers a convenient stand-in for the dead laptop and is easily set up in my front room, whereas relocating the much larger, remaining DIY laptop from the bedroom desk would be a considerable effort indeed.
Also - I do not, in the given context, understand the term "marshalling" as you used it - could you elaborate, please...?
On Tuesday 11. December 2018 01.46.29 Luke Kenneth Casson Leighton wrote:
seL4. one research group actually created a complete minimally-compliant POSIX subsystem on top of seL4, absolutely nothing to do with any operating system "per se", and then successfully ported an entire qt-based webkit browser *and all its dependencies* to run on it.
Was this related to Genode or something else? I know that Genode supports/supported a WebKit browser - maybe Arora - and that it supports a range of microkernels, although the focus seems to be on using their favoured Nova microkernel instead these days [*].
the "filesystem" was entirely flat. no subdirectories. so when i say "minimally compliant" it really really was "minimally compliant".
That would be an odd decision to make given that it would need to have a filesystem of some kind and that beyond a rudimentary memory-based temporary filesystem, pretty much all of the ones out there have directories. If they'd gone to the trouble of porting a WebKit browser, porting an established filesystem would surely have been straightforward.
L4Re has virtual filesystem support, but it seems pretty limited in a number of ways, and the bulk of the heavy lifting needed to support dynamic linking seems to be left to a "rom" filesystem. But even that supports directories. (Of course, L4Re is not aimed at seL4, but there are fairly comparable things for seL4, I believe.)
Paul
[*] As is increasingly customary with various projects, I hesitate to depict Genode in any particular way, even after digging through the copious promotional/educational materials so that I might precipitate a coherent impression, in case it gets perceived as misrepresenting something or someone.