[Arm-netbook] Plug computer and FreedomBox-related devices

Luke Kenneth Casson Leighton lkcl at lkcl.net
Thu Sep 17 13:13:44 BST 2015


On Thu, Sep 17, 2015 at 12:04 PM, Paul Boddie <paul at boddie.org.uk> wrote:
> On Thursday 17. September 2015 01.34.15 Luke Kenneth Casson Leighton wrote:
>> On Wed, Sep 16, 2015 at 11:49 PM, Paul Boddie <paul at boddie.org.uk> wrote:
>> >
>> > Sure, I understand that. But what worries me a little is that experience
>> > isn't being gained to possibly refine the standard
>>
>>  paul, you misunderstand the concept of a simple long-term standard.
>
> Not really. What I meant by "refined" is actually this:
>
> [...]
>
>>  first revision was to remove SATA and replace it with a 2nd USB2.
>> second revision was to add VREFTTL, add SD/MMC and UART,  third (or
>> possibly still part of the 2nd) was to reduce 24-pin RGB/TTL to 18-pin
>> RGB/TTL and use the 4 spare lines for an SPI interface, also USB3 was
>> added at some point.  PWM and an extra EINT were also added.  the
>> fourth - and almost certainly final revision - has been very recent:
>> removal of Ethernet, upgrading to being able to do USB 3.1, as well as
>> add 2 more EINT lines and 3 more GPIOs.
>>
>>  those interfaces have all been very carefully considered, especially
>> when developing the ICubeCorp IC3128 CPU Card, where, due to its low
>> pincount and being a QFP, there's *literally* only 2 spare unused pins
>> left on the *entire* processor that don't go to the EOMA68 interface
>> or the SD/MMC boot card.
>
> All of this has happened before we get to the point where we call it a final
> standard, but what worries me is that there may be an application that hasn't
> yet been considered because the collective experience of trying to make
> devices using it is not broad enough.

 well, as well as the section on the elinux.org web site analysing and
tracking half a dozen different standards and their limitations,
that's what the section on "interfaces" in the white paper is for: to
go through the past (several decades of computing), learn from it,
predict where it's going, track that for 4 years, adjust the tracking
to make sure it fits, then re-predict, re-confirm, re-track, and at
some point say "ok, done".

 it's precisely the knowledge of prior failed standards, paul, that's
kept me from going, like a drunken naive wannabe computaa n00b, "okaay
yeahh i wanna do a standurd now, let's put some inturfaciz in 4 fun,
make some lolli y not?" - which, translated into english, reads
"create something, then throw it out the door in under a year and hope
it works".  *NO*.

 i recommend reading the sections of the white paper covering the
standard's development and justification (so that i don't have to
repeat it here), because i go into some depth to justify each of the
decisions that were made, including analysing and demonstrating how
long each interface has been around, and how long it is likely to stay
around.


> I must admit that this is coloured by my interests in "retrocomputing" where
> one can look at products that were made and then consider how they might have
> been improved, even by a small amount, in a way that might have made them a
> lot more successful.

 well then, you would enjoy the anecdotes that i included in the white
paper, which include some historical and hilarious examples of exactly
that.

> At the time, you'd have some company or other designing
> and manufacturing their products to a tight schedule (usually to hit the
> market at the best time of year), but there would be limitations discovered by
> the customers that would limit the competitive lifetime of the product.

 ... or, in the case of standards, not enough thought went into them,
so they cause utter confusion and mental melt-down in the minds of
adopters.  i am not joking: the only decent standards which do not
cause such complete melt-down are COM-Express and PC/104.  there's no
"optionalitis" in those standards (except COM-Express module sizes,
and that's ok).

 .... https://en.wikipedia.org/wiki/PC/104#Potential_Compatibility_Issues
 aaargh noooo, PC/104 is.... arg.  of course, because it's based
around the IBM PC, which needs 12V, 5V and 3.3V, it's all gone
tits-up: some carrier boards do not properly supply all the required
voltages, so the CPU boards themselves provide converters to
compensate... which of course fucks everything up, because the
difference between the I/O levels is enough to draw current one way or
the other and burn out components.
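 as a back-of-the-envelope illustration of why that mismatch is
destructive (the figures below are mine, purely illustrative, and not
from the PC/104 spec): when two regulators end up tied to the same
rail, the difference between their output voltages drives a current
through whatever low-impedance path joins them, and even a small
difference can exceed what the parts in that path were sized for.

```python
# Illustrative sketch only: rough example figures, not from the PC/104 spec.
# Two regulators fighting over one rail: the contention current is set by
# the voltage difference and the (very low) resistance of the path between
# them (regulator output impedances plus board traces).

def contention_current(v_a, v_b, path_resistance_ohms):
    """Current (A) driven through a shared rail by a regulator mismatch."""
    return abs(v_a - v_b) / path_resistance_ohms

# Hypothetical numbers: carrier supplies 3.30V, the CPU board's
# "compensating" converter outputs 3.42V, and the total path
# resistance is about 0.2 ohm.
i = contention_current(3.30, 3.42, 0.2)
print(f"{i:.2f} A")  # 0.60 A flowing through parts never sized for it
```

 0.12V of disagreement across 0.2 ohm is already over half an amp of
continuous current, which is how components quietly cook themselves.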

 this is *EXACTLY* why the Certification Mark is so damn important, so
as to be able to stomp from a Great Height on anyone not properly
following the EOMA68 standard.


> In any case, from what you've written, I guess we'll find out for ourselves
> soon enough about how successful the refinement process has been. Not that I
> think that it hasn't been successful enough, however.
>
> [...]
>
>> > Still, I wonder what those of us reading this list might be able to do to
>> > move the effort forward in our own way.
>>
>> that would be great.
>
> Any suggestions? :-)

 a couple were in my reply (two product ideas), although, as you hint,
most stuff now has to wait until i've got the prototypes done for the
jz4775 and a20 cpu cards and the two products, the microdesktop and
the laptop_15in.

l.
