[Arm-netbook] EOMA-200 30W module standard

luke.leighton luke.leighton at gmail.com
Sat Apr 6 21:34:07 BST 2013


On Sat, Apr 6, 2013 at 8:16 PM, Dr. David Alan Gilbert <dave at treblig.org> wrote:
> * luke.leighton (luke.leighton at gmail.com) wrote:
>> moving forward from the EOMA-68 standard, which has a maximum limit of
>> around 3.5 watts, we felt that a larger more power-intensive standard
>> should be created, and have begun a preliminary draft:
>>
>> http://elinux.org/Embedded_Open_Modular_Architecture/EOMA-200
>>
>> as with all EOMA standards, there are NO and there will never be ANY
>> optional pins.  to compensate for differences in CPUs, SoCs and
>> designs however, the front-face of the module is permitted to have any
>> number of connectors that will fit into the two slots - 30mm x 14mm and
>> 65mm x 14mm.
>
> Some thoughts and questions:
>   1) I'm not sure I'd have gone for specifying quite as much backward compatibility;
> do you really need to specify that it can do 10Mbps/etc - I think it might
> be reasonable to put a higher constraint; i.e. it must support 100Mbps
> with support for anything below it being optional.

 nothing is permitted to be optional - period.  i've learned that
lesson: optional features are divisive.

 short answer: yes.

 for example in my search for gigabit general-purpose
memory-controller ethernet ICs (AX88100 was what i found) i came
across a *ten* mbit/sec GPMC ethernet IC :)  it's typically used by
people doing FPGA circuits, because the average low-cost FPGA can't
handle 100mbit/sec.

 ... which brings us onto the point that these standards *do not* have
to have a processor - they don't even have to have any kind of
processing *at all* (e.g. the eoma68 pass-through card).

 .... which brings us to the conclusion that to limit the
backwards-compatibility would be to limit the scope of the standard.

 so yes.

 the other point: i don't know of a single processor that *doesn't*
support 10mbps ethernet.... so why limit it?
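 (to make that concrete: supporting 10mbps costs nothing in practice
because ethernet auto-negotiation resolves the link to the highest
mode both partners advertise.  a rough python sketch - the function
name and the simplified priority table are mine, an illustration
rather than anything from the spec text:)

```python
# illustrative sketch of IEEE 802.3 auto-negotiation resolution:
# each side advertises its abilities and the link trains to the
# highest mode common to both.  simplified priority table, highest
# first (my own simplification, not the spec's full table).
PRIORITY = ["1000BASE-T", "100BASE-TX", "10BASE-T"]

def resolve(local, partner):
    """return the highest mode both sides advertise, or None."""
    for mode in PRIORITY:
        if mode in local and mode in partner:
            return mode
    return None

# a gigabit-capable module meeting a 10mbps-only FPGA board still
# gets a working link:
print(resolve({"1000BASE-T", "100BASE-TX", "10BASE-T"}, {"10BASE-T"}))
```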

 there are however quite a number of SoCs that limit their USB2 ports
to 480mb/sec (hi speed) only.  under these circumstances it is
*imperative* that people be aware that they are required to support
USB 1.1 peripherals across *all* ports.

 and, therefore in that instance, they would need to put in a
high-speed USB hub that compensated for the SoC's lack of support for
USB 1.1 speeds.
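 (a hedged toy model of why the hub fixes it - function name and
arguments are mine, not real USB code: a hub's Transaction Translator
converts hi-speed split transactions into full/low-speed traffic, so
the SoC port never has to speak USB 1.1 itself:)

```python
# illustrative model (not real USB code): a hub with a Transaction
# Translator (TT) bridges full/low-speed devices onto a hi-speed
# upstream link, which is how an on-module hub compensates for a
# hi-speed-only SoC port.

def device_usable(host_speeds, device_speed, hub_with_tt=False):
    """true if the device can enumerate, given the port's speeds."""
    if device_speed in host_speeds:
        return True
    # the hub's TT handles the slower device; the SoC only ever
    # sees hi-speed traffic on its upstream port.
    return hub_with_tt and "hi" in host_speeds

# hi-speed-only SoC port, full-speed (USB 1.1) keyboard:
print(device_usable({"hi"}, "full"))                    # → False
print(device_usable({"hi"}, "full", hub_with_tt=True))  # → True
```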

 so... yes!

>   2) Does the ethernet interface do auto crossover? It's a required feature of GigE
> interfaces and is implemented by some implementations at lower speeds.

 mmm.... i'd kinda left that out.  i wasn't aware that some
10/100mbit/sec [only] PHYs do auto-crossover.  but if it's a required
feature of GigE then that's fine: you'd expect it to be there.

>   3) I'm not sure the PCIe is specified strongly enough; is it required that
> on a device which provides 4 lanes that it can be used as 4 separate interfaces?

 no - that's too complicated.  i thought about it, and no.

> (or 2x2 or 3 as 2x1+1x2 ?)

 no - definitely not.  the reason is this: look at e.g. the iMX6 - it
only has 1x PCIe.  other SoCs such as the exynos 5 have
reconfigurable PCIe, usable as one 4x or two 2x.  but the *absolute*
minimum i.e. lowest common denominator is going to be a single-lane
PCIe 1x at Gen 1.0.

 so, that has to be the minimum standard.  anything else is a burden
on *all* SoCs / CPUs.  can you *guarantee* that every single SoC and
CPU will have the level of reconfigurability you're talking about?

 because from what i can gather it's only certain specialist SoCs that
can do this level of reconfiguration.

 there is however one IC i just found today, which is a USB3-to-PCIe
bridge: it can do 2 lanes, reconfigurable to two 1x lanes.  now, if
you were to specify that the standard could do that kind of
reconfiguration, then even if the latest-and-greatest Intel SoC had a
PCIe Gen 4 x4 interface but could not do reconfiguration, you would
be forced to abandon the use of that fantastic interface, and would
be forced to put in one of those slower USB3-to-PCIe bridge ICs!

 so no - absolutely not.  one lane, up-negotiable.  and that can be
negotiated automatically by the host and the devices, which is the
whole point.
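 (to illustrate that "negotiated automatically" point - a hedged toy
model, function name and arguments are mine: during link training a
PCIe link settles on whatever width and generation both ends share,
which is why a 1x Gen 1 minimum always interoperates:)

```python
# toy model of PCIe link training (my own simplification): the link
# trains to the widest width and highest generation supported by
# BOTH the host and the card, so a 1-lane Gen 1 floor always works.

def negotiate_link(host_lanes, host_gen, card_lanes, card_gen):
    """return the (lanes, generation) the link would train to."""
    return min(host_lanes, card_lanes), min(host_gen, card_gen)

# a Gen 2 x4 host slot with a Gen 1 x1 module trains to Gen 1 x1:
print(negotiate_link(4, 2, 1, 1))  # → (1, 1)
```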

 the whole point of the EOMA standards is: there are *no* optional
re-uses of pins.  absolutely none. without fail, absolutely none.  and
"reconfiguration" is, unfortunately, something that turns out to be
"optional reuse of pins".  so - it's out.


> Does the card always provide a PCIe host interface?

 yes, that's a requirement.  hm, i should say that.  thanks for spotting that.

> Are there minimum address space guarantees I can rely on?

 i don't think so!  my understanding is that PCIe inherits from PCI,
which inherits from the AT / XT memory-bus architecture, so any
limitation on address space is going to be either "32-bit or 64-bit",
and i'd be inclined to guess that even 32-bit would be enough.
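 (for context: address-space needs aren't guaranteed by the spec -
they're discovered per-device at enumeration time.  a hedged python
sketch of the standard write-ones-and-read-back BAR-sizing probe;
the function name is mine and this is an illustration, not real
enumeration code:)

```python
# sketch of standard PCI/PCIe BAR sizing: software writes all-ones
# to a Base Address Register, reads it back, masks off the read-only
# flag bits, and the two's complement of what remains is the region
# size the device is requesting.

def bar_region_size(readback, is_io=False):
    """decode the requested region size from a BAR probe readback."""
    mask = 0x3 if is_io else 0xF           # flag/type bits, read-only
    base = readback & ~mask & 0xFFFFFFFF
    return (~base + 1) & 0xFFFFFFFF        # two's complement -> size

# a device asking for 64 KiB of memory space reads back 0xFFFF0000:
print(bar_region_size(0xFFFF0000))  # → 65536
```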

> I don't know enough PCIe
> to know what else it needs to ask for but I suspect there is some more.

to be honest, me neither.  worth finding out.

>   4) I'm not sure it's that helpful to specify the PCIe is a requirement but
> then suggest bodging on a USB->PCIe bridge to make use of a SoC that doesn't
> have real PCIe; in many designs the card would be useless

 yep.  tough.  it's a free market: if people create such a module that
actually turns out to be lower-cost (because the bridge ICs turn out
to be lower-cost than a SoC which has PCIe built-in) and it turns out
to be shit, then people will learn the all-important lesson that "you
get what you pay for".

> - imagine a
> chassis that had put all its interesting devices on PCIe (maybe a fast graphics
> card/storage controller etc) - and then gets a CPU that's actually connected
> via a USB connector - so in terms of selecting an EOMA card you'd find that
> your choice of card was correct but useless.

 yep.  you'd take it back to the shop, demand a refund, and/or would
buy a better module.

 on the other hand, it *might* be tolerable for your budget.  so i
can't rule out the possibility that *some* people might be happy with
(or not even notice) a low-cost budget system operating at speeds
slower than they could have had for a little more money - speeds they
didn't even know were available.

err that was a complicated sentence, but i'm sure you get the point :)

anyway.  yes.  thank you david.  i'll make sure it's in the spec that
the PCIe controller must be in "host" mode.

l.
