[Arm-netbook] EOMA server standard

Dr. David Alan Gilbert dave at treblig.org
Sat Oct 27 12:51:45 BST 2012


* luke.leighton (luke.leighton at gmail.com) wrote:
> On Wed, Oct 24, 2012 at 7:13 PM, Dr. David Alan Gilbert
> <dave at treblig.org> wrote:
> > * luke.leighton (luke.leighton at gmail.com) wrote:
> >> for the pin-outs i figured that at least one 10GBase-T interface (8
> >> pins plus 8 GND spacers) would be acceptable, as would SATA-3 (4 pins
> >> plus 4 GND spacers).  that's 24 pins already (!).  PCI-Express 4x is
> >> 64 pins.  that's up to 88 *already*.  adding in USB3, it's not
> >> unreasonable to imagine this would be a 100-pin standard.
> >
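(As an aside, to make the pin arithmetic quoted above explicit - here is a
rough Python tally, taking the per-interface signal/GND counts at face
value from the quote rather than from any spec:)

# Rough tally of the pin budget sketched in the quote above.
# Per-interface counts come from the quoted email, not from the actual
# EOMA draft or the interface standards themselves.
interfaces = {
    "10GBase-T": 8 + 8,   # 8 signal pins + 8 GND spacers
    "SATA-3":    4 + 4,   # 4 signal pins + 4 GND spacers
    "PCIe x4":   64,      # as quoted for a 4-lane slot
}

running = 0
for name, pins in interfaces.items():
    running += pins
    print(f"{name:10s} {pins:3d} pins  (running total {running})")

# 16 + 8 + 64 = 88, leaving ~12 pins of a 100-pin connector for USB3
# and anything else - hence the '100-pin standard' guess above.
print(f"headroom in a 100-pin connector: {100 - running} pins")
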
> > The problem with 10G is that there are at least 3 copper and 2 fibre
> > standards to choose from, so choosing 10GBase-T may not be right.  Most
> > systems have SFP pluggable modules so you can choose the type you use.
> 
>  ahh, now that could be something to work with.
> 
> > While fibre is currently common, I think that's mostly because it's used
> > for inter-rack/inter-site links; you don't need it for the machines within a rack.
> 
>  i'm kinda getting the impression that 10GbE is still complete
> overkill (e.g. suited to inter-rack).

I'd disagree - but that really depends on what you're trying to do here; so I'm
going to have to ask you the Granny question again (sorry!) - i.e.:

   What's eoma-server aiming for?  What type of server are you looking to build
- is this an 'under the stairs' home server, a 'small office' server where
it's one of a handful of servers, or a 'data centre' thing where it's
expected to be one of a zillion similar servers (like the baserock or
various calxeda systems)?

  And based on the previous question, what do you physically need:
is the PCMCIA-type module really needed, or is just standardising an
edge-connector-based card fine?  Certainly for something in a datacentre
environment, where people are used to handling DIMMs, I don't see the
advantage of the module unless there is a lot in the module.

  As for 10G being overkill: well, for a single-core ARM that's probably
true, but for a quad-core A15 probably not, and for a 1U's worth of
however many A15s you can squeeze in, certainly not - that's a heck
of a lot of compute.  Another way to think of 10G is as 'more than 1G'
- so if your card could want 2 or 3Gbps then a 10G interface may well be
worth it.
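
(Back-of-the-envelope, and every number below is an assumption for
illustration rather than a measurement - but it shows how quickly 'more
than 1G' happens once you pack cards into a chassis:)

# Very rough aggregate-bandwidth estimate for a 1U of compute cards.
# Every figure here is an illustrative assumption, not a measurement.
cards_per_1u  = 12     # assumed quad-A15 cards per 1U chassis
gbps_per_card = 2.5    # assumed useful traffic per card (storage + net)

aggregate = cards_per_1u * gbps_per_card
print(f"aggregate demand: {aggregate:.0f} Gbps")

# 1GbE per card caps each card well below what it could push;
# a 10G interface per card leaves room to spare.
print(f"shortfall on 1GbE per card: {gbps_per_card - 1:.1f} Gbps")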

Similarly, there is the question of what you want the interface for; in a
data-centre type thing you don't want a single interface; you're going to want
to be able to get at least 2, whether that's for some resilience when one
network goes pop, or one for management, one for your storage, one for
the real network, one for an external interface, etc.
(and yes, some of those fit in VLANs, some maybe not).
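
(To make the 'at least 2' point concrete, here's one hypothetical way the
roles could be carved up across two physical ports plus VLANs - purely
illustrative, none of this is in any EOMA draft:)

# Hypothetical carve-up of network roles across two physical ports plus
# VLANs on one card; role names and assignments are made up for the example.
roles = {
    "management": {"port": 0, "vlan": 10},    # can share a port via a VLAN
    "storage":    {"port": 0, "vlan": 20},    # ditto, if bandwidth allows
    "internal":   {"port": 1, "vlan": None},  # the 'real' network, untagged
    "external":   {"port": 1, "vlan": 30},    # some sites want this on its
                                              # own physical port instead
}

for role, where in roles.items():
    tag = f"VLAN {where['vlan']}" if where["vlan"] is not None else "untagged"
    print(f"{role:10s} -> port {where['port']} ({tag})")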



> > There is an electrical standard for the connection prior to the PHY; I
> > think that's XAUI, though I'm not sure if that's the same as the SFP
> > connections.
> 
>  XAUI's what the calxeda server SoC has.

Also see the baserock stuff ( http://www.baserock.com/servers ).

Dave
-- 
 -----Open up your eyes, open up your mind, open up your code -------   
/ Dr. David Alan Gilbert    |       Running GNU/Linux       | Happy  \ 
\ gro.gilbert @ treblig.org |                               | In Hex /
 \ _________________________|_____ http://www.treblig.org   |_______/


