From richard.wilbur at gmail.com Thu Mar 1 20:33:30 2018
From: richard.wilbur at gmail.com (Richard Wilbur)
Date: Thu, 1 Mar 2018 13:33:30 -0700
Subject: [Arm-netbook] Testing: GPIO
In-Reply-To:
References:
Message-ID:

On Wed, Feb 28, 2018 at 2:41 PM, Luke Kenneth Casson Leighton wrote:
> On Wed, Feb 28, 2018 at 9:09 PM, Richard Wilbur
> wrote:
>> After realizing that you mentioned all 8 GPIO lines were on the 20-pin
>> expansion header J5 in the microdesktop case, I consulted the
>> microdesktop schematic for clues.
>>
>> I suspect the UART and EOMA I2C pins should be left to those functions.
>
> yehyeh. UART implicitly tested "if console works it's probably
> good" and I2C with a bus scan, i2c-utils, if 0x51 EEPROM shows up,
> it's good.
>
>> I have added tables to the "Testing"[*] page under the "GPIO" section
>> with my nominations for which pins to test and their mapping back to
>> A20 register bits.
>
> awesome. it'll have to be done manually for now,

Are you suggesting that the testing "will have to be done manually"?
What is the time frame of "for now"?

I'm trying to figure out which pins of the expansion header we want to
test, which pins of the processor those correspond to, and thus which
registers and bits of those registers we need to manipulate. That
determines how I need to interact with the GPIO driver.

>> Luke, does this match your understanding of the GPIO pins to test?
>
> yep - GPIO_19,20,21 missing.

In the following table (created while I was trying to figure out which
GPIO were connected in the EOMA standard) you will see that EOMA nets
GPIO(18)/EINT3, GPIO(19), GPIO(20), and GPIO(21) are not connected on
the microdesktop schematic v1.7 from J14. Thus they are at J14 but
not available anywhere else in the microdesktop v1.7.

1342 Fri 23 Feb 2018:
EOMA        A20              DS113       microdesktop
Net Name    ball   register  CON15 pin   J14 pin
PWM         B19    PI3       43          22        GPIO(10)
EINT0       A6     PH0       63          32        GPIO(11)
EINT1       B6     PH1       17          9         GPIO(16)
EINT2       B2     PH14      44          56        PWFBOUT GPIO(17)
EINT3       C2     PH18      39          20        NC GPIO(18)
GPIO(19)    A1     PH15      40          54        NC
GPIO(20)    C1     PH17      41          21        NC
GPIO(21)    B1     PH16      42          55        NC

We could obviously create a v1.8 schematic for the microdesktop and
connect these EOMA nets to a header, if desired.

From lkcl at lkcl.net Thu Mar 1 20:50:00 2018
From: lkcl at lkcl.net (Luke Kenneth Casson Leighton)
Date: Thu, 1 Mar 2018 20:50:00 +0000
Subject: [Arm-netbook] Testing: GPIO
In-Reply-To:
References:
Message-ID:

--- crowd-funded eco-conscious hardware: https://www.crowdsupply.com/eoma68

On Thu, Mar 1, 2018 at 8:33 PM, Richard Wilbur wrote:
> On Wed, Feb 28, 2018 at 2:41 PM, Luke Kenneth Casson Leighton
> wrote:
>> On Wed, Feb 28, 2018 at 9:09 PM, Richard Wilbur
>> wrote:
>>> After realizing that you mentioned all 8 GPIO lines were on the 20-pin
>>> expansion header J5 in the microdesktop case, I consulted the
>>> microdesktop schematic for clues.
>>>
>>> I suspect the UART and EOMA I2C pins should be left to those functions.
>>
>> yehyeh. UART implicitly tested "if console works it's probably
>> good" and I2C with a bus scan, i2c-utils, if 0x51 EEPROM shows up,
>> it's good.
>>
>>> I have added tables to the "Testing"[*] page under the "GPIO" section
>>> with my nominations for which pins to test and their mapping back to
>>> A20 register bits.
>>
>> awesome. it'll have to be done manually for now,
>
> Are you suggesting that the testing "will have to be done manually"?

the mapping created manually. sorry, i was thinking in terms of
device-tree fragments... which don't exist yet.

> What is the time frame of "for now"?
when testing is required.

> I'm trying to figure out which pins of the expansion header we want to
> test, which pins of the processor those correspond to, and thus which
> registers and bits of those registers we need to manipulate. That
> determines how I need to interact with the GPIO driver.

yehyeh. and determining that interaction "has to be done manually".
if the devicetree fragment existed it would be a much simpler matter.

>>> Luke, does this match your understanding of the GPIO pins to test?
>>
>> yep - GPIO_19,20,21 missing.
>
> In the following table (created while I was trying to figure out which
> GPIO were connected in the EOMA standard) you will see that EOMA nets
> GPIO(18)/EINT3, GPIO(19), GPIO(20), and GPIO(21) are not connected on
> the microdesktop schematic v1.7 from J14. Thus they are at J14 but
> not available anywhere else in the microdesktop v1.7.

yep, forgot that. why the heck did i leave them out?? duur...

> 1342 Fri 23 Feb 2018:
> EOMA        A20              DS113       microdesktop
> Net Name    ball   register  CON15 pin   J14 pin
> PWM         B19    PI3       43          22        GPIO(10)
> EINT0       A6     PH0       63          32        GPIO(11)
> EINT1       B6     PH1       17          9         GPIO(16)
> EINT2       B2     PH14      44          56        PWFBOUT GPIO(17)
> EINT3       C2     PH18      39          20        NC GPIO(18)
> GPIO(19)    A1     PH15      40          54        NC
> GPIO(20)    C1     PH17      41          21        NC
> GPIO(21)    B1     PH16      42          55        NC
>
> We could obviously create a v1.8 schematic for the microdesktop and
> connect these EOMA nets to a header, if desired.

yes. damn. i think it's probably that i didn't update the
micro-desktop schematic when i changed the EOMA68 spec from 24-pin to
18-pin RGB/TTL.

l.

From richard.wilbur at gmail.com Thu Mar 1 21:25:46 2018
From: richard.wilbur at gmail.com (Richard Wilbur)
Date: Thu, 1 Mar 2018 14:25:46 -0700
Subject: [Arm-netbook] Testing: GPIO
In-Reply-To:
References:
Message-ID:

On Thu, Mar 1, 2018 at 1:50 PM, Luke Kenneth Casson Leighton wrote:
> On Thu, Mar 1, 2018 at 8:33 PM, Richard Wilbur wrote:
>> We could obviously create a v1.8 schematic for the microdesktop and
>> connect these EOMA nets to a header, if desired.
>
> yes. damn. i think it's probably that i didn't update the
> micro-desktop schematic when i changed the EOMA68 spec from 24-pin to
> 18-pin RGB/TTL.

Maybe you didn't update the micro-desktop schematic as much as you
intended to when you changed the EOMA68 spec from 24-pin to 18-pin
RGB/TTL, but in v1.7 you did at least connect only the 6 high bits of
each color to the D/A converters.

The new pins are all connected to useful sub-circuits on the
micro-desktop board: SPI (expansion header), SD0 (SD slot), and PWRON
(power switch).

From richard.wilbur at gmail.com Thu Mar 1 21:46:31 2018
From: richard.wilbur at gmail.com (Richard Wilbur)
Date: Thu, 1 Mar 2018 14:46:31 -0700
Subject: [Arm-netbook] Testing: GPIO
In-Reply-To:
References:
Message-ID:

It looks to me like the fastest way to test the GPIO lines connected
on the micro-desktop board to VESA_SCL and VESA_SDA would simply be to
connect a VGA monitor to the micro-desktop and make sure it is
properly detected and a test image looks right on it. This would
leave 6 GPIO lines to test. So we get better test coverage for the
EOMA interface and a shorter GPIO test at the same time!

From laserhawk64 at gmail.com Thu Mar 1 21:54:15 2018
From: laserhawk64 at gmail.com (Christopher Havel)
Date: Thu, 1 Mar 2018 16:54:15 -0500
Subject: [Arm-netbook] Testing: GPIO
In-Reply-To:
References:
Message-ID:

Oh LOL.

VGA is analog, and has six wires for color (red signal, red ground,
ditto each for blue and green).
It's not /exactly/ serial (serial as I understand it is inherently digital, which VGA is *ahem* very much not) but the paradigm sort of fits. RGBTTL is parallel. You have one wire per bit of color. So that's 18 wires. Plus your sync lines... which may or may not match VGA signal standards, I'm not sure. If you actually manage to figure out how to get that hooked up correclty, let me know ;) (Hint, it's doable, but you need additional components. There's a cheap way, and there's an easy way, and they are two *very* different ways...) Much easier suggestion: get a small LCD. *ANY* small LCD. Like a five or seven inch display at the largest. Raw panel, no driver board. Get the datasheet and a compatible connector. (If you source from eBay this is very easy; those are almost all commodity displays with available datasheets.) If it's a SMALL DISPLAY it *will* be RGBTTL, 90%+ of the time (I've seen one exception to this ever and it was in an off-brand portable DVD player). Wire it up. Wire it to the card connector. Add power. If you get a screen that works, you've done it right. From richard.wilbur at gmail.com Thu Mar 1 22:02:04 2018 From: richard.wilbur at gmail.com (Richard Wilbur) Date: Thu, 1 Mar 2018 15:02:04 -0700 Subject: [Arm-netbook] Testing: GPIO In-Reply-To: References: Message-ID: If we did decide to roll a v1.8 micro-desktop board, it would afford us the opportunity to bring two of the presently unconnected GPIO18-21 lines to the expansion header in place of VESA_SCL and VESA_SDA (which are after all available on pins 15 and 12 of the VGA connector). If VESA_SCL and VESA_SDA are more useful on the expansion header then, by all means, forget this suggestion. The other option to accommodate all our GPIO goodness would be to replace J5 (2x10 header) with a 2x11 or 2x12 header allowing us to bring all the GPIO pins to the expansion header (the only difference being whether we would prefer to retain VESA_SCL and VESA_SDA in the header). From laserhawk64 at gmail.com Thu Mar 1 22:05:41 2018 From: laserhawk64 at gmail.com (Christopher Havel) Date: Thu, 1 Mar 2018 17:05:41 -0500 Subject: [Arm-netbook] Testing: GPIO In-Reply-To: References: Message-ID: ...BTW, those SCL and SDA lines on a VGA connector are for a nifty signal coming from your monitor. It's called EDID and it's basically how every modern OS magically knows what to do with the monitor it wants to display on, regardless of the specs or origin of said monitor. If you've ever had a cheap VGA cable where all the pins are present on the connectors but those two lines are disconnected internally, you have experience with what happens when you eff with those wires. Best to leave them alone! On Thu, Mar 1, 2018 at 5:02 PM, Richard Wilbur wrote: > If we did decide to roll a v1.8 micro-desktop board, it would afford > us the opportunity to bring two of the presently unconnected GPIO18-21 > lines to the expansion header in place of VESA_SCL and VESA_SDA (which > are after all available on pins 15 and 12 of the VGA connector). If > VESA_SCL and VESA_SDA are more useful on the expansion header then, by > all means, forget this suggestion. > > The other option to accommodate all our GPIO goodness would be to > replace J5 (2x10 header) with a 2x11 or 2x12 header allowing us to > bring all the GPIO pins to the expansion header (the only difference > being whether we would prefer to retain VESA_SCL and VESA_SDA in the > header). 
> > _______________________________________________ > arm-netbook mailing list arm-netbook at lists.phcomp.co.uk > http://lists.phcomp.co.uk/mailman/listinfo/arm-netbook > Send large attachments to arm-netbook at files.phcomp.co.uk > From richard.wilbur at gmail.com Thu Mar 1 22:42:02 2018 From: richard.wilbur at gmail.com (Richard Wilbur) Date: Thu, 1 Mar 2018 15:42:02 -0700 Subject: [Arm-netbook] Testing: GPIO In-Reply-To: References: Message-ID: On Thu, Mar 1, 2018 at 2:54 PM, Christopher Havel wrote: > Oh LOL. > > VGA is analog, and has six wires for color (red signal, red ground, ditto > each for blue and green). It's not /exactly/ serial (serial as I understand > it is inherently digital, which VGA is *ahem* very much not) but the > paradigm sort of fits. RGBTTL is parallel. You have one wire per bit of > color. So that's 18 wires. Plus your sync lines... which may or may not > match VGA signal standards, I'm not sure. > > If you actually manage to figure out how to get that hooked up correclty, > let me know ;) > > (Hint, it's doable, but you need additional components. There's a cheap > way, and there's an easy way, and they are two *very* different ways...) Looking at the micro-desktop schematic it seems Luke has this issue well in hand. Christopher have you seen the micro-desktop schematic? The VGA conversion is on page 3. Luke, have you tested the D/A circuit on the micro-desktop board? Only thing I would worry about is the hold time on the data lines. If the A20 sets up the data quickly (relative to the pixel time) and holds it until the next setup, we should be in good shape. > Much easier suggestion: get a small LCD. *ANY* small LCD. Like a five or > seven inch display at the largest. Raw panel, no driver board. Get the > datasheet and a compatible connector. (If you source from eBay this is very > easy; those are almost all commodity displays with available datasheets.) > If it's a SMALL DISPLAY it *will* be RGBTTL, 90%+ of the time (I've seen > one exception to this ever and it was in an off-brand portable DVD player). > Wire it up. Wire it to the card connector. Add power. If you get a screen > that works, you've done it right. I think this is why Luke put the display signals on the EOMA68 standard in the RGBTTL format--to simplify the job of connecting to LCD's. (I'm thinking of the laptop, tablet, gaming console, phone, etc.) From richard.wilbur at gmail.com Thu Mar 1 22:50:54 2018 From: richard.wilbur at gmail.com (Richard Wilbur) Date: Thu, 1 Mar 2018 15:50:54 -0700 Subject: [Arm-netbook] Testing: GPIO In-Reply-To: References: Message-ID: On Thu, Mar 1, 2018 at 3:05 PM, Christopher Havel wrote: > ...BTW, those SCL and SDA lines on a VGA connector are for a nifty signal > coming from your monitor. It's called EDID and it's basically how every > modern OS magically knows what to do with the monitor it wants to display > on, regardless of the specs or origin of said monitor. > > If you've ever had a cheap VGA cable where all the pins are present on the > connectors but those two lines are disconnected internally, you have > experience with what happens when you eff with those wires. Best to leave > them alone! Christopher, you are quite correct about the usefullness of untarnished VESA EDID. Turns out I've worked with it before and respect its utility with respect to VGA/DVI/HDMI monitors. We are simply talking about how to test the DS-113 EOMA68-A20 processor cards when they come to the end of the assembly line. 
In that regard, our discussion is mainly about how to create a test jig/fixture that has the most complete coverage of the signals available on the EOMA68 interface and some of the possible use scenarios. We also have an interest in time efficiency as a matter of economy. From laserhawk64 at gmail.com Thu Mar 1 23:12:24 2018 From: laserhawk64 at gmail.com (Christopher Havel) Date: Thu, 1 Mar 2018 18:12:24 -0500 Subject: [Arm-netbook] Testing: GPIO In-Reply-To: References: Message-ID: I /designed/ that circuitry in the micro-desktop. I still have the paper copy somewhere... You can also do it with a dedicated DAC chip, which is the easy-but-expensive way I hinted at. But we aren't testing /that/ part -- the micro-desktop -- are we? If we're testing the /card/, the card does not output anything remotely like VGA, and, therefore, some kind of conversion is necessary in order to attach it to a VGA cable as was being proposed in the email I replied to about that. All you really need for this is a laptop PCMCIA or CardBus card cage, an IDE cable or two, a couple 4051s and toggle switches on a slice of perfboard, a 9v battery with connector and switch, and a cheap USB logic analyzer attached to a laptop. You use the 4051s, switched manually, and powered by the 9v battery, to act as input expanders for the logic analyzer. Each 4015 turns one channel into eight and requires three "on-on" switches -- with one "on" wired to +9v, one to ground, and the common to the chip. You use the IDE cable for the wires ;) If you hook it up so that you have one 4051 mux per logic analyzer channel, that'll give you 128 (!) channels to switch with -- most USB logic analyzers, even the super cheap ones, are 16-channel... Heck, if you wanted to make the circuit "complicated" -- I could draw up something that automatically iterated through the channels for you at the press of a single button, switching at variable speed with a pot, a 555, a resistor and cap, and a couple 4017s and 4051s. You'd only need /one/ channel for that -- so you could even use an o-scope there. Heck, I could do it with that circuit and my old, old Tektronix 422... I'm honestly surprised that this sort of idea hasn't been mentioned yet. From richard.wilbur at gmail.com Fri Mar 2 00:03:55 2018 From: richard.wilbur at gmail.com (Richard Wilbur) Date: Thu, 1 Mar 2018 17:03:55 -0700 Subject: [Arm-netbook] Testing: GPIO In-Reply-To: References: Message-ID: On Thu, Mar 1, 2018 at 4:12 PM, Christopher Havel wrote: > I /designed/ that circuitry in the micro-desktop. I still have the paper > copy somewhere... Very nice! > You can also do it with a dedicated DAC chip, which is the > easy-but-expensive way I hinted at. > > But we aren't testing /that/ part -- the micro-desktop -- are we? If we're > testing the /card/, the card does not output anything remotely like VGA, > and, therefore, some kind of conversion is necessary in order to attach it > to a VGA cable as was being proposed in the email I replied to about that. We aren't planning to test the micro-desktop. The planning is for tests of the card mounted in a micro-desktop case to use as a test fixture. We are planning to use your good work on the micro-desktop case to our advantage and connect the VGA cable to the micro-desktop VGA connector in order to see that the EOMA68 RGBTTL (with EDID) works as advertised! 
> All you really need for this is a laptop PCMCIA or CardBus card cage, an > IDE cable or two, a couple 4051s and toggle switches on a slice of > perfboard, a 9v battery with connector and switch, and a cheap USB logic > analyzer attached to a laptop. You use the 4051s, switched manually, and > powered by the 9v battery, to act as input expanders for the logic > analyzer. Each 4015 turns one channel into eight and requires three "on-on" > switches -- with one "on" wired to +9v, one to ground, and the common to > the chip. You use the IDE cable for the wires ;) If you hook it up so that > you have one 4051 mux per logic analyzer channel, that'll give you 128 (!) > channels to switch with -- most USB logic analyzers, even the super cheap > ones, are 16-channel... > > Heck, if you wanted to make the circuit "complicated" -- I could draw up > something that automatically iterated through the channels for you at the > press of a single button, switching at variable speed with a pot, a 555, a > resistor and cap, and a couple 4017s and 4051s. You'd only need /one/ > channel for that -- so you could even use an o-scope there. Heck, I could > do it with that circuit and my old, old Tektronix 422... > > I'm honestly surprised that this sort of idea hasn't been mentioned yet. That is a cool way to set up a very wide logic analyzer. We were planning to use a little specialized hardware and less elbow grease to make our test fixture: * USB devices connected to the micro-desktop case USB ports, * SD peripheral connected to the micro SD slot, * VGA monitor connected to the VGA connector, * serial terminal connected to the UART pins in expansion header From laserhawk64 at gmail.com Fri Mar 2 00:10:15 2018 From: laserhawk64 at gmail.com (Christopher Havel) Date: Thu, 1 Mar 2018 19:10:15 -0500 Subject: [Arm-netbook] Testing: GPIO In-Reply-To: References: Message-ID: Posting from my phone while making dinner, so forgive that it's a top-post plz. Testing via the micro desktop works as long as you've got a known good micro desktop and your ports haven't won through. I think the 4051 idea might be a little better - I've worn out USB ports before, just from using them - ask me sometime about my mother's old VAIO laptop and how it ultimately died... the only thing in my test rig to wear out is the card cage... But, I'm not in charge, so I'll defer. On Mar 1, 2018 7:04 PM, "Richard Wilbur" wrote: > On Thu, Mar 1, 2018 at 4:12 PM, Christopher Havel > wrote: > > I /designed/ that circuitry in the micro-desktop. I still have the paper > > copy somewhere... > > Very nice! > > > You can also do it with a dedicated DAC chip, which is the > > easy-but-expensive way I hinted at. > > > > But we aren't testing /that/ part -- the micro-desktop -- are we? If > we're > > testing the /card/, the card does not output anything remotely like VGA, > > and, therefore, some kind of conversion is necessary in order to attach > it > > to a VGA cable as was being proposed in the email I replied to about > that. > > We aren't planning to test the micro-desktop. The planning is for > tests of the card mounted in a micro-desktop case to use as a test > fixture. We are planning to use your good work on the micro-desktop > case to our advantage and connect the VGA cable to the micro-desktop > VGA connector in order to see that the EOMA68 RGBTTL (with EDID) works > as advertised! 
> > > All you really need for this is a laptop PCMCIA or CardBus card cage, an > > IDE cable or two, a couple 4051s and toggle switches on a slice of > > perfboard, a 9v battery with connector and switch, and a cheap USB logic > > analyzer attached to a laptop. You use the 4051s, switched manually, and > > powered by the 9v battery, to act as input expanders for the logic > > analyzer. Each 4015 turns one channel into eight and requires three > "on-on" > > switches -- with one "on" wired to +9v, one to ground, and the common to > > the chip. You use the IDE cable for the wires ;) If you hook it up so > that > > you have one 4051 mux per logic analyzer channel, that'll give you 128 > (!) > > channels to switch with -- most USB logic analyzers, even the super cheap > > ones, are 16-channel... > > > > Heck, if you wanted to make the circuit "complicated" -- I could draw up > > something that automatically iterated through the channels for you at the > > press of a single button, switching at variable speed with a pot, a 555, > a > > resistor and cap, and a couple 4017s and 4051s. You'd only need /one/ > > channel for that -- so you could even use an o-scope there. Heck, I could > > do it with that circuit and my old, old Tektronix 422... > > > > I'm honestly surprised that this sort of idea hasn't been mentioned yet. > > That is a cool way to set up a very wide logic analyzer. We were > planning to use a little specialized hardware and less elbow grease to > make our test fixture: > * USB devices connected to the micro-desktop case USB ports, > * SD peripheral connected to the micro SD slot, > * VGA monitor connected to the VGA connector, > * serial terminal connected to the UART pins in expansion header > > _______________________________________________ > arm-netbook mailing list arm-netbook at lists.phcomp.co.uk > http://lists.phcomp.co.uk/mailman/listinfo/arm-netbook > Send large attachments to arm-netbook at files.phcomp.co.uk From richard.wilbur at gmail.com Fri Mar 2 00:19:44 2018 From: richard.wilbur at gmail.com (Richard Wilbur) Date: Thu, 1 Mar 2018 17:19:44 -0700 Subject: [Arm-netbook] Testing: GPIO In-Reply-To: References: Message-ID: On Thu, Mar 1, 2018 at 5:10 PM, Christopher Havel wrote: > Posting from my phone while making dinner, so forgive that it's a top-post > plz. > > Testing via the micro desktop works as long as you've got a known good > micro desktop and your ports haven't won through. I think the 4051 idea > might be a little better - I've worn out USB ports before, just from using > them - ask me sometime about my mother's old VAIO laptop and how it > ultimately died... the only thing in my test rig to wear out is the card > cage... > > But, I'm not in charge, so I'll defer. You make very good points about connector fatigue. I was planning to leave everything connected and only install/remove the EOMA68 card from the micro-desktop case. That works as long as we don't need to test hot-plugging anything. To my knowledge we figured the hot-plug capability would likely be conferred by the applicable standard and thus were designing a basic functionality test. (Incidentally I have a dead VAIO laptop in which the power jack center pin broke. I really need to get that ordered and replaced.;>) From laserhawk64 at gmail.com Fri Mar 2 00:23:31 2018 From: laserhawk64 at gmail.com (Christopher Havel) Date: Thu, 1 Mar 2018 19:23:31 -0500 Subject: [Arm-netbook] Testing: GPIO In-Reply-To: References: Message-ID: Be careful... 
it was the replacing of the two ports on that old VGN-S360 that killed it... VAIOs are well known in repair circles for dying of heatstroke from even the slightest rework (and I was duly warned)... if it's a modular jack (on a cable, so no soldering), you'll be fine. If you need an iron... buy a board, not a port. Trust me. On Mar 1, 2018 7:20 PM, "Richard Wilbur" wrote: > On Thu, Mar 1, 2018 at 5:10 PM, Christopher Havel > wrote: > > Posting from my phone while making dinner, so forgive that it's a > top-post > > plz. > > > > Testing via the micro desktop works as long as you've got a known good > > micro desktop and your ports haven't won through. I think the 4051 idea > > might be a little better - I've worn out USB ports before, just from > using > > them - ask me sometime about my mother's old VAIO laptop and how it > > ultimately died... the only thing in my test rig to wear out is the card > > cage... > > > > But, I'm not in charge, so I'll defer. > > You make very good points about connector fatigue. I was planning to > leave everything connected and only install/remove the EOMA68 card > from the micro-desktop case. That works as long as we don't need to > test hot-plugging anything. To my knowledge we figured the hot-plug > capability would likely be conferred by the applicable standard and > thus were designing a basic functionality test. > > (Incidentally I have a dead VAIO laptop in which the power jack center > pin broke. I really need to get that ordered and replaced.;>) > > _______________________________________________ > arm-netbook mailing list arm-netbook at lists.phcomp.co.uk > http://lists.phcomp.co.uk/mailman/listinfo/arm-netbook > Send large attachments to arm-netbook at files.phcomp.co.uk From lkcl at lkcl.net Fri Mar 2 01:57:20 2018 From: lkcl at lkcl.net (Luke Kenneth Casson Leighton) Date: Fri, 2 Mar 2018 01:57:20 +0000 Subject: [Arm-netbook] Testing: GPIO In-Reply-To: References: Message-ID: --- crowd-funded eco-conscious hardware: https://www.crowdsupply.com/eoma68 On Fri, Mar 2, 2018 at 12:10 AM, Christopher Havel wrote: > Posting from my phone while making dinner, so forgive that it's a top-post > plz. done but not for using the phone instead of enjoying dinner! :) > Testing via the micro desktop works as long as you've got a known good > micro desktop and your ports haven't won through. two separate tests strictly speaking needed, one is of Card(s), the other is of Micro-Desktop(s), one way to do that: known-good MD tests cards. known-good Card tests MDs. l. From lkcl at lkcl.net Fri Mar 2 03:30:14 2018 From: lkcl at lkcl.net (Luke Kenneth Casson Leighton) Date: Fri, 2 Mar 2018 03:30:14 +0000 Subject: [Arm-netbook] Testing: GPIO In-Reply-To: References: Message-ID: --- crowd-funded eco-conscious hardware: https://www.crowdsupply.com/eoma68 On Thu, Mar 1, 2018 at 9:46 PM, Richard Wilbur wrote: > It looks to me like the fastest way to test the GPIO lines connected > on the micro-desktop board to VESA_SCL and VESA_SDA would simply be to > connect a VGA monitor to the micro-desktop and make sure it is > properly detected and a test image looks right on it. yep, pretty much... with one slight fly in the ointment: the SCL and SDA lines are straight GPIO and will need a bit-banging I2C linux kernel driver. once that's configured, doing i2cdetect _should_ be enough to test the circuit, although scanning the data and running read-edid on it would be awesome and amazing: it would mean being able to *really* do proper VESA detection. l. 
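The bit-banged bus described above maps onto the generic Linux "i2c-gpio"
devicetree binding (the one referenced later in this thread). Below is a
minimal sketch only: the PH0/PH1 pin choice, the node name and the delay
value are assumptions for illustration -- the real pins wired to VESA_SCL
and VESA_SDA have to be taken from the DS-113 and micro-desktop schematics.

    /* sketch, assuming (hypothetically) SDA on PH0 and SCL on PH1;
       &pio is the A20 pin controller, bank 7 = port H */
    i2c_ddc: i2c-ddc {
        compatible = "i2c-gpio";
        gpios = <&pio 7 0 0>,      /* SDA: PH0, active high -- placeholder */
                <&pio 7 1 0>;      /* SCL: PH1, active high -- placeholder */
        i2c-gpio,delay-us = <5>;   /* roughly 100 kHz */
        #address-cells = <1>;
        #size-cells = <0>;
    };

Once such a bus is registered, i2cdetect should show the monitor's EDID
ROM answering at address 0x50, and read-edid (or a plain i2cdump of 0x50)
gives the data needed for the proper VESA detection mentioned above.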
From lkcl at lkcl.net Fri Mar 2 03:31:31 2018 From: lkcl at lkcl.net (Luke Kenneth Casson Leighton) Date: Fri, 2 Mar 2018 03:31:31 +0000 Subject: [Arm-netbook] Testing: GPIO In-Reply-To: References: Message-ID: --- crowd-funded eco-conscious hardware: https://www.crowdsupply.com/eoma68 On Thu, Mar 1, 2018 at 10:02 PM, Richard Wilbur wrote: > If we did decide to roll a v1.8 micro-desktop board, it would afford > us the opportunity to bring two of the presently unconnected GPIO18-21 > lines to the expansion header in place of VESA_SCL and VESA_SDA (which > are after all available on pins 15 and 12 of the VGA connector). If > VESA_SCL and VESA_SDA are more useful on the expansion header then, by > all means, forget this suggestion. the reason i brought those out is just in case someone decided they wanted to use them as plain GPIO. > The other option to accommodate all our GPIO goodness would be to > replace J5 (2x10 header) with a 2x11 or 2x12 header yep. > allowing us to > bring all the GPIO pins to the expansion header (the only difference > being whether we would prefer to retain VESA_SCL and VESA_SDA in the > header). yes. l. From lkcl at lkcl.net Fri Mar 2 03:40:01 2018 From: lkcl at lkcl.net (Luke Kenneth Casson Leighton) Date: Fri, 2 Mar 2018 03:40:01 +0000 Subject: [Arm-netbook] Testing: GPIO In-Reply-To: References: Message-ID: On Thu, Mar 1, 2018 at 10:42 PM, Richard Wilbur wrote: > Luke, have you tested the D/A circuit on the micro-desktop board? yep it works great up to 1024x768. i haven't yet been able to get it to sync at anything greater than that, because you have to manually convert the signals into A20 timings... and of course if you can't read the EDID data you don't know *exactly* what the settings are in the first place for any given monitor. 1024x768, being a common VESA standard, has worked consistently on every monitor i've tried. > Only thing I would worry about is the hold time on the data lines. If > the A20 sets up the data quickly (relative to the pixel time) and > holds it until the next setup, we should be in good shape. sigh yeah i thought about that... using buffer ICs with a "hold", and linking up the clock line to it.... never got round to it. i'd prefer to just skip the entire circuit and use a TFP410 (or maybe it's a TFP401a), or a Chrontel RGB/TTL to VGA converter IC. CH7036 i think it is. >> Much easier suggestion: get a small LCD. *ANY* small LCD. Like a five or >> seven inch display at the largest. Raw panel, no driver board. Get the >> datasheet and a compatible connector. (If you source from eBay this is very >> easy; those are almost all commodity displays with available datasheets.) >> If it's a SMALL DISPLAY it *will* be RGBTTL, 90%+ of the time (I've seen >> one exception to this ever and it was in an off-brand portable DVD player). >> Wire it up. Wire it to the card connector. Add power. If you get a screen >> that works, you've done it right. > > I think this is why Luke put the display signals on the EOMA68 > standard in the RGBTTL format--to simplify the job of connecting to > LCD's. (I'm thinking of the laptop, tablet, gaming console, phone, > etc.) yyup. exactly. remember, you can't do more than one interface on any given set of pins, so i had to pick one (RGB/TTL or LVDS or MIPI or eDP), that then means you have to have a conversion IC in-place on the Card if a particular SoC doesn't *have* that interface... and many of the lower-cost SoCs don't because they're not part of the MIPI or DisplayPort cartel(s).... ... 
and even if you had LVDS, the cost on the other side (Housing side) of having an LVDS-to-RGB/TTL converter is so high relative to the cost of the LCD itself that companies would rebel and not bother with the standard at all. so, bizarrely, RGB/TTL, by being both "free" and also unencumbered by patents *and* by being lowest-common-denominator, wins out on all fronts. except for the fact that you need a 125mhz clock-rate for 1920x1080 at 60fps, which is a bit... high. but hey. l. From lkcl at lkcl.net Fri Mar 2 10:07:06 2018 From: lkcl at lkcl.net (Luke Kenneth Casson Leighton) Date: Fri, 2 Mar 2018 10:07:06 +0000 Subject: [Arm-netbook] Testing: GPIO In-Reply-To: References: Message-ID: btw i'm tempted to suggest treating the SPI pins as straight GPIO. if they can do 0 and 1 (input and output) then they're not short-circuited and/or disconnected and that's... good enough. l. From richard.wilbur at gmail.com Fri Mar 2 21:20:17 2018 From: richard.wilbur at gmail.com (Richard Wilbur) Date: Fri, 2 Mar 2018 14:20:17 -0700 Subject: [Arm-netbook] Testing: GPIO In-Reply-To: References: Message-ID: <526092BC-055F-42F5-B57B-EA84213128DA@gmail.com> On Mar 1, 2018, at 17:23, Christopher Havel wrote: > > Be careful... it was the replacing of the two ports on that old VGN-S360 > that killed it... VAIOs are well known in repair circles for dying of > heatstroke from even the slightest rework (and I was duly warned)... if > it's a modular jack (on a cable, so no soldering), you'll be fine. If you > need an iron... buy a board, not a port. Trust me. I'll have to open the case and take a look to see what the particulars of the situation are. Thank you for sounding the voice of experience. I consider myself forewarned. From richard.wilbur at gmail.com Fri Mar 2 22:26:32 2018 From: richard.wilbur at gmail.com (Richard Wilbur) Date: Fri, 2 Mar 2018 15:26:32 -0700 Subject: [Arm-netbook] Testing: GPIO In-Reply-To: References: Message-ID: <62945838-BAC9-4DF1-8EAA-C974292C82C9@gmail.com> On Mar 1, 2018, at 20:30, Luke Kenneth Casson Leighton wrote: Sent from my iPhone > On Thu, Mar 1, 2018 at 9:46 PM, Richard Wilbur wrote: >> It looks to me like the fastest way to test the GPIO lines connected >> on the micro-desktop board to VESA_SCL and VESA_SDA would simply be to >> connect a VGA monitor to the micro-desktop and make sure it is >> properly detected and a test image looks right on it. > > yep, pretty much... with one slight fly in the ointment: the SCL and > SDA lines are straight GPIO and will need a bit-banging I2C linux > kernel driver. once that's configured, doing i2cdetect _should_ be > enough to test the circuit, although scanning the data and running > read-edid on it would be awesome and amazing: it would mean being able > to *really* do proper VESA detection. Is someone already working on that? Sounds like we need the device tree for the micro-desktop to be populated. If we did it for micro-desktop v1.7 it would be something to build off for micro-desktop v1.8 and also a good place to begin for the laptop. From what I'm hearing, once the device tree is ready we could work on "automagically" configuring the VESA DDC driver to bit-bang the correct GPIO pins. Does the bit-banging VESA DDC driver exist already? (I wrote a bit-banging I2C driver in VxWorks at a previous position so the topic is not foreign.) If none of this is underway I'll continue mapping things out so we can create the device tree for the micro-desktop. 
If I remember correctly we also should create a device tree for the DS-113 v2.7.4 and v2.7.5? I'd be happy to work on that if you think that is the highest priority right now. It sounds like it will help both testing and deployment. From j.neuschaefer at gmx.net Fri Mar 2 22:43:42 2018 From: j.neuschaefer at gmx.net (Jonathan =?utf-8?Q?Neusch=C3=A4fer?=) Date: Fri, 2 Mar 2018 23:43:42 +0100 Subject: [Arm-netbook] Testing: GPIO In-Reply-To: <62945838-BAC9-4DF1-8EAA-C974292C82C9@gmail.com> References: <62945838-BAC9-4DF1-8EAA-C974292C82C9@gmail.com> Message-ID: <20180302224342.gfr2i76rxyu6btaw@latitude> Hello, On Fri, Mar 02, 2018 at 03:26:32PM -0700, Richard Wilbur wrote: > On Mar 1, 2018, at 20:30, Luke Kenneth Casson Leighton wrote: > > > Sent from my iPhone > > On Thu, Mar 1, 2018 at 9:46 PM, Richard Wilbur wrote: > >> It looks to me like the fastest way to test the GPIO lines connected > >> on the micro-desktop board to VESA_SCL and VESA_SDA would simply be to > >> connect a VGA monitor to the micro-desktop and make sure it is > >> properly detected and a test image looks right on it. > > > > yep, pretty much... with one slight fly in the ointment: the SCL and > > SDA lines are straight GPIO and will need a bit-banging I2C linux > > kernel driver. once that's configured, doing i2cdetect _should_ be > > enough to test the circuit, although scanning the data and running > > read-edid on it would be awesome and amazing: it would mean being able > > to *really* do proper VESA detection. > > Is someone already working on that? Sounds like we need the device > tree for the micro-desktop to be populated. If we did it for > micro-desktop v1.7 it would be something to build off for > micro-desktop v1.8 and also a good place to begin for the laptop. I'm not actively working on any of this, but I'm interested in the devicetree side of things. > From what I'm hearing, once the device tree is ready we could work on > "automagically" configuring the VESA DDC driver to bit-bang the > correct GPIO pins. Does the bit-banging VESA DDC driver exist already? > (I wrote a bit-banging I2C driver in VxWorks at a previous position so > the topic is not foreign.) Mainline Linux has a driver[1] for I2C-over-GPIO. It's been there since 2.6.22, albeit initially without devicetree support, which came in 3.4. There's also a generic devicetree binding[2] for I2C-over-GPIO in Linux's Documentation/devicetree/bindings directory. > If none of this is underway I'll continue mapping things out so we can > create the device tree for the micro-desktop. If I remember correctly > we also should create a device tree for the DS-113 v2.7.4 and v2.7.5? What is DS-113? > I'd be happy to work on that if you think that is the highest priority > right now. It sounds like it will help both testing and deployment. 
Thanks, Jonathan Neuschäfer [1]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/i2c/busses/i2c-gpio.c [2]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/devicetree/bindings/i2c/i2c-gpio.txt From richard.wilbur at gmail.com Fri Mar 2 22:52:46 2018 From: richard.wilbur at gmail.com (Richard Wilbur) Date: Fri, 2 Mar 2018 15:52:46 -0700 Subject: [Arm-netbook] Testing: GPIO In-Reply-To: References: Message-ID: On Mar 1, 2018, at 20:31, Luke Kenneth Casson Leighton wrote: > On Thu, Mar 1, 2018 at 10:02 PM, Richard Wilbur > wrote: >> If we did decide to roll a v1.8 micro-desktop board, it would afford >> us the opportunity to bring two of the presently unconnected GPIO18-21 >> lines to the expansion header in place of VESA_SCL and VESA_SDA (which >> are after all available on pins 15 and 12 of the VGA connector). If >> VESA_SCL and VESA_SDA are more useful on the expansion header then, by >> all means, forget this suggestion. > > the reason i brought those out is just in case someone decided they > wanted to use them as plain GPIO. Having the most available GPIO pins sounds like a great goal for the micro-desktop. But at the expense of a fully operational VGA interface when we have four more GPIO pins that we could choose from--seems like maybe we could take a better tradeoff? >> The other option to accommodate all our GPIO goodness would be to >> replace J5 (2x10 header) with a 2x11 or 2x12 header > > yep. I would vote for a 2x11 header with the four other GPIO's connected and the VESA DDC lines not connected to the expansion header. That only requires the expansion to extend in length by 10%. From richard.wilbur at gmail.com Fri Mar 2 23:24:12 2018 From: richard.wilbur at gmail.com (Richard Wilbur) Date: Fri, 2 Mar 2018 16:24:12 -0700 Subject: [Arm-netbook] Testing: GPIO In-Reply-To: References: Message-ID: <8C12CC29-B5F0-4273-920C-32C545E94301@gmail.com> On Mar 1, 2018, at 20:40, Luke Kenneth Casson Leighton wrote: > > On Thu, Mar 1, 2018 at 10:42 PM, Richard Wilbur > wrote: > >> Luke, have you tested the D/A circuit on the micro-desktop board? > > yep it works great up to 1024x768. i haven't yet been able to get it > to sync at anything greater than that, because you have to manually > convert the signals into A20 timings... and of course if you can't > read the EDID data you don't know *exactly* what the settings are in > the first place for any given monitor. > > 1024x768, being a common VESA standard, has worked consistently on > every monitor i've tried. So if we could read the EDID the driver would figure out the A20 timings? Does the A20 already have a graphics driver capable of that? (In which case the bit-banging VESA DDC driver becomes very important.) How much of this infrastructure already exists? I'm bringing my tools, where do we start building? I have a collection of VGA monitors with different aspect ratios and sizes (3 CRT and 3 LCD). I'd be happy to test resolutions above and below 1024x768. >> Only thing I would worry about is the hold time on the data lines. If >> the A20 sets up the data quickly (relative to the pixel time) and >> holds it until the next setup, we should be in good shape. > > sigh yeah i thought about that... using buffer ICs with a "hold", and > linking up the clock line to it.... never got round to it. i'd prefer > to just skip the entire circuit and use a TFP410 (or maybe it's a > TFP401a), or a Chrontel RGB/TTL to VGA converter IC. CH7036 i think > it is. 
Are you thinking of octal D flip-flops? I'll have to look up those datasheets. What do those chips offer over the flip-flops? How do the prices compare? […] > yyup. exactly. remember, you can't do more than one interface on > any given set of pins, so i had to pick one (RGB/TTL or LVDS or MIPI > or eDP), that then means you have to have a conversion IC in-place on > the Card if a particular SoC doesn't *have* that interface... and many > of the lower-cost SoCs don't because they're not part of the MIPI or > DisplayPort cartel(s).... Yes, that's the awful thing about so many industry standards: you can't get the text without signing documents and paying a handsome price, you can't use them without paying royalties to the patent owners. > ... and even if you had LVDS, the cost on the other side (Housing > side) of having an LVDS-to-RGB/TTL converter is so high relative to > the cost of the LCD itself that companies would rebel and not bother > with the standard at all. > > so, bizarrely, RGB/TTL, by being both "free" and also unencumbered by > patents *and* by being lowest-common-denominator, wins out on all > fronts. except for the fact that you need a 125mhz clock-rate for > 1920x1080 at 60fps, which is a bit... high. but hey. Will the A20 clock the RGBTTL interface that high? From richard.wilbur at gmail.com Fri Mar 2 23:25:50 2018 From: richard.wilbur at gmail.com (Richard Wilbur) Date: Fri, 2 Mar 2018 16:25:50 -0700 Subject: [Arm-netbook] Testing: GPIO In-Reply-To: References: Message-ID: <82416028-CE11-42F2-B635-78C65A7E564B@gmail.com> On Mar 2, 2018, at 03:07, Luke Kenneth Casson Leighton wrote: > > btw i'm tempted to suggest treating the SPI pins as straight GPIO. if > they can do 0 and 1 (input and output) then they're not > short-circuited and/or disconnected and that's... good enough. So test them as GPIO for now? From lkcl at lkcl.net Sat Mar 3 01:31:08 2018 From: lkcl at lkcl.net (Luke Kenneth Casson Leighton) Date: Sat, 3 Mar 2018 01:31:08 +0000 Subject: [Arm-netbook] Testing: GPIO In-Reply-To: <62945838-BAC9-4DF1-8EAA-C974292C82C9@gmail.com> References: <62945838-BAC9-4DF1-8EAA-C974292C82C9@gmail.com> Message-ID: --- crowd-funded eco-conscious hardware: https://www.crowdsupply.com/eoma68 On Fri, Mar 2, 2018 at 10:26 PM, Richard Wilbur wrote: > On Mar 1, 2018, at 20:30, Luke Kenneth Casson Leighton wrote: > > > Sent from my iPhone >> On Thu, Mar 1, 2018 at 9:46 PM, Richard Wilbur wrote: >>> It looks to me like the fastest way to test the GPIO lines connected >>> on the micro-desktop board to VESA_SCL and VESA_SDA would simply be to >>> connect a VGA monitor to the micro-desktop and make sure it is >>> properly detected and a test image looks right on it. >> >> yep, pretty much... with one slight fly in the ointment: the SCL and >> SDA lines are straight GPIO and will need a bit-banging I2C linux >> kernel driver. once that's configured, doing i2cdetect _should_ be >> enough to test the circuit, although scanning the data and running >> read-edid on it would be awesome and amazing: it would mean being able >> to *really* do proper VESA detection. > > Is someone already working on that? no. > Sounds like we need the device tree for the micro-desktop to be > populated. patches to linux mainline are needed to include the ability to have devicetree fragments before that can happen. however... the A20 linux sunxi mainline source is *not 100% functional* so it's kinda moot. 
> If we did it for micro-desktop v1.7 it would be something to build off for micro-desktop v1.8 and also a good place to begin for the laptop. > > From what I'm hearing, once the device tree is ready we could work > on "automagically" configuring the VESA DDC driver to bit-bang the > correct GPIO pins. Does the bit-banging VESA DDC driver exist already? > (I wrote a bit-banging I2C driver in VxWorks at a previous position so > the topic is not foreign.) i found a random driver somewhere for I2C - don't know about the linking to userspace / DDC. > If none of this is underway I'll continue mapping things out so we can > create the device tree for the micro-desktop. that would be useful... *if* the A20 linux sunxi mainline source supports 100% of the functionality of an A20. > If I remember correctly we also should create a device tree for the DS-113 v2.7.4 and v2.7.5? again same caveat.... *thinks*.... yeah. > I'd be happy to work on that if you think that is the highest > priority right now. It sounds like it will help both testing and > deployment. let me think... two conditional things need to happen: 1. the A20 sunxi mainline code needs to have 100% functionality support for ALL hardware 2. the "devicetree fragment" patch also needs to be confirmed as mainline. then it's useful. l. From lkcl at lkcl.net Sat Mar 3 01:36:08 2018 From: lkcl at lkcl.net (Luke Kenneth Casson Leighton) Date: Sat, 3 Mar 2018 01:36:08 +0000 Subject: [Arm-netbook] Testing: GPIO In-Reply-To: <8C12CC29-B5F0-4273-920C-32C545E94301@gmail.com> References: <8C12CC29-B5F0-4273-920C-32C545E94301@gmail.com> Message-ID: --- crowd-funded eco-conscious hardware: https://www.crowdsupply.com/eoma68 On Fri, Mar 2, 2018 at 11:24 PM, Richard Wilbur wrote: > On Mar 1, 2018, at 20:40, Luke Kenneth Casson Leighton wrote: >> >> On Thu, Mar 1, 2018 at 10:42 PM, Richard Wilbur >> wrote: >> >>> Luke, have you tested the D/A circuit on the micro-desktop board? >> >> yep it works great up to 1024x768. i haven't yet been able to get it >> to sync at anything greater than that, because you have to manually >> convert the signals into A20 timings... and of course if you can't >> read the EDID data you don't know *exactly* what the settings are in >> the first place for any given monitor. >> >> 1024x768, being a common VESA standard, has worked consistently on >> every monitor i've tried. > > So if we could read the EDID the driver would figure out the A20 timings? no. some code is needed to *translate* EDID into A20 timings. > Does the A20 already have a graphics driver capable of that? no. the general assumption is that RGB/TTL is used for *fixed* size LCDs. therefore why on earth, the logic goes, would you put a dynamic EDID bridge in place? > (In which case the bit-banging VESA DDC driver becomes very > important.) How much of this infrastructure already exists? bits and pieces. mainly it's integration. > I'm bringing my tools, where do we start building? :) > I have a collection of VGA monitors with different aspect ratios > and sizes (3 CRT and 3 LCD). I'd be happy to test resolutions above > and below 1024x768. yay. >>> Only thing I would worry about is the hold time on the data lines. If >>> the A20 sets up the data quickly (relative to the pixel time) and >>> holds it until the next setup, we should be in good shape. >> >> sigh yeah i thought about that... using buffer ICs with a "hold", and >> linking up the clock line to it.... never got round to it. 
i'd prefer >> to just skip the entire circuit and use a TFP410 (or maybe it's a >> TFP401a), or a Chrontel RGB/TTL to VGA converter IC. CH7036 i think >> it is. > > Are you thinking of octal D flip-flops? I'll have to look up those datasheets. What do those chips offer over the flip-flops? How do the prices compare? no idea haven't investigtated. > […] >> yyup. exactly. remember, you can't do more than one interface on >> any given set of pins, so i had to pick one (RGB/TTL or LVDS or MIPI >> or eDP), that then means you have to have a conversion IC in-place on >> the Card if a particular SoC doesn't *have* that interface... and many >> of the lower-cost SoCs don't because they're not part of the MIPI or >> DisplayPort cartel(s).... > > Yes, that's the awful thing about so many industry standards: > you can't get the text without signing documents and paying > a handsome price, you can't use them without paying royalties > to the patent owners. ... which is why i'm not putting CAN bus into any of the libre-riscv SoCs... >> ... and even if you had LVDS, the cost on the other side (Housing >> side) of having an LVDS-to-RGB/TTL converter is so high relative to >> the cost of the LCD itself that companies would rebel and not bother >> with the standard at all. >> >> so, bizarrely, RGB/TTL, by being both "free" and also unencumbered by >> patents *and* by being lowest-common-denominator, wins out on all >> fronts. except for the fact that you need a 125mhz clock-rate for >> 1920x1080 at 60fps, which is a bit... high. but hey. > > Will the A20 clock the RGBTTL interface that high? yes. despite the fuckwits in the marketing department in *competing divisions* inside allwinner trying to tell the world otherwise. they've dumbed down the public marketted specification of the A20 to 1024x768 because its capabilities for the price were making other offerings look *really* bad. l. From lkcl at lkcl.net Sat Mar 3 01:37:15 2018 From: lkcl at lkcl.net (Luke Kenneth Casson Leighton) Date: Sat, 3 Mar 2018 01:37:15 +0000 Subject: [Arm-netbook] Testing: GPIO In-Reply-To: <20180302224342.gfr2i76rxyu6btaw@latitude> References: <62945838-BAC9-4DF1-8EAA-C974292C82C9@gmail.com> <20180302224342.gfr2i76rxyu6btaw@latitude> Message-ID: On Fri, Mar 2, 2018 at 10:43 PM, Jonathan Neuschäfer wrote: > Hello, > > I'm not actively working on any of this, but I'm interested in the > devicetree side of things. excellent, can you look up the status of A20 and the devicetree fragments? > What is DS-113? board design codename. From j.neuschaefer at gmx.net Sat Mar 3 02:44:48 2018 From: j.neuschaefer at gmx.net (Jonathan =?utf-8?Q?Neusch=C3=A4fer?=) Date: Sat, 3 Mar 2018 03:44:48 +0100 Subject: [Arm-netbook] Testing: GPIO In-Reply-To: References: <62945838-BAC9-4DF1-8EAA-C974292C82C9@gmail.com> <20180302224342.gfr2i76rxyu6btaw@latitude> Message-ID: <20180303024448.xoverpx4ipl7tbo5@latitude> On Sat, Mar 03, 2018 at 01:37:15AM +0000, Luke Kenneth Casson Leighton wrote: > On Fri, Mar 2, 2018 at 10:43 PM, Jonathan Neuschäfer > wrote: > > Hello, > > > > > I'm not actively working on any of this, but I'm interested in the > > devicetree side of things. > > excellent, can you look up the status of A20 and the devicetree fragments? There has been some work on HDMI on the A10/A20 in October (merged in 4.15): https://www.spinics.net/lists/devicetree/msg198941.html Reportedly, HDMI works now. I'm not sure what else was/is missing. About DT fragments: I'm not sure what you mean exactly. 
Mainline support devicetree overlays which should do (half of) the job for EOMA68, though. The tricky part would be figuring out how the same overlay can be used on base devicetrees for different SoCs, as the exposed busses will have different names. This may be solved by a future iteration of this patchset: https://www.spinics.net/lists/kernel/msg2710913.html The other side of the DT job is the dynamic loading of a devicetree overlay based on the EEPROM of the connected housing. Mainline Linux doesn't have something like capemgr[1], AFAIK; But I think this could also be handled in userspace. And then there's the question of how the kernel/userspace is supposed to know on which i2c bus it finds the EOMA68 EEPROM. Jonathan [1]: https://elinux.org/Capemgr From lkcl at lkcl.net Sat Mar 3 03:46:33 2018 From: lkcl at lkcl.net (Luke Kenneth Casson Leighton) Date: Sat, 3 Mar 2018 03:46:33 +0000 Subject: [Arm-netbook] Testing: GPIO In-Reply-To: <20180303024448.xoverpx4ipl7tbo5@latitude> References: <62945838-BAC9-4DF1-8EAA-C974292C82C9@gmail.com> <20180302224342.gfr2i76rxyu6btaw@latitude> <20180303024448.xoverpx4ipl7tbo5@latitude> Message-ID: On Sat, Mar 3, 2018 at 2:44 AM, Jonathan Neuschäfer wrote: >> excellent, can you look up the status of A20 and the devicetree fragments? > > There has been some work on HDMI on the A10/A20 in October (merged in > 4.15): https://www.spinics.net/lists/devicetree/msg198941.html > Reportedly, HDMI works now. I'm not sure what else was/is missing. the *full* hardware set is needed. except for NAND, which has been removed from 2.7.5. > About DT fragments: I'm not sure what you mean exactly. Mainline support > devicetree overlays which should do (half of) the job for EOMA68, though. ah, yes, that's the official name. overlays. question: do you know if they've added the patches to *REMOVE* overlays yet? Cards could potentially be dynamically removed... or at least put into sleep / suspend only to wake up with a totally different Housing. > The tricky part would be figuring out how the same overlay can be used > on base devicetrees for different SoCs, as the exposed busses will have > different names. ... i'm not sure what you're referring to, here. do you have an example? > This may be solved by a future iteration of this > patchset: https://www.spinics.net/lists/kernel/msg2710913.html there's nothing mentioned about what standard he's referring to. is there a link, or can you do a quick summary? > The other side of the DT job is the dynamic loading of a devicetree > overlay based on the EEPROM of the connected housing. yes that's correct. > Mainline Linux > doesn't have something like capemgr[1], AFAIK; But I think this could > also be handled in userspace. it has to be handled in u-boot (at least partially). > And then there's the question of how the kernel/userspace is supposed to > know on which i2c bus it finds the EOMA68 EEPROM. good point. the bus utilised will be in the Card's devicetree file: it will have to be named as such. l. 
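One way to "name it as such" is to give the EOMA68 I2C bus a fixed alias in
each Card's base tree and hang the Housing EEPROM off it. The sketch below
is illustrative only: which A20 controller actually carries the EOMA68 I2C
pins, the alias name and the EEPROM part number are all assumptions to be
checked against the DS-113 design; only the 0x51 address comes from the
bus-scan discussion earlier in the thread.

    / {
        aliases {
            /* hypothetical alias so u-boot/userspace can find the bus
               without hard-coding an i2c-N number that differs per Card */
            eoma68-i2c = &i2c1;          /* i2c1 is a placeholder */
        };
    };

    &i2c1 {
        status = "okay";

        housing_eeprom: eeprom@51 {
            compatible = "atmel,24c02";  /* placeholder part */
            reg = <0x51>;                /* EOMA68 Housing ID EEPROM */
        };
    };

A boot script or test harness can then resolve the alias (it appears under
/aliases in the live tree) instead of guessing bus numbers.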
From richard.wilbur at gmail.com Sat Mar 3 07:11:02 2018 From: richard.wilbur at gmail.com (Richard Wilbur) Date: Sat, 3 Mar 2018 00:11:02 -0700 Subject: [Arm-netbook] Testing: GPIO In-Reply-To: <20180302224342.gfr2i76rxyu6btaw@latitude> References: <62945838-BAC9-4DF1-8EAA-C974292C82C9@gmail.com> <20180302224342.gfr2i76rxyu6btaw@latitude> Message-ID: On Mar 2, 2018, at 15:43, Jonathan Neuschäfer wrote: >> On Fri, Mar 02, 2018 at 03:26:32PM -0700, Richard Wilbur wrote: […] > I'm not actively working on any of this, but I'm interested in the > devicetree side of things. To what does your interest in devicetree extend? Are you interested in helping create the devicetree mappings for EOMA68 hardware, or following developments, etc. How would you like to be involved? Thank you for imparting your knowledge below. >> From what I'm hearing, once the device tree is ready we could work on >> "automagically" configuring the VESA DDC driver to bit-bang the >> correct GPIO pins. Does the bit-banging VESA DDC driver exist already? >> (I wrote a bit-banging I2C driver in VxWorks at a previous position so >> the topic is not foreign.) > > Mainline Linux has a driver[1] for I2C-over-GPIO. It's been there since > 2.6.22, albeit initially without devicetree support, which came in 3.4. > > There's also a generic devicetree binding[2] for I2C-over-GPIO in > Linux's Documentation/devicetree/bindings directory. That's wonderful news! So with the devicetree for the micro-desktop we should be able to setup the I2C driver. Next question: has anyone created a VESA DDC driver that will get the EDID from any connected monitor given when given access to an I2C device (as exposed by our bit-banging I2C driver). VESA DDC is a standard that specifies a protocol on top of I2C for obtaining monitor information (supported resolutions and timings, color gamma, etc.). >> If none of this is underway I'll continue mapping things out so we can >> create the device tree for the micro-desktop. If I remember correctly >> we also should create a device tree for the DS-113 v2.7.4 and v2.7.5? > > What is DS-113? EOMA68-A20 the CPU card. From lkcl at lkcl.net Sat Mar 3 08:04:56 2018 From: lkcl at lkcl.net (Luke Kenneth Casson Leighton) Date: Sat, 3 Mar 2018 08:04:56 +0000 Subject: [Arm-netbook] Testing: GPIO In-Reply-To: References: <62945838-BAC9-4DF1-8EAA-C974292C82C9@gmail.com> <20180302224342.gfr2i76rxyu6btaw@latitude> Message-ID: --- crowd-funded eco-conscious hardware: https://www.crowdsupply.com/eoma68 On Sat, Mar 3, 2018 at 7:11 AM, Richard Wilbur wrote: > VESA DDC is a standard that specifies a protocol on top of I2C for obtaining monitor information (supported resolutions and timings, color gamma, etc.). https://stackoverflow.com/questions/5065159/reading-edid-from-eeprom so the relevant linux kernel video driver is capable of reading EDID data... the thing that i know for a fact will be missing is that nobody will have done EDID-to-A20 video timings conversion in the linux kernel. it's all hard-coded. l. From pablo at parobalth.org Sat Mar 3 19:53:36 2018 From: pablo at parobalth.org (Pablo Rath) Date: Sat, 3 Mar 2018 20:53:36 +0100 Subject: [Arm-netbook] Testing: GPIO In-Reply-To: References: <62945838-BAC9-4DF1-8EAA-C974292C82C9@gmail.com> <20180302224342.gfr2i76rxyu6btaw@latitude> <20180303024448.xoverpx4ipl7tbo5@latitude> Message-ID: <20180303195336.n3upz22ywgiubfob@cherry> On Sat, Mar 03, 2018 at 03:46:33AM +0000, Luke Kenneth Casson Leighton wrote: > On Sat, Mar 3, 2018 at 2:44 AM, Jonathan Neuschäfer > wrote: ... 
> > About DT fragments: I'm not sure what you mean exactly. Mainline support > > devicetree overlays which should do (half of) the job for EOMA68, though. > > ah, yes, that's the official name. overlays. > > question: do you know if they've added the patches to *REMOVE* > overlays yet? Cards could potentially be dynamically removed... or at > least put into sleep / suspend only to wake up with a totally > different Housing. The whole DT overlay discussion rang a bell somewhere in my brain and now I had time to look it up. I read about a DT overlay hack here: https://joeyh.name/blog/entry/easy-peasy-devicetree-squeezy/ Please read the README for details. Can the above hack be of some use to the EOMA project until "patches to fully support overlays, including loading them on the fly into a running system" are mainlined? I can not answer the question myself because I am not into DT overlays. It seems we are not the only one with DT overlay problems: https://elinux.org/BeagleBone_and_the_3.8_Kernel#Cape_Manager_and_Device_Tree_Overlays kind regards Pablo From lkcl at lkcl.net Sat Mar 3 20:11:27 2018 From: lkcl at lkcl.net (Luke Kenneth Casson Leighton) Date: Sat, 3 Mar 2018 20:11:27 +0000 Subject: [Arm-netbook] Testing: GPIO In-Reply-To: <20180303195336.n3upz22ywgiubfob@cherry> References: <62945838-BAC9-4DF1-8EAA-C974292C82C9@gmail.com> <20180302224342.gfr2i76rxyu6btaw@latitude> <20180303024448.xoverpx4ipl7tbo5@latitude> <20180303195336.n3upz22ywgiubfob@cherry> Message-ID: --- crowd-funded eco-conscious hardware: https://www.crowdsupply.com/eoma68 On Sat, Mar 3, 2018 at 7:53 PM, Pablo Rath wrote: > On Sat, Mar 03, 2018 at 03:46:33AM +0000, Luke Kenneth Casson Leighton wrote: >> On Sat, Mar 3, 2018 at 2:44 AM, Jonathan Neuschäfer >> wrote: > > ... > >> > About DT fragments: I'm not sure what you mean exactly. Mainline support >> > devicetree overlays which should do (half of) the job for EOMA68, though. >> >> ah, yes, that's the official name. overlays. >> >> question: do you know if they've added the patches to *REMOVE* >> overlays yet? Cards could potentially be dynamically removed... or at >> least put into sleep / suspend only to wake up with a totally >> different Housing. > > The whole DT overlay discussion rang a bell somewhere in my brain and > now I had time to look it up. I read about a DT overlay hack here: > https://joeyh.name/blog/entry/easy-peasy-devicetree-squeezy/ > Please read the README for details. > Can the above hack be of some use to the EOMA project until "patches to > fully support overlays, including loading them on the fly into a running > system" are mainlined? as long as people are happy to have the linux kernel source tarball on their system... yes. and they are happy not to have 100% working hardware. > It seems we are not the only one with DT overlay problems: > https://elinux.org/BeagleBone_and_the_3.8_Kernel#Cape_Manager_and_Device_Tree_Overlays yyup. and they don't have the dynamic removal. l. 
From j.neuschaefer at gmx.net Sun Mar 4 00:20:39 2018 From: j.neuschaefer at gmx.net (Jonathan =?utf-8?Q?Neusch=C3=A4fer?=) Date: Sun, 4 Mar 2018 01:20:39 +0100 Subject: [Arm-netbook] Devicetree (was: Re: Testing: GPIO) In-Reply-To: References: <62945838-BAC9-4DF1-8EAA-C974292C82C9@gmail.com> <20180302224342.gfr2i76rxyu6btaw@latitude> <20180303024448.xoverpx4ipl7tbo5@latitude> Message-ID: <20180304002039.anscrvymhxzngnvo@latitude> On Sat, Mar 03, 2018 at 03:46:33AM +0000, Luke Kenneth Casson Leighton wrote: > On Sat, Mar 3, 2018 at 2:44 AM, Jonathan Neuschäfer > wrote: > > >> excellent, can you look up the status of A20 and the devicetree fragments? > > > > There has been some work on HDMI on the A10/A20 in October (merged in > > 4.15): https://www.spinics.net/lists/devicetree/msg198941.html > > Reportedly, HDMI works now. I'm not sure what else was/is missing. > > the *full* hardware set is needed. except for NAND, which has been > removed from 2.7.5. I understand. But I'm not deeply familiar with the A20 and don't have one around for testing, so I don't know how close we are to that goal. > > About DT fragments: I'm not sure what you mean exactly. Mainline supports > > devicetree overlays which should do (half of) the job for EOMA68, though. Turns out I was wrong about this. Mainline supports the bare core functionality to apply/unapply DT overlays, but it doesn't expose this functionality to userspace. > ah, yes, that's the official name. overlays. > > question: do you know if they've added the patches to *REMOVE* > overlays yet? Cards could potentially be dynamically removed... or at > least put into sleep / suspend only to wake up with a totally > different Housing. > > > The tricky part would be figuring out how the same overlay can be used > > on base devicetrees for different SoCs, as the exposed busses will have > > different names. > > ... i'm not sure what you're referring to, here. do you have an example? Let's say you have an expansion board that connects to a pair of UART pins and has a bluetooth module on it (simplifying here, because EOMA68 is more complex than necessary for this example). On A20 you might want to use the UART controller at 0x1c28800 (just an example), which has the label uart2. But on RK3399 you might want to use the UART at 0xff180000, labeled uart0. Now the overlay for A20 would look something like this: /plugin/; / { ... fragment@0 { target = <&uart2>; __overlay__ { bluetooth { compatible = "brcm,bcm43438-bt"; max-speed = <921600>; }; }; }; }; But for RK3399, you'd have to change that to target = <&uart0>. >> > This may be solved by a future iteration of this >> > patchset: https://www.spinics.net/lists/kernel/msg2710913.html >> >> there's nothing mentioned about what standard he's referring to. is >> there a link, or can you do a quick summary? This snippet? "It would be good to have a way to expose #-cells types of providers through a connector in a standard way." (That's the only occurrence of "standard" on that page) This work is about coming up with a convention (a "standard" in the general sense) to express the remapping of, at first, GPIOs in DT to give them consistent names from the point of view of an expansion interface. Or did you mean something else by "what standard he's referring to"? > > The other side of the DT job is the dynamic loading of a devicetree > overlay based on the EEPROM of the connected housing. > > yes that's correct.
> > > Mainline Linux > > doesn't have something like capemgr[1], AFAIK; But I think this could > > also be handled in userspace. > > it has to be handled in u-boot (at least partially). Why in u-boot? u-boot won't be around to do something when the card is hot-plugged, right? > > And then there's the question of how the kernel/userspace is supposed to > > know on which i2c bus it finds the EOMA68 EEPROM. > > good point. the bus utilised will be in the Card's devicetree file: > it will have to be named as such. (Somewhat answering my own question:) I guess there could be something like a DT node that collects all the pieces that go into the EOMA68 interface: eoma68: connector { compatible = "eoma,eoma68-connector"; gpios = <&gpio0 0 &gpio2 14 ... >; i2c = <&i2c3>; spi = <&spi4>; ... }; And then the overlays could be written to always attach to &eoma68. Thanks, Jonathan --- Because it might be of interest to some who aren't familiar with devicetree source syntax, let's go over that example line by line: eoma68: connector { A new node is defined. Nodes start with a name and an opening curly bracket. The node is named "connector". Its path would be /.../connector, depending on the names of the outer nodes. A label "eoma68" points at this node, so the node can be referenced in other places without using the full path. compatible = "eoma,eoma68-connector"; This is a property of the connector node. It only has a name and a value. Every property and node is terminated with a semicolon. The "compatible" property tells the OS that the device described by this node is an EOMA68 connector, and allows the right drivers to be selected. The syntax of a "compatible" string is vendor,device. i2c = <&i2c3>; This property's value is a "phandle" (property handle). It points at the node with the label "i2c3". gpios = <&gpio0 0 &gpio2 14 ... >; In this property, we see pairs of data: A phandle pointing at a GPIO controller, and a plain number, representing the GPIO pin within that GPIO controller. spi = <&spi4>; ... More properties. }; This is the end of the node. The meaning of all these properties is specified in a devicetree "binding". An example of a DT binding is that of the Allwinner A10's SPI controller, which is also used in the A20: https://www.kernel.org/doc/Documentation/devicetree/bindings/spi/spi-sun4i.txt There's also a devicetree specification, available at: https://www.devicetree.org/specifications/ From j.neuschaefer at gmx.net Sun Mar 4 01:11:22 2018 From: j.neuschaefer at gmx.net (Jonathan =?utf-8?Q?Neusch=C3=A4fer?=) Date: Sun, 4 Mar 2018 02:11:22 +0100 Subject: [Arm-netbook] Testing: GPIO In-Reply-To: References: <62945838-BAC9-4DF1-8EAA-C974292C82C9@gmail.com> <20180302224342.gfr2i76rxyu6btaw@latitude> Message-ID: <20180304011122.i67getrx7cnlycnp@latitude> On Sat, Mar 03, 2018 at 12:11:02AM -0700, Richard Wilbur wrote: > On Mar 2, 2018, at 15:43, Jonathan Neuschäfer wrote: > >> On Fri, Mar 02, 2018 at 03:26:32PM -0700, Richard Wilbur wrote: > > […] > > I'm not actively working on any of this, but I'm interested in the > > devicetree side of things. > > To what does your interest in devicetree extend? Are you interested > in helping create the devicetree mappings for EOMA68 hardware, or > following developments, etc. How would you like to be involved? > Thank you for imparting your knowledge below. Following the development and discussions like this, and offering some input every now and then. I might also write some small kernel patches. 
> >> From what I'm hearing, once the device tree is ready we could work on > >> "automagically" configuring the VESA DDC driver to bit-bang the > >> correct GPIO pins. Does the bit-banging VESA DDC driver exist already? > >> (I wrote a bit-banging I2C driver in VxWorks at a previous position so > >> the topic is not foreign.) > > > > Mainline Linux has a driver[1] for I2C-over-GPIO. It's been there since > > 2.6.22, albeit initially without devicetree support, which came in 3.4. > > > > There's also a generic devicetree binding[2] for I2C-over-GPIO in > > Linux's Documentation/devicetree/bindings directory. > > That's wonderful news! So with the devicetree for the micro-desktop we should be able to setup the I2C driver. Next question: has anyone created a VESA DDC driver that will get the EDID from any connected monitor given when given access to an I2C device (as exposed by our bit-banging I2C driver). > > VESA DDC is a standard that specifies a protocol on top of I2C for obtaining monitor information (supported resolutions and timings, color gamma, etc.). > > >> If none of this is underway I'll continue mapping things out so we can > >> create the device tree for the micro-desktop. If I remember correctly > >> we also should create a device tree for the DS-113 v2.7.4 and v2.7.5? The devicetree for the CPU card should be relatively straight-forward, at least the parts that don't involve the EOMA68 connector. And for the connector and what's beyond, I see two solutions: Short-term solution: Write an A20-specific DT snippet, either as an overlay that's loaded by u-boot (this will preclude hot-plug of the card into a different housing). Long-term solution: Work with the mainline kernel folks to create something that lets us represent SoC-independent connectors properly, and also implement DTo loading based on the config EEPROM. > > > > What is DS-113? > > EOMA68-A20 the CPU card. Aaah! Thanks. Jonathan From lkcl at lkcl.net Sun Mar 4 03:28:46 2018 From: lkcl at lkcl.net (Luke Kenneth Casson Leighton) Date: Sun, 4 Mar 2018 03:28:46 +0000 Subject: [Arm-netbook] Testing: GPIO In-Reply-To: <20180304011122.i67getrx7cnlycnp@latitude> References: <62945838-BAC9-4DF1-8EAA-C974292C82C9@gmail.com> <20180302224342.gfr2i76rxyu6btaw@latitude> <20180304011122.i67getrx7cnlycnp@latitude> Message-ID: On Sun, Mar 4, 2018 at 1:11 AM, Jonathan Neuschäfer wrote: > Long-term solution: > Work with the mainline kernel folks to create something that lets us > represent SoC-independent connectors properly, and also implement DTo > loading based on the config EEPROM. beaglebone looks like they've implemented something like this already. question is, is it already in u-boot as well. l. From lkcl at lkcl.net Sun Mar 4 03:41:39 2018 From: lkcl at lkcl.net (Luke Kenneth Casson Leighton) Date: Sun, 4 Mar 2018 03:41:39 +0000 Subject: [Arm-netbook] Devicetree (was: Re: Testing: GPIO) In-Reply-To: <20180304002039.anscrvymhxzngnvo@latitude> References: <62945838-BAC9-4DF1-8EAA-C974292C82C9@gmail.com> <20180302224342.gfr2i76rxyu6btaw@latitude> <20180303024448.xoverpx4ipl7tbo5@latitude> <20180304002039.anscrvymhxzngnvo@latitude> Message-ID: On Sun, Mar 4, 2018 at 12:20 AM, Jonathan Neuschäfer wrote: > Let's say you have an expansion board that connects to a pair of UART > pins and has a bluetooth module on it (simplifying here, because EOMA68 > is more complex than necessary for this example). > > On A20 you might want to use the UART controller at 0x1c28800 (just an > example), which has the label uart2. 
But on RK3399 you might want to use > the UART at 0xff180000, labeled uart0. Now the overlay for A20 would > look something like this: > > /plugin/; > / { > ... > > fragment at 0 { > target = <&uart2>; > __overlay__ { > bluetooth { > compatible = "brcm,bcm43438-bt"; > max-speed = <921600>; > }; > }; > }; > }; > > But for RK3399, you'd have to change that to target = <&uart0>. ok so i thought that was taken care of by using numerical ids. you'd then place the main device-tree definition in a reference. i'm sure this can be taken care of by having a definition of EOMA68 (by name) where things like <&uart0> are placed *into* that definition on a per-CPU (per Card) basis. so you would just have different "implementations" of the EOMA68 Standard pinouts in each Card dtb. >> > This may be solved by a future iteration of this >> > patchset: https://www.spinics.net/lists/kernel/msg2710913.html >> >> there's nothing mentioned about what standard he's referring to. is >> there a link, or can you do a quick summary? > > This snippet? "It would be good to have a way to expose #-cells > types of providers through a connector in a standard way." > (That's the only occurence of "standard" on that page) > > This work is about coming up with a convention (a "standard" in the > general sense) to express the remapping of, at first, GPIOs in DT to > give them consistent names from the point of view of an expansion > interface. > > Or did you mean something else by "what standard he's referring to"? apologies, i just don't have any context... and damnit i have such a lot else i'm doing at the moment, on deadlines, that i'm not able to fully focus >> > The other side of the DT job is the dynamic loading of a devicetree >> > overlay based on the EEPROM of the connected housing. >> >> yes that's correct. >> >> > Mainline Linux >> > doesn't have something like capemgr[1], AFAIK; But I think this could >> > also be handled in userspace. >> >> it has to be handled in u-boot (at least partially). > > Why in u-boot? u-boot won't be around to do something when the card is > hot-plugged, right? u-boot may need to know that it has to pull up certain GPIOs in order to switch on the LCD so that people can choose what to do. it may need to know *that* there is an LCD (and what type). all that information can only be safely obtained by identifying the Housing through its EEPROM ID. >> > And then there's the question of how the kernel/userspace is supposed to >> > know on which i2c bus it finds the EOMA68 EEPROM. >> >> good point. the bus utilised will be in the Card's devicetree file: >> it will have to be named as such. > > (Somewhat answering my own question:) > I guess there could be something like a DT node that collects all the > pieces that go into the EOMA68 interface: > > eoma68: connector { > compatible = "eoma,eoma68-connector"; > gpios = <&gpio0 0 > &gpio2 14 > ... >; > i2c = <&i2c3>; > spi = <&spi4>; > ... > }; > > And then the overlays could be written to always attach to &eoma68. yes. that's how i imagined it would work. l. 
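---
Tying the last two messages together: once each Card's base devicetree provides the "eoma68" connector label (with its own SoC-specific phandles filled in, per Card), the bluetooth overlay from the earlier example no longer needs to know whether the UART behind the connector is uart2 on an A20 or uart0 on an RK3399. This is a sketch only, with two caveats: no "eoma,eoma68-connector" binding exists yet, and the kernel-side remapping from the connector node to the real UART is exactly what the patchset linked above would have to provide.

    /dts-v1/;
    /plugin/;

    / {
            fragment@0 {
                    /* SoC-independent: &eoma68 is resolved per Card at apply time */
                    target = <&eoma68>;
                    __overlay__ {
                            bluetooth {
                                    compatible = "brcm,bcm43438-bt";
                                    max-speed = <921600>;
                            };
                    };
            };
    };

The same compiled .dtbo would then apply unchanged to any Card whose base .dtb was built with symbols (dtc -@), which is the point of the exercise.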
From richard.wilbur at gmail.com Sun Mar 4 03:55:17 2018 From: richard.wilbur at gmail.com (Richard Wilbur) Date: Sat, 3 Mar 2018 20:55:17 -0700 Subject: [Arm-netbook] Testing: GPIO In-Reply-To: References: <62945838-BAC9-4DF1-8EAA-C974292C82C9@gmail.com> <20180302224342.gfr2i76rxyu6btaw@latitude> Message-ID: On Sat, Mar 3, 2018 at 12:11 AM, Richard Wilbur wrote: > Has anyone created a VESA DDC driver that will get the EDID from any connected monitor given when given access to an I2C device (as exposed by our bit-banging I2C driver). > > VESA DDC is a standard that specifies a protocol on top of I2C for obtaining monitor information (supported resolutions and timings, color gamma, etc.). Looks like ddcutil[*] provides (reasonably up-to-date) VESA DDC functionality. Reference: [*] http://www.ddcutil.com/ From maillist_arm-netbook at aross.me Tue Mar 6 16:24:45 2018 From: maillist_arm-netbook at aross.me (Alexander Ross) Date: Tue, 6 Mar 2018 16:24:45 +0000 Subject: [Arm-netbook] OT: China 12 LED RGBW Par Firmware? Message-ID: <18465bcf-20ae-ed5d-800c-37edf90130ff@aross.me> off topic, idk where else to ask. Photos and additional info about these 12 led rgbw par lights: https://joindiaspora.com/posts/d4c27f80d4c40135bebf0242ac110007 By chance would anyone know where firmware for these cheap china led pars might be? cus i have a few but they all have different firmware. i have 2 versions of the IR remote edition and a non-remote one. I might just get a few more of just single/same edition and be done with it but if i could flash them all with the same firmware that would be great! quite brilliant these little things. floss firmware would make huge potential :). Hoping someone in china might have a clue. I guess the firmware is proprietary or unknown dev but ever hopeful me is :). From laserhawk64 at gmail.com Mon Mar 12 20:44:13 2018 From: laserhawk64 at gmail.com (Christopher Havel) Date: Mon, 12 Mar 2018 16:44:13 -0400 Subject: [Arm-netbook] Well, this is interesting... Message-ID: Neither ARM nor netbook, strictly speaking, but we've veered off topic before, so gee why not... https://hackaday.com/2018/03/12/new-guts-make-old-thinkpads-new/ Looks vaguely relevant. Chris From desttinghimgame at gmail.com Mon Mar 12 22:04:45 2018 From: desttinghimgame at gmail.com (Louis Pearson) Date: Mon, 12 Mar 2018 17:04:45 -0500 Subject: [Arm-netbook] Well, this is interesting... In-Reply-To: References: Message-ID: On Mon, Mar 12, 2018 at 3:44 PM, Christopher Havel wrote: > Neither ARM nor netbook, strictly speaking, but we've veered off topic > before, so gee why not... > > https://hackaday.com/2018/03/12/new-guts-make-old-thinkpads-new/ > > Looks vaguely relevant. > > Chris > _______________________________________________ > arm-netbook mailing list arm-netbook at lists.phcomp.co.uk > http://lists.phcomp.co.uk/mailman/listinfo/arm-netbook > Send large attachments to arm-netbook at files.phcomp.co.uk That is very impressive! I'd rather get the laptop chassis for the eoma68 for that price though. 
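---
Going back to the VESA DDC question: the generic I2C-over-GPIO binding mentioned earlier is enough to expose the bit-banged DDC bus to Linux, after which ddcutil (or plain i2cdetect) can talk to the monitor through the resulting /dev/i2c-N device. A sketch for the micro-desktop's devicetree follows; "i2c-gpio" is the real mainline compatible string, but the PH0/PH1 pin choices are placeholders, since the actual VESA_SDA/VESA_SCL routing has to be read off the micro-desktop schematic.

    /* pins are placeholders; GPIO_ACTIVE_HIGH and GPIO_OPEN_DRAIN come from
       <dt-bindings/gpio/gpio.h> */
    i2c_ddc: i2c-ddc {
            compatible = "i2c-gpio";
            sda-gpios = <&pio 7 0 (GPIO_ACTIVE_HIGH | GPIO_OPEN_DRAIN)>; /* PH0, placeholder */
            scl-gpios = <&pio 7 1 (GPIO_ACTIVE_HIGH | GPIO_OPEN_DRAIN)>; /* PH1, placeholder */
            i2c-gpio,delay-us = <5>;        /* roughly 100 kHz */
            #address-cells = <1>;
            #size-cells = <0>;
    };

Once that node probes, the adapter shows up as an ordinary I2C bus, so ddcutil can read the EDID from userspace; feeding the parsed EDID back into the A20 video timings, as noted earlier in the thread, is a separate and still-missing piece.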
From mike.valk at gmail.com Tue Mar 13 09:35:19 2018 From: mike.valk at gmail.com (mike.valk at gmail.com) Date: Tue, 13 Mar 2018 10:35:19 +0100 Subject: [Arm-netbook] Fwd: Earth-friendly EOMA68 Computing Devices Update: FOSDEM Recap, RISC-V Update, and Certification Marks In-Reply-To: <30556267.20180312192142.5aa6d34619d4e9.11930436@mail187-23.suw11.mandrillapp.com> References: <30556267.20180312192142.5aa6d34619d4e9.11930436@mail187-23.suw11.mandrillapp.com> Message-ID: ---------- Forwarded message ---------- From: Crowd Supply Date: 2018-03-12 20:21 GMT+01:00 Subject: Earth-friendly EOMA68 Computing Devices Update: FOSDEM Recap, RISC-V Update, and Certification Marks To: mike.valk at gmail.com [image: Crowd Supply] An update from the Earth-friendly EOMA68 Computing Devices team. FOSDEM Recap, RISC-V Update, and Certification Marks Thank you to absolutely everyone who came to FOSDEM. RISC-V gets interesting. A reminder that EOMA68 is a Certification Mark. EOMA68-A20 card supply chain progress. Read the Full Update Want to unsubscribe from these emails? Manage your email preferences here. Shipping Returns Terms Privacy Policy Contact Crowd Supply 340 SE 6th Ave Portland, OR 97214 800-554-2014 From desttinghimgame at gmail.com Tue Mar 13 16:12:40 2018 From: desttinghimgame at gmail.com (Louis Pearson) Date: Tue, 13 Mar 2018 11:12:40 -0500 Subject: [Arm-netbook] Fwd: Earth-friendly EOMA68 Computing Devices Update: FOSDEM Recap, RISC-V Update, and Certification Marks In-Reply-To: References: <30556267.20180312192142.5aa6d34619d4e9.11930436@mail187-23.suw11.mandrillapp.com> Message-ID: Woah, that's not an easy to read email on a text-only mailing list. This is just a crowd-supply update email forwarded through the list, and the only important part is this link: https://www.crowdsupply.com/eoma68/micro-desktop/updates/fosdem-recap-risc-v-update-and-certification-marks From lkcl at lkcl.net Tue Mar 13 16:53:32 2018 From: lkcl at lkcl.net (Luke Kenneth Casson Leighton) Date: Tue, 13 Mar 2018 16:53:32 +0000 Subject: [Arm-netbook] Fwd: Earth-friendly EOMA68 Computing Devices Update: FOSDEM Recap, RISC-V Update, and Certification Marks In-Reply-To: References: <30556267.20180312192142.5aa6d34619d4e9.11930436@mail187-23.suw11.mandrillapp.com> Message-ID: On Tue, Mar 13, 2018 at 4:12 PM, Louis Pearson wrote: > Woah, that's not an easy to read email on a text-only mailing list. yyup. i deliberately switched off HTML precisely so that things like that become instantly and blindingly obvious. many messages with "rich" text hide links that you can't see... or worse try to hide links the sender *doesn't want* you to see. l. From vkontogpls at gmail.com Wed Mar 14 21:25:42 2018 From: vkontogpls at gmail.com (Bill Kontos) Date: Wed, 14 Mar 2018 23:25:42 +0200 Subject: [Arm-netbook] Well, this is interesting... In-Reply-To: References: Message-ID: On Tue, Mar 13, 2018 at 12:04 AM, Louis Pearson wrote: > On Mon, Mar 12, 2018 at 3:44 PM, Christopher Havel > wrote: > >> Neither ARM nor netbook, strictly speaking, but we've veered off topic >> before, so gee why not... >> >> https://hackaday.com/2018/03/12/new-guts-make-old-thinkpads-new/ >> >> Looks vaguely relevant. Those are some awesome machines. r/thinkpad went nuts on the x62 when it got released. > > That is very impressive! I'd rather get the laptop chassis for the > eoma68 for that price though. There are a lot of people who swear by thinkpad keyboards and the lenovo trackpoint. 
These machines are amazingly built for what they are: the ports fit right into the old holes and the thermals are actually really, really good. Very well designed, and if I recall correctly there were some fan noise issues on the early editions which got fixed later on. A question that popped into my mind though: we do know that newer nodes up to finFET give better perf/dollar when used at scale, so explaining why an sbc like the rpi is possible today is easy. Has the same happened in pcb fabrication? This is a very high end board that costs $700 and includes the cpu, which retails for several hundred bucks. I'm surprised at the cost at which they made these. From eaterjolly at gmail.com Tue Mar 20 18:15:04 2018 From: eaterjolly at gmail.com (Jean Flamelle) Date: Tue, 20 Mar 2018 14:15:04 -0400 Subject: [Arm-netbook] Urgent statement on Cryptocurrency ethics Message-ID: This is difficult to express so please bear patience. Able manipulators of money do exploit the interest in cryptocurrency to affect the prices thereof, and purchase and sell cryptocurrency, increasing their stockpile of national and international currencies while sustaining their supply of cryptocurrency. The practice of buying low and selling high is generally accepted, but I refute it. Not only is there no practical way of proving whether they manipulated the price, but buying low and selling high can only be possible if someone somewhere tries to change the price of whatever their trade includes. This means the price of bitcoin should be decided by netizens across internet forums, and some powerful actor is ensuring that can't happen. As a store of economic influence, I fully agree and support the decision to hold donated bitcoin. However, to convert it by means of trade into any other currency, while what is happening is happening, is something I can't stomach morally. Individuals who are getting tricked into purchasing cryptocurrency at exactly the wrong moments are losing all they have. This is not simply gambling with high stakes; there is a concerted effort to deceive people into purchasing bitcoin and other currencies. If anyone sells cryptocurrency now, they very likely risk funding these scams. Please do not sell donated bitcoin or any other donated cryptocurrency for the foreseeable future. This is not a permanent problem, but we can't know when it will be resolved. Thank you. -J. L. From lkcl at lkcl.net Tue Mar 20 18:19:28 2018 From: lkcl at lkcl.net (Luke Kenneth Casson Leighton) Date: Tue, 20 Mar 2018 18:19:28 +0000 Subject: [Arm-netbook] Urgent statement on Cryptocurrency ethics In-Reply-To: References: Message-ID: On Tue, Mar 20, 2018 at 6:15 PM, Jean Flamelle wrote: > This is difficult to express so please bear patience. Able > manipulators of money do exploit the interest in cryptocurrency to > affect the prices thereof, yep, i know. there's no way to regulate or prevent the blatant insider trading and pump-and-dump scams. interestingly, mining is inviolate. l. From lasich at gmail.com Tue Mar 20 18:45:15 2018 From: lasich at gmail.com (Hrvoje Lasic) Date: Tue, 20 Mar 2018 19:45:15 +0100 Subject: [Arm-netbook] Urgent statement on Cryptocurrency ethics In-Reply-To: References: Message-ID: On 20 March 2018 at 19:15, Jean Flamelle wrote: > This is difficult to express so please bear patience.
Able > manipulators of money do exploit the interest in cryptocurrency to > affect the prices thereof, purchase and sell cryptocurrency increasing > their stockpile of national and international currencies while > sustaining their supply of cryptocurrency. The practice of buying low > and selling high, is generally accepted, but I refute it. Not only > does no way of proving if they manipulated the price practical, but > buying low and selling high can only be possible if someone somewhere > tries to change the price of whatever their trade includes. This means > the price of bitcoin should be decided by netizens across internet > forums and some powerful actor is ensuring that can't happen. > As a store of economic influence, I fully agree and support > the decision to hold donated bitcoin. However to convert it by means > of trade into any other currency, while what is happening is > happening, I can't stomach this morally. Individuals who are getting > tricked into purchasing cryptocurrency at exactly the wrong moments > are losing all the have. This is not simply gambling with high-stakes, > there is a concerted effort to deceive people into purchasing bitcoin > and other currencies. If anyone sells cryptocurrency now, they very > likely risk funding these scams. > Please do not sell donated bitcoin or any other donated > cryptocurrency for the foresee-able future. > This is not a permanent problem, but one we can't know when > will be resolved. > > I think this complete idea has gone very much into wrong direction and it is not going to end well. Cryptos are worth billions but you cant buy pizza. Blockchain is bullet proof technology that is taking notice of any transaction so it is impossible to take money, but it is easy to hack account where people actually hold money, most probably by bank owners them self. Then they say `we are sorry`. But since all this is unregulated, nobody is responsible. Also, the guys with most crypto-money are now most probably calling for regulation so they hope that cryptos will be used for transactions (not only speculating like now is the case) and will be able to hold some of `values` they have in speculating cryptos. Meanwhile, there are very powerful players who actually try to sell as much as possible and in the same time keep hype so more suckers keep buying. Meantime no or very few actual business are build on blockchain idea and I know few that are trying but it is quite slow process for many reasons (regulation, taxes, old business etc.)... > Thank you. > -J. L. > > _______________________________________________ > arm-netbook mailing list arm-netbook at lists.phcomp.co.uk > http://lists.phcomp.co.uk/mailman/listinfo/arm-netbook > Send large attachments to arm-netbook at files.phcomp.co.uk From phil at hands.com Tue Mar 20 20:44:18 2018 From: phil at hands.com (Philip Hands) Date: Tue, 20 Mar 2018 21:44:18 +0100 Subject: [Arm-netbook] Urgent statement on Cryptocurrency ethics In-Reply-To: References: Message-ID: <87efkecw9p.fsf@hands.com> Luke Kenneth Casson Leighton writes: > On Tue, Mar 20, 2018 at 6:15 PM, Jean Flamelle wrote: >> This is difficult to express so please bear patience. Able >> manipulators of money do exploit the interest in cryptocurrency to >> affect the prices thereof, > > yep, i know. there's no way to regulate or prevent the blatant > insider trading and pump-and-dump scams. interestingly, mining is > inviolate. 
I've no idea why you think that -- it seems to me rather like saying that farming poppies is automatically ethical, regardless of whether you expect anyone to harvest the crop and perhaps sell it to people who then profit and spend the resulting income on weapons, say. Anyway, never mind that -- this seems timely: https://www.theguardian.com/technology/2018/mar/20/child-abuse-imagery-bitcoin-blockchain-illegal-content erm, oops! Cheers, Phil. -- |)| Philip Hands [+44 (0)20 8530 9560] HANDS.COM Ltd. |-| http://www.hands.com/ http://ftp.uk.debian.org/ |(| Hugo-Klemm-Strasse 34, 21075 Hamburg, GERMANY From lkcl at lkcl.net Tue Mar 20 20:59:56 2018 From: lkcl at lkcl.net (Luke Kenneth Casson Leighton) Date: Tue, 20 Mar 2018 20:59:56 +0000 Subject: [Arm-netbook] Urgent statement on Cryptocurrency ethics In-Reply-To: <87efkecw9p.fsf@hands.com> References: <87efkecw9p.fsf@hands.com> Message-ID: On Tue, Mar 20, 2018 at 8:44 PM, Philip Hands wrote: > Luke Kenneth Casson Leighton writes: > >> On Tue, Mar 20, 2018 at 6:15 PM, Jean Flamelle wrote: >>> This is difficult to express so please bear patience. Able >>> manipulators of money do exploit the interest in cryptocurrency to >>> affect the prices thereof, >> >> yep, i know. there's no way to regulate or prevent the blatant >> insider trading and pump-and-dump scams. interestingly, mining is >> inviolate. > > I've no idea why you think that -- it seems to me rather like saying > that farming poppies is automatically ethical, regardless of whether you > expect anyone to harvest the crop and perhaps sell it to people who > then profit and spend the resulting income on weapons, say. there's a key difference [or there was until those abuse-links were noted...] which is that the transactions are [or were] "neutral". the "money" (the mining reward) was literally created out of thin air, i.e. was not being received as part of a transaction from criminals, not being received as part of drug-dealing, or in exchange for a contract on someone's life or anything else clearly unethical... *and* in addition [up until those abuse-links were noted] there was no way to know if the transaction(s) were quotes good quotes or quotes bad quotes. even _with_ such links (which people will now have to add filters into crypto-mining algorithms in order to discard them), you can clearly see that anyone putting such links is "bad" (and choose not to include them in a block being mined) however for everything else not so identified they *are* unidentifiable. that lack of identifiability makes mining "neutral" rather than "specifically good" or "specifically bad". > Anyway, never mind that -- this seems timely: > > https://www.theguardian.com/technology/2018/mar/20/child-abuse-imagery-bitcoin-blockchain-illegal-content > > erm, oops! sigh yehhh... and they paid transaction fees to put them there. main problem is, distribution of or ownership of bitcoin has now become illegal in many countries... whoops... l. From eaterjolly at gmail.com Wed Mar 21 04:52:45 2018 From: eaterjolly at gmail.com (Jean Flamelle) Date: Wed, 21 Mar 2018 00:52:45 -0400 Subject: [Arm-netbook] Urgent statement on Cryptocurrency ethics In-Reply-To: References: <87efkecw9p.fsf@hands.com> Message-ID: That link goes back to the fundamental concept of illegal numbers. Anywhere a number can be written, illegal data may be written. Anywhere a number can be written immutably, illegal data may be written immutably. 
This also gets at a fundamental issue of ethics of censorship, as censorship was viewed a few centuries ago: pieces of communication which enable wasted life being disabled & contained. Do these images do that? Can abuse be stopped without censoring images of abuse? === Phil raises an interesting suggestion that what-if blockchain miners weren't neutral? Miners aren't neutral, after all. They just enforce minimal rules. Could technological means be given to enforce cultural rules including with regards to reputation and past economic contributions? What would happen, if so? === @Lasic - There are many large corporations developing blockchain research programs, including banks, microsoft, ibm, institutes funded by grants, etc. This is not a viable complete protocol, but regulators are much more likely begin regulating claims. There is a saying that the code should speak for itself. I think this applies. From wookey at wookware.org Wed Mar 21 12:08:43 2018 From: wookey at wookware.org (Wookey) Date: Wed, 21 Mar 2018 12:08:43 +0000 Subject: [Arm-netbook] Urgent statement on Cryptocurrency ethics In-Reply-To: <87efkecw9p.fsf@hands.com> References: <87efkecw9p.fsf@hands.com> Message-ID: <20180321120843.tjuvulk7i35qjtyw@mail.wookware.org> On 2018-03-20 21:44 +0100, Philip Hands wrote: > https://www.theguardian.com/technology/2018/mar/20/child-abuse-imagery-bitcoin-blockchain-illegal-content There's always someone isn't there? This is why we can't have nice things. I'm quite surprised that the format allows inclusion of random extra files. Why isn't it just a list of transaction IDs (or however it works), i.e. the data needed to make the blockchain work. Or are they just abusing some 'Name' or 'Comment' type freeform field? It seems to me that bitcoin needs to fail on energy terms alone - it's mind-bogglingly inefficent (and, contrary to my initial understanding, this problem doesn't get better over time as more coins are mined). Plenty of other cryptocurrency algorithms exist, using much more sensible amounts of energy (Wh/transaction: bitcoin: 634,000, Ethereum: 43,000, (Visa: 1.69) Stellar: 0.03). The tricky bit is making sure that remains true when the thing gets popular. If those numbers are right bitcoin is generating the same emissions as a flight to Spain from the UK (300kG) _per transaction_ which is completely nuts. Buying a beer in the Haymakers really shouldn't have that sort of footprint. Wookey -- Principal hats: Linaro, Debian, Wookware, ARM http://wookware.org/ From lkcl at lkcl.net Wed Mar 21 13:11:21 2018 From: lkcl at lkcl.net (Luke Kenneth Casson Leighton) Date: Wed, 21 Mar 2018 13:11:21 +0000 Subject: [Arm-netbook] Urgent statement on Cryptocurrency ethics In-Reply-To: <20180321120843.tjuvulk7i35qjtyw@mail.wookware.org> References: <87efkecw9p.fsf@hands.com> <20180321120843.tjuvulk7i35qjtyw@mail.wookware.org> Message-ID: --- crowd-funded eco-conscious hardware: https://www.crowdsupply.com/eoma68 On Wed, Mar 21, 2018 at 12:08 PM, Wookey wrote: > On 2018-03-20 21:44 +0100, Philip Hands wrote: > >> https://www.theguardian.com/technology/2018/mar/20/child-abuse-imagery-bitcoin-blockchain-illegal-content > > There's always someone isn't there? This is why we can't have nice things. *sigh* i would be curious to know if monero (designed with privacy in mind) suffers the same problem. 
https://monero.stackexchange.com/questions/3958/what-is-the-format-of-a-block-in-the-monero-blockchain https://monero.stackexchange.com/questions/5916/why-some-coinbase-transactions-have-very-long-extra-field-and-some-short https://monero.stackexchange.com/questions/3595/how-to-use-tx-extra https://monero.stackexchange.com/questions/2549/why-is-the-payment-id-specified-on-a-per-tx-basis apparently the tx-extra field is used as a payment-id... interestingly though it looks like it's down to the miners to decide that field's contents... which would, if that's correct, take away the problem. > I'm quite surprised that the format allows inclusion of random extra > files. Why isn't it just a list of transaction IDs (or however it > works), i.e. the data needed to make the blockchain work. Or are they > just abusing some 'Name' or 'Comment' type freeform field? https://bitcoin.stackexchange.com/questions/29592/can-you-put-additional-data-in-the-payload https://digitalcommons.augustana.edu/cgi/viewcontent.cgi?article=1000&context=cscfaculty ok so there's apparently about 8 separate and distinct methods, all of them "abusing" various fields, some of them security violations / exploiting flaws in the design. oops. > It seems to me that bitcoin needs to fail on energy terms alone - it's > mind-bogglingly inefficent (and, contrary to my initial understanding, > this problem doesn't get better over time as more coins are > mined). arms / energy races don't get better without consensus, and this one's an arms / energy race where (at the moment) individuals can participate as opposed to businesses. *if* people agree - world-wide - to slow down on the mining *then* the hashrate - world-wide - will slow down. however the only way that's actually realistically likely to happen is when the "reward" drops sufficiently (halves every 18 months) for the financial incentive to lower. *then* when the financial incentive lowers, the number of people mining will also lower (non-financially-viable equipment switched off) and the hashrate will correspondingly drop. the problem with that assumption is that the people mining will only be motivated by profit. if they're genuinely interested in going over the 50% share in order to corrupt / control the blocks then that doesn't happen, and bitcoin goes to hell in a handbasket. > Plenty of other cryptocurrency algorithms exist, using much > more sensible amounts of energy (Wh/transaction: bitcoin: 634,000, > Ethereum: 43,000, (Visa: 1.69) Stellar: 0.03). The tricky bit is > making sure that remains true when the thing gets popular. i've been thinking about this, beyond what monero does (which plans to hard-fork to increase the random-access memory usage). monero is not possible to do on a custom ASIC because it deliberately requires large amounts of memory (6gb, 8gb). if you were to make a custom ASIC it would *be* a GPU... therefore you might as well buy... off-the-shelf GPU Cards. beyond that i think you need to specify and agree hard limits about mining capabilities that, if exceeded, result in PENALTIES not REWARDS. such mining contracts would need to be stored *in the blockchain* rather than being hard-coded. miners would be required to sign up to the mining contract (in the blockchain) in order to participate, along-side a declaration of running some CPU benchmark tests which indicate the computing capacity that they are declaring that they intend to use. 
if they go OUTSIDE of those parameters, by responding too fast on a block (which can be verified by other miners re-running the same algorithms that they did), they get PENALISED *not* REWARDED. if that's combined with the ability to fork a coin *within* a coin (by placing an ID or #tag on transactions that indicate that it is totally separate and completely distinct from other coins *within the sam e blockchain*) the nice thing about such a scheme would be that isolated communities, for example those running off of isolated sporadic internet, on smartphones only rather than having access to GPU resources, could declare the mining contract to be within the capabilities of the average smartphone... and the community as a whole. an "outsider" would then NOT BE ABLE to destabilise their "local" currency. > If those numbers are right bitcoin is generating the same emissions as > a flight to Spain from the UK (300kG) _per transaction_ which is > completely nuts. Buying a beer in the Haymakers really shouldn't have > that sort of footprint. even up until 2016 it didn't... but it does now. and it's an exponential rise of about 20% *per month* and shows no sign of slowing https://blockchain.info/charts/hash-rate l. From desttinghimgame at gmail.com Wed Mar 21 18:24:24 2018 From: desttinghimgame at gmail.com (Louis Pearson) Date: Wed, 21 Mar 2018 13:24:24 -0500 Subject: [Arm-netbook] Urgent statement on Cryptocurrency ethics In-Reply-To: References: <87efkecw9p.fsf@hands.com> <20180321120843.tjuvulk7i35qjtyw@mail.wookware.org> Message-ID: On Wed, Mar 21, 2018 at 8:11 AM, Luke Kenneth Casson Leighton wrote: > --- > crowd-funded eco-conscious hardware: https://www.crowdsupply.com/eoma68 > > > On Wed, Mar 21, 2018 at 12:08 PM, Wookey wrote: > > On 2018-03-20 21:44 +0100, Philip Hands wrote: > > > >> https://www.theguardian.com/technology/2018/mar/20/child- > abuse-imagery-bitcoin-blockchain-illegal-content > > > > There's always someone isn't there? This is why we can't have nice > things. > I believe that grin ( http://grin-tech.org/ ) should be immune to this from the way that its transactions work. From what I remember off the top of my head anyway. I really like the way that the developers are approaching creating a new cryptocurrency. From listmaster at beauxbead.com Wed Mar 21 19:48:53 2018 From: listmaster at beauxbead.com (KRT Listmaster) Date: Wed, 21 Mar 2018 13:48:53 -0600 Subject: [Arm-netbook] Urgent statement on Cryptocurrency ethics In-Reply-To: References: <87efkecw9p.fsf@hands.com> <20180321120843.tjuvulk7i35qjtyw@mail.wookware.org> Message-ID: <33980edc-c5c9-da86-ef74-c816f36afa25@beauxbead.com> Now the discussion is starting to get interesting.... On 03/21/2018 07:11 AM, Luke Kenneth Casson Leighton wrote: [...] > > *if* people agree - world-wide - to slow down on the mining *then* > the hashrate - world-wide - will slow down. however the only way > that's actually realistically likely to happen is when the "reward" > drops sufficiently (halves every 18 months) for the financial > incentive to lower. > > *then* when the financial incentive lowers, the number of people > mining will also lower (non-financially-viable equipment switched off) > and the hashrate will correspondingly drop. > > the problem with that assumption is that the people mining will only > be motivated by profit. if they're genuinely interested in going over > the 50% share in order to corrupt / control the blocks then that > doesn't happen, and bitcoin goes to hell in a handbasket. 
> Exactly, this is something most people fail to understand about mining. There's nothing inherent to the algorithm or the protocol that makes Bitcoin so power-hungry. If the entire network agreed to all shut off the ASICs and run everything on low-powered SBCs running on photovoltaic cells only (for example), the network *itself* would continue to function just fine. The difficulty would adjust itself (eventually, up to two weeks for BTC) to match the processing power available to it. That's the design. When I first came across Bitcoin, the idea was to let your wallet solo-mine all the time in the background. Pools didn't even exist yet when I first tried mining (on an ASUS netbook in rural Peru, no less), or at least I wasn't aware of them. At the time, I compared it to something like SETI at home, which looks for signals from space-brothers as a screen-saver, basically. Everyone mines a little bit, and everyone gets a little bit of the reward. One CPU, one vote. That was the original intent. The problem is greed. Some clever bastard decided "Why limit myself to just a single background process?" That's when dedicated mining rigs started, and the arms race began, even before GPUs came into the picture. > > i've been thinking about this, beyond what monero does (which plans > to hard-fork to increase the random-access memory usage). monero is > not possible to do on a custom ASIC because it deliberately requires > large amounts of memory (6gb, 8gb). if you were to make a custom ASIC > it would *be* a GPU... therefore you might as well buy... > off-the-shelf GPU Cards. > > beyond that i think you need to specify and agree hard limits about > mining capabilities that, if exceeded, result in PENALTIES not > REWARDS. such mining contracts would need to be stored *in the > blockchain* rather than being hard-coded. miners would be required to > sign up to the mining contract (in the blockchain) in order to > participate, along-side a declaration of running some CPU benchmark > tests which indicate the computing capacity that they are declaring > that they intend to use. > > if they go OUTSIDE of those parameters, by responding too fast on a > block (which can be verified by other miners re-running the same > algorithms that they did), they get PENALISED *not* REWARDED. > Yep, the Monero devs are very much dedicated to "egalitarian mining", which I think gets closer to Satoshi's original intent,[1][2] and makes something like javascript browser-based mining possible, which then opens up an entire new set of ethical concerns, even if it's a tiny step towards fair hashrate distribution. [3][4] What are the other options? POS coins take very little energy, but "the rich get richer", and the more coin you have, the more you stake, etc. Not very egalitarian either. Or some insta-premine like Ripple, where I just make a bajillion tokens out of thin air and then start selling them for a tenth of a cent each? Neither of those seem like better distribution schemes to me. So, how do we get newly mined coins into as many hands as possible? Definitely an interesting conundrum. - krt --- [1] https://getmonero.org/2018/02/11/PoW-change-and-key-reuse.html [2] https://cointelegraph.com/news/bitmain-announces-new-monero-mining-antminer-x3-cryptos-devs-say-will-not-work [3] https://arxiv.org/pdf/1803.02887.pdf [4] https://www.theregister.co.uk/2018/02/27/ethical_coinhive/ -- This email account is used for list management only. 
https://strangetimes.observer/ From samhuntress at gmail.com Wed Mar 21 20:58:10 2018 From: samhuntress at gmail.com (Sam Huntress) Date: Wed, 21 Mar 2018 16:58:10 -0400 Subject: [Arm-netbook] Urgent statement on Cryptocurrency ethics In-Reply-To: <33980edc-c5c9-da86-ef74-c816f36afa25@beauxbead.com> References: <87efkecw9p.fsf@hands.com> <20180321120843.tjuvulk7i35qjtyw@mail.wookware.org> <33980edc-c5c9-da86-ef74-c816f36afa25@beauxbead.com> Message-ID: Excessive power consumption by bitcoin miners is an indication that either power is undervalued, bitcoin is overvalued, or both. > On Mar 21, 2018, at 3:48 PM, KRT Listmaster wrote: > > Now the discussion is starting to get interesting.... > > On 03/21/2018 07:11 AM, Luke Kenneth Casson Leighton wrote: > > [...] > >> >> *if* people agree - world-wide - to slow down on the mining *then* >> the hashrate - world-wide - will slow down. however the only way >> that's actually realistically likely to happen is when the "reward" >> drops sufficiently (halves every 18 months) for the financial >> incentive to lower. >> >> *then* when the financial incentive lowers, the number of people >> mining will also lower (non-financially-viable equipment switched off) >> and the hashrate will correspondingly drop. >> >> the problem with that assumption is that the people mining will only >> be motivated by profit. if they're genuinely interested in going over >> the 50% share in order to corrupt / control the blocks then that >> doesn't happen, and bitcoin goes to hell in a handbasket. >> > > > Exactly, this is something most people fail to understand about mining. > There's nothing inherent to the algorithm or the protocol that makes > Bitcoin so power-hungry. If the entire network agreed to all shut off > the ASICs and run everything on low-powered SBCs running on photovoltaic > cells only (for example), the network *itself* would continue to > function just fine. The difficulty would adjust itself (eventually, up > to two weeks for BTC) to match the processing power available to it. > That's the design. > > When I first came across Bitcoin, the idea was to let your wallet > solo-mine all the time in the background. Pools didn't even exist yet > when I first tried mining (on an ASUS netbook in rural Peru, no less), > or at least I wasn't aware of them. At the time, I compared it to > something like SETI at home, which looks for signals from space-brothers as > a screen-saver, basically. Everyone mines a little bit, and everyone > gets a little bit of the reward. One CPU, one vote. That was the > original intent. > > The problem is greed. Some clever bastard decided "Why limit myself to > just a single background process?" That's when dedicated mining rigs > started, and the arms race began, even before GPUs came into the picture. > >> >> i've been thinking about this, beyond what monero does (which plans >> to hard-fork to increase the random-access memory usage). monero is >> not possible to do on a custom ASIC because it deliberately requires >> large amounts of memory (6gb, 8gb). if you were to make a custom ASIC >> it would *be* a GPU... therefore you might as well buy... >> off-the-shelf GPU Cards. >> >> beyond that i think you need to specify and agree hard limits about >> mining capabilities that, if exceeded, result in PENALTIES not >> REWARDS. such mining contracts would need to be stored *in the >> blockchain* rather than being hard-coded. 
miners would be required to >> sign up to the mining contract (in the blockchain) in order to >> participate, along-side a declaration of running some CPU benchmark >> tests which indicate the computing capacity that they are declaring >> that they intend to use. >> >> if they go OUTSIDE of those parameters, by responding too fast on a >> block (which can be verified by other miners re-running the same >> algorithms that they did), they get PENALISED *not* REWARDED. >> > > Yep, the Monero devs are very much dedicated to "egalitarian mining", > which I think gets closer to Satoshi's original intent,[1][2] and makes > something like javascript browser-based mining possible, which then > opens up an entire new set of ethical concerns, even if it's a tiny step > towards fair hashrate distribution. [3][4] > > > What are the other options? POS coins take very little energy, but "the > rich get richer", and the more coin you have, the more you stake, etc. > Not very egalitarian either. Or some insta-premine like Ripple, where I > just make a bajillion tokens out of thin air and then start selling them > for a tenth of a cent each? Neither of those seem like better > distribution schemes to me. So, how do we get newly mined coins into as > many hands as possible? > > Definitely an interesting conundrum. > > - krt > > --- > > [1] https://getmonero.org/2018/02/11/PoW-change-and-key-reuse.html > [2] > https://cointelegraph.com/news/bitmain-announces-new-monero-mining-antminer-x3-cryptos-devs-say-will-not-work > [3] https://arxiv.org/pdf/1803.02887.pdf > [4] https://www.theregister.co.uk/2018/02/27/ethical_coinhive/ > > > -- > This email account is used for list management only. > https://strangetimes.observer/ > > _______________________________________________ > arm-netbook mailing list arm-netbook at lists.phcomp.co.uk > http://lists.phcomp.co.uk/mailman/listinfo/arm-netbook > Send large attachments to arm-netbook at files.phcomp.co.uk From calmstorm at posteo.de Wed Mar 21 21:03:10 2018 From: calmstorm at posteo.de (zap) Date: Wed, 21 Mar 2018 17:03:10 -0400 Subject: [Arm-netbook] Urgent statement on Cryptocurrency ethics In-Reply-To: References: <87efkecw9p.fsf@hands.com> <20180321120843.tjuvulk7i35qjtyw@mail.wookware.org> <33980edc-c5c9-da86-ef74-c816f36afa25@beauxbead.com> Message-ID: >> Yep, the Monero devs are very much dedicated to "egalitarian mining", >> which I think gets closer to Satoshi's original intent,[1][2] and makes >> something like javascript browser-based mining possible, which then >> opens up an entire new set of ethical concerns, even if it's a tiny step >> towards fair hashrate distribution. [3][4] >> >> >> What are the other options? POS coins take very little energy, but "the >> rich get richer", and the more coin you have, the more you stake, etc. >> Not very egalitarian either. Or some insta-premine like Ripple, where I >> just make a bajillion tokens out of thin air and then start selling them >> for a tenth of a cent each? Neither of those seem like better >> distribution schemes to me. So, how do we get newly mined coins into as >> many hands as possible? >> >> Definitely an interesting conundrum. >> >> - krt >> >> --- I think it is safe to say that cryptocurrencies aren't very feasible right now. My thoughts are: Liberapay is a good idea, I don't know if anything else is anywhere near as good. 
>> >> [1] https://getmonero.org/2018/02/11/PoW-change-and-key-reuse.html >> [2] >> https://cointelegraph.com/news/bitmain-announces-new-monero-mining-antminer-x3-cryptos-devs-say-will-not-work >> [3] https://arxiv.org/pdf/1803.02887.pdf >> [4] https://www.theregister.co.uk/2018/02/27/ethical_coinhive/ >> >> >> -- >> This email account is used for list management only. >> https://strangetimes.observer/ >> >> _______________________________________________ >> arm-netbook mailing list arm-netbook at lists.phcomp.co.uk >> http://lists.phcomp.co.uk/mailman/listinfo/arm-netbook >> Send large attachments to arm-netbook at files.phcomp.co.uk > _______________________________________________ > arm-netbook mailing list arm-netbook at lists.phcomp.co.uk > http://lists.phcomp.co.uk/mailman/listinfo/arm-netbook > Send large attachments to arm-netbook at files.phcomp.co.uk From lasich at gmail.com Wed Mar 21 21:16:52 2018 From: lasich at gmail.com (Hrvoje Lasic) Date: Wed, 21 Mar 2018 22:16:52 +0100 Subject: [Arm-netbook] Urgent statement on Cryptocurrency ethics In-Reply-To: References: <87efkecw9p.fsf@hands.com> <20180321120843.tjuvulk7i35qjtyw@mail.wookware.org> <33980edc-c5c9-da86-ef74-c816f36afa25@beauxbead.com> Message-ID: On 21 March 2018 at 21:58, Sam Huntress wrote: > Excessive power consumption by bitcoin miners is an indication that either > power is undervalued, bitcoin is overvalued, or both. ask yourself simple question: If all cryptocurencies are gone tomorrow, what would you miss? From Marqueteur at FineArtMarquetry.com Wed Mar 21 21:26:29 2018 From: Marqueteur at FineArtMarquetry.com (Tor, the Marqueteur) Date: Wed, 21 Mar 2018 11:26:29 -1000 Subject: [Arm-netbook] Urgent statement on Cryptocurrency ethics In-Reply-To: References: <87efkecw9p.fsf@hands.com> <20180321120843.tjuvulk7i35qjtyw@mail.wookware.org> <33980edc-c5c9-da86-ef74-c816f36afa25@beauxbead.com> Message-ID: <596bc1df-5e67-0c17-567a-09f75ac76d37@FineArtMarquetry.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 03/21/2018 11:03 AM, zap wrote: > > I think it is safe to say that cryptocurrencies aren't very > feasible right now. > > My thoughts are: Liberapay is a good idea, > > I don't know if anything else is anywhere near as good. Liberapay is good, though I'm afraid their funding model isn't sustainable in the long term. The development costs are probably mostly upfront, but there are the ongoing server costs, and much more significantly, the costs of dealing with disputes, fraud, etc. I think the disputes should be fairly minimal given the nature of their transactions, but they are sure to come up, and it will take manpower to deal with them. In the interests of their long term sustainability, I would like to see them figure out what they need to deal with that, and charge it. Probably somewhere around 1-3% from what I gather. It would be good to let the donor choose whether to pay it on top, or give a set amount and take it out of what the recipient gets. I really like their design of figuring out their costs for getting money in/out of Liberapay, and charging that when those transactions occur, so people know where their money is going. 
Tor - -- Tor Chantara https://art.torchantara.com GPG Key: 2BE1 426E 34EA D253 D583 9DE4 B866 0375 134B 48FB *Be wary of unsigned emails* Stop spying: http://www.resetthenet.org/ -----BEGIN PGP SIGNATURE----- iF0EARECAB0WIQQr4UJuNOrSU9WDneS4ZgN1E0tI+wUCWrLN+wAKCRC4ZgN1E0tI +9yVAJ9zUeFOkXhjVvGPz43+n9PsLc3BHgCeLmMmvKvEjfIAI+lcA/Tr/9WhU+A= =aYlx -----END PGP SIGNATURE----- From lkcl at lkcl.net Thu Mar 22 04:10:18 2018 From: lkcl at lkcl.net (Luke Kenneth Casson Leighton) Date: Thu, 22 Mar 2018 04:10:18 +0000 Subject: [Arm-netbook] Urgent statement on Cryptocurrency ethics In-Reply-To: References: <87efkecw9p.fsf@hands.com> <20180321120843.tjuvulk7i35qjtyw@mail.wookware.org> <33980edc-c5c9-da86-ef74-c816f36afa25@beauxbead.com> Message-ID: On Wed, Mar 21, 2018 at 9:16 PM, Hrvoje Lasic wrote: > ask yourself simple question: If all cryptocurencies are gone tomorrow, > what would you miss? crypto-currencies are literally the first time in human history where contracts can be made and honoured between one or more parties in an atomic fashion *WITHOUT* third party intervention or arbitration. what would be missed? we would return to centralisation and abuse of power at the hands of central banks, corrupt governments, expensive and flawed judicial systems, overcharging and underpayment by unethical insurance companies - the list goes on and on. crypto-currencies are right now in their infancy. l. From lasich at gmail.com Thu Mar 22 06:40:26 2018 From: lasich at gmail.com (Hrvoje Lasic) Date: Thu, 22 Mar 2018 07:40:26 +0100 Subject: [Arm-netbook] Urgent statement on Cryptocurrency ethics In-Reply-To: References: <87efkecw9p.fsf@hands.com> <20180321120843.tjuvulk7i35qjtyw@mail.wookware.org> <33980edc-c5c9-da86-ef74-c816f36afa25@beauxbead.com> Message-ID: I completely disagree. You are mixing block chain and cryptocurrencies. Also, I disagree on second point. We still have ` power at the hands of central banks, corrupt governments, expensive and flawed judicial systems, overcharging and underpayment by unethical insurance companies` *This is my main point, there is no actual business processes implemented.* Now this is all on speculation basis because it is `great technology`. Now is more about greed then something that can be good for all of us. Not too mention very inefficient process for exchange that spend way too much energy, just the opposite what main idea was. Paying with cryptos looks expensive right now. Then fraud practices, literally taking your money etc, criminal practices etc. These are all valid problems. I think we all agree that blockchain is really good technology but it should not be about speculation. Also, I am a bit skeptical about how we are to avoid third parties completely. Wherever there are humans there could be disputes, frauds etc. Then again you need some kind of regulation. On 22 March 2018 at 05:10, Luke Kenneth Casson Leighton wrote: > On Wed, Mar 21, 2018 at 9:16 PM, Hrvoje Lasic wrote: > > > ask yourself simple question: If all cryptocurencies are gone tomorrow, > > what would you miss? > > crypto-currencies are literally the first time in human history where > contracts can be made and honoured between one or more parties in an > atomic fashion *WITHOUT* third party intervention or arbitration. > > what would be missed? we would return to centralisation and abuse of > power at the hands of central banks, corrupt governments, expensive > and flawed judicial systems, overcharging and underpayment by > unethical insurance companies - the list goes on and on. 
> > crypto-currencies are right now in their infancy. > > l. > > _______________________________________________ > arm-netbook mailing list arm-netbook at lists.phcomp.co.uk > http://lists.phcomp.co.uk/mailman/listinfo/arm-netbook > Send large attachments to arm-netbook at files.phcomp.co.uk > From lkcl at lkcl.net Thu Mar 22 06:53:42 2018 From: lkcl at lkcl.net (Luke Kenneth Casson Leighton) Date: Thu, 22 Mar 2018 06:53:42 +0000 Subject: [Arm-netbook] Urgent statement on Cryptocurrency ethics In-Reply-To: References: <87efkecw9p.fsf@hands.com> <20180321120843.tjuvulk7i35qjtyw@mail.wookware.org> <33980edc-c5c9-da86-ef74-c816f36afa25@beauxbead.com> Message-ID: On Thu, Mar 22, 2018 at 6:40 AM, Hrvoje Lasic wrote: > I completely disagree. You are mixing block chain and cryptocurrencies. not entirely > Also, I disagree on second point. We still have ` power at the hands of > central banks, corrupt governments, expensive and flawed judicial systems, > overcharging and underpayment by unethical insurance companies` *This is my > main point, there is no actual business processes implemented.* that's not entirely true... and bear in mind i did say, "it's early days". ripple implements a business process, and cryptokitties definitely implements a business process. > Now this is > all on speculation basis because it is `great technology`. Now is more > about greed then something that can be good for all of us. yes that's very true. like i said: "early days". > Not too mention very inefficient process for exchange that spend way too > much energy, just the opposite what main idea was. Paying with cryptos > looks expensive right now. paying with *bitcoin* looks expensive [but didn't only 2 years ago] > Then fraud practices, literally taking your > money etc, criminal practices etc. These are all valid problems. look at where the fraud primarily occurs: i think you'll find that there's a direct correlation between *central choke-points* and the fraud. oh. sorry, i forgot to add the other qualifier to crypto-currencies / blockchain: *individuals* have to take *direct* responsibility [where previously they could abdicate that responsibility to a third party / central authority]. if they fail to take responsibility, they get ripped off [viruses, lost wallet passwords etc.]. > I think we all agree that blockchain is really good technology but it > should not be about speculation. because the carrot dangling free money in front of people gets them interested like nothing else.... > Also, I am a bit skeptical about how we > are to avoid third parties completely. by designing algorithms that take that into account. Zero Knowledge Proofs, Pederson Committments, proper peer-to-peer distributed protocols and much more. i reiterate: it's early days yet. > Wherever there are humans there > could be disputes, frauds etc. if there is fraud and disputes, then the design of the algorithm has failed and/or the user has not taken proper responsibility. again: i reiterate, it's early days yet. > Then again you need some kind of regulation. if regulation is needed then the design of the algorithm has failed. again, i reiterate: it's early days yet. l. 
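To make the Pedersen-commitment point above concrete, a minimal toy sketch in Python. The parameters are tiny, made-up values chosen purely for illustration -- no real scheme or coin works with numbers like these, this only shows the commit/reveal mechanics:

import secrets

# Toy parameters. In a real scheme p, g and h are chosen so that nobody
# knows log_g(h); here they are small illustrative values only.
p = (1 << 127) - 1        # a Mersenne prime, fine for a toy demo
g, h = 5, 7

def commit(m, r=None):
    """Commit to integer m with blinding factor r: C = g^m * h^r mod p."""
    if r is None:
        r = secrets.randbelow(p - 1)
    return pow(g, m, p) * pow(h, r, p) % p, r

def verify(C, m, r):
    """Open the commitment by revealing m and r, and check it."""
    return C == pow(g, m, p) * pow(h, r, p) % p

C, r = commit(42)          # publish C now; keep (42, r) secret
print(verify(C, 42, r))    # True  -- the opening matches
print(verify(C, 41, r))    # False -- cannot later claim a different value

The point of the primitive: C hides the value until it is opened, and the committer cannot open it to a different value later -- the kind of guarantee that stands in for a trusted third party.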
From phil at hands.com Thu Mar 22 09:45:08 2018 From: phil at hands.com (Philip Hands) Date: Thu, 22 Mar 2018 10:45:08 +0100 Subject: [Arm-netbook] Urgent statement on Cryptocurrency ethics In-Reply-To: References: <87efkecw9p.fsf@hands.com> <20180321120843.tjuvulk7i35qjtyw@mail.wookware.org> <33980edc-c5c9-da86-ef74-c816f36afa25@beauxbead.com> Message-ID: <87y3iksau3.fsf@hands.com> Luke Kenneth Casson Leighton writes: > oh. sorry, i forgot to add the other qualifier to > crypto-currencies / blockchain: *individuals* have to take *direct* > responsibility [where previously they could abdicate that > responsibility to a third party / central authority]. if they fail to > take responsibility, they get ripped off [viruses, lost wallet > passwords etc.]. I'd like to announce a revolution in data privacy. Individuals can take control of their data, and ensure that it doesn't leak into the hands of people that they don't want to have it. Just install PGP. Thirty years later, where do the vast majority of people do their crypto? In Google's data centres. Even people that have been using crypto for 30 years almost never actually encrypt their email. I still sign most of mine, but that's really just nostalgia for the time when I still thought that we could expect everyone to end up doing that sort of thing. I'm pretty good at looking after my keys, and would not for instance need to revoke them if you stole my laptop, so am in a tiny minority. Would I be willing to make my current account balance contingent on my not screing that up? NO! and I definitely want a court to go to if my bank tells me they lost track of my money. I think you can be sure that the people will fight viciously to avoid taking any sort of responsibility. See: Facebook & Cambridge Analytica. Cheers, Phil. -- |)| Philip Hands [+44 (0)20 8530 9560] HANDS.COM Ltd. |-| http://www.hands.com/ http://ftp.uk.debian.org/ |(| Hugo-Klemm-Strasse 34, 21075 Hamburg, GERMANY From lkcl at lkcl.net Thu Mar 22 09:58:18 2018 From: lkcl at lkcl.net (Luke Kenneth Casson Leighton) Date: Thu, 22 Mar 2018 09:58:18 +0000 Subject: [Arm-netbook] Urgent statement on Cryptocurrency ethics In-Reply-To: <87y3iksau3.fsf@hands.com> References: <87efkecw9p.fsf@hands.com> <20180321120843.tjuvulk7i35qjtyw@mail.wookware.org> <33980edc-c5c9-da86-ef74-c816f36afa25@beauxbead.com> <87y3iksau3.fsf@hands.com> Message-ID: --- crowd-funded eco-conscious hardware: https://www.crowdsupply.com/eoma68 On Thu, Mar 22, 2018 at 9:45 AM, Philip Hands wrote: > I'd like to announce a revolution in data privacy. Individuals can take > control of their data, and ensure that it doesn't leak into the hands of > people that they don't want to have it. > > Just install PGP. hurrah! :) so let's think about that case for a minute... PGP/GPG were designed for encrypting / signing static data. if that was all that was involved, crypto-currencies would have been replaced by people using PGP/GPG to sign static text files containing "money" or the digital representation of the same. the difference then is that extra step - the guarantees of an "atomic transaction". so it's the combination of *both* factors: putting responsibility into individuals' hands (so that banks cannot literally empty your account if they so choose) *and* the atomic inviolate contract / transaction guarantees. > I'm pretty good at looking after my keys, and would not for instance > need to revoke them if you stole my laptop, so am in a tiny minority. 
yehyeh > Would I be willing to make my current account balance contingent on my > not screing that up? ... did you hear about the banks in ... i think it was portugal italy, when they were going bankrupt they decided blithely to just... take peoples' savings. one pensioner actually committed suicide as a result. they would almost have certainly done that with the blessing of the courts / government. so going to court would have no effect. > I think you can be sure that the people will fight viciously to avoid > taking any sort of responsibility. See: Facebook & Cambridge Analytica. i shouldn't laugh at that happening, but i can't help it. l. From phil at hands.com Thu Mar 22 10:29:21 2018 From: phil at hands.com (Philip Hands) Date: Thu, 22 Mar 2018 11:29:21 +0100 Subject: [Arm-netbook] Urgent statement on Cryptocurrency ethics In-Reply-To: References: <87efkecw9p.fsf@hands.com> <20180321120843.tjuvulk7i35qjtyw@mail.wookware.org> <33980edc-c5c9-da86-ef74-c816f36afa25@beauxbead.com> <87y3iksau3.fsf@hands.com> Message-ID: <87vados8se.fsf@hands.com> Luke Kenneth Casson Leighton writes: > --- > crowd-funded eco-conscious hardware: https://www.crowdsupply.com/eoma68 > > > On Thu, Mar 22, 2018 at 9:45 AM, Philip Hands wrote: > >> I'd like to announce a revolution in data privacy. Individuals can take >> control of their data, and ensure that it doesn't leak into the hands of >> people that they don't want to have it. >> >> Just install PGP. > > hurrah! :) > > so let's think about that case for a minute... PGP/GPG were designed > for encrypting / signing static data. if that was all that was > involved, crypto-currencies would have been replaced by people using > PGP/GPG to sign static text files containing "money" or the digital > representation of the same. You miss my point completely. What I was saying was that we have the existing experiment of a decentralised system, capable of providing a significant benefit to the public, if only they were willing to take some responsibility. We ran the experiment for decades, and the point at which it partially succeeded was when the likes of whatsap provided people with a way of avoiding responsibilty. I therefore suspect that citing benefits of crypto-currencies that will acrue, if only the public would take responsibilty for themselves, is pointless. Cheers, Phil. -- |)| Philip Hands [+44 (0)20 8530 9560] HANDS.COM Ltd. |-| http://www.hands.com/ http://ftp.uk.debian.org/ |(| Hugo-Klemm-Strasse 34, 21075 Hamburg, GERMANY From lkcl at lkcl.net Thu Mar 22 10:49:40 2018 From: lkcl at lkcl.net (Luke Kenneth Casson Leighton) Date: Thu, 22 Mar 2018 10:49:40 +0000 Subject: [Arm-netbook] Urgent statement on Cryptocurrency ethics In-Reply-To: <87vados8se.fsf@hands.com> References: <87efkecw9p.fsf@hands.com> <20180321120843.tjuvulk7i35qjtyw@mail.wookware.org> <33980edc-c5c9-da86-ef74-c816f36afa25@beauxbead.com> <87y3iksau3.fsf@hands.com> <87vados8se.fsf@hands.com> Message-ID: --- crowd-funded eco-conscious hardware: https://www.crowdsupply.com/eoma68 On Thu, Mar 22, 2018 at 10:29 AM, Philip Hands wrote: >> so let's think about that case for a minute... PGP/GPG were designed >> for encrypting / signing static data. if that was all that was >> involved, crypto-currencies would have been replaced by people using >> PGP/GPG to sign static text files containing "money" or the digital >> representation of the same. > > You miss my point completely. mmm ... 
more that i've not made mine clear (apoligies) > What I was saying was that we have the existing experiment of a > decentralised system, capable of providing a significant benefit to the > public, if only they were willing to take some responsibility. yes, i agree, and understand your point completely. however look at the steps that are required, that the users are required to take, to achieve the same result. it's too much for the average person to cope with, isn't it? 20+ years shows that it's too much... ... yet put pretty much the exact same crypto-primitives from OpenSSL into a crypto-currency wallet, and suddenly they're interested... because it's *convenient and easy*. > We ran the experiment for decades, and the point at which it partially > succeeded was when the likes of whatsap provided people with a way of > avoiding responsibilty. there's another one... what's it called... telegram? that has become extremely popular amongst libertarians because of the convenience. l. From bob at fourtheye.org Thu Mar 22 11:13:38 2018 From: bob at fourtheye.org (Robert Wilkinson) Date: Thu, 22 Mar 2018 12:13:38 +0100 Subject: [Arm-netbook] Urgent statement on Cryptocurrency ethics In-Reply-To: References: <33980edc-c5c9-da86-ef74-c816f36afa25@beauxbead.com> <87y3iksau3.fsf@hands.com> <87vados8se.fsf@hands.com> Message-ID: <20180322111338.GA9625@fourtheye.org> On Thu, Mar 22, 2018 at 10:49:40AM +0000, Luke Kenneth Casson Leighton wrote: > --- > > We ran the experiment for decades, and the point at which it partially > > succeeded was when the likes of whatsap provided people with a way of > > avoiding responsibilty. > > there's another one... what's it called... telegram? that has become > extremely popular amongst libertarians because of the convenience. > > l. I think that Telegram use suspicious crypto and I know that the Russian government has been leaning on them to hand things over. Maybe you were thinking of https://www.signal.org/ ? Bob -- BOFH excuse #363: Out of cards on drive D: From lists at sumpfralle.de Fri Mar 23 23:21:34 2018 From: lists at sumpfralle.de (Lars Kruse) Date: Sat, 24 Mar 2018 00:21:34 +0100 Subject: [Arm-netbook] Urgent statement on Cryptocurrency ethics In-Reply-To: <596bc1df-5e67-0c17-567a-09f75ac76d37@FineArtMarquetry.com> References: <87efkecw9p.fsf@hands.com> <20180321120843.tjuvulk7i35qjtyw@mail.wookware.org> <33980edc-c5c9-da86-ef74-c816f36afa25@beauxbead.com> <596bc1df-5e67-0c17-567a-09f75ac76d37@FineArtMarquetry.com> Message-ID: <20180324002134.0baa343f@erker.lan> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 Hi, Am Wed, 21 Mar 2018 11:26:29 -1000 schrieb "Tor, the Marqueteur" : > In the interests of their long term sustainability, I would like to see > them figure out what they need to deal with that, and charge it. > Probably somewhere around 1-3% from what I gather. > [..] I really appreciate their current model: they ask for a voluntary amount of long-term donation right after sign-up. The default value is zero. I was impressed by this and committed a small weekly sum. The statistics show that currently the users of liberapay honor this open approach: the group [1], the founder [2] and the organization [3] of liberapay collect a combined sum of 420 Euros per week. This is quite impressive, considering that currently approximately 2200 Euros are distributed per week [4] among all creators. This funding ratio (currently almost 20%) will surely drop with a growing number of users. In general the strategy looks very healthy and honest to me. 
Cheers, Lars [1] https://liberapay.com/Liberapay/ [2] https://liberapay.com/Changaco/ [3] https://liberapay.com/LiberapayOrg/ [4] https://liberapay.com/about/stats -----BEGIN PGP SIGNATURE----- iQIzBAEBCAAdFiEEIadnVgKe6dvS1V5eUwv15HPR1ZQFAlq1i/8ACgkQUwv15HPR 1ZRSKw//WYIrOzjpsUkXQmnKXTZ5pFFCNfcUKjQMUVINiMuSnCMk3NGYlth8406+ LJ/YDbd9uI6dDxquY7u9+dsAKx2CL/cxAID6uag89jWanfJ2fC2N5+MnL7ORmltp oKv9drn9qAO7olkwepFNCKlRHGRiNwNzc8cd46zK1mHkefRT2Ie6Jg8AuD9ecpT5 ubqjjm+wWmPKGjLzSlvgGcBOlpn/04k8aV7W4px7PqDRbkvDQvDvH6ViDo3dbiAh ACP3uz3i03s1qBXLCVp5ywrG9iuAaAnLvwowmPCp3OalTD06EplpTfQwa7HTln1k HI/QoEfWi3J2oGHjjoUbc2zEzrTCTD2fX7iAgtrCFwi9318foPXO9RDnxS8frb4n ds7HR5r2inZZBahjk0jXZBM/850DMiWPZZHoqfIy/x5hqYsq88XKYZrCDbUmU/Dy y9qOHiw52pqGnDQycJkS/b0sOCkU1VdARCcDAKnvx3e7tePXJXcTfC0vDLsHLDUJ p9Kj6IUtt41UZFvmp744uAysnxbbSLpBjoZSqZglDdIoTFkB6Ofc+WPOCaFrh9Gw tYeKiYP6F7tA3vJYDip8xZZR8ELMMV0lAXLPWtkb6FAg9iD9Mi1zQTChvCaBCnhf QDA0uxszvWTtpyGlgN1LHbySjrTHrJbX4kevtGLN/e0vvoXE3gc= =bHBc -----END PGP SIGNATURE----- From dumblob at gmail.com Wed Mar 28 21:40:21 2018 From: dumblob at gmail.com (dumblob) Date: Wed, 28 Mar 2018 22:40:21 +0200 Subject: [Arm-netbook] Turris MOX - made from modules with a fast universal connection between them (PCI-E 1x, USB 2.0, 2.5Gbit ethernet) Message-ID: Hi, yesterday I stumbled upon Turris MOX, a modular computer. It seems to be an interesting endeavor which might be easily extended with a graphic card module or any card module you could think of. The connection between modules is a standard PCIe connector (but no the PCIe pinout!) with pins of PCIe 1x, USB 2.0, and 2.5 Gbit ethernet. https://www.indiegogo.com/projects/turris-mox-modular-open-source-router-wifi/coming_soon/pies Enjoy! -- Jan P.S. Disclaimer: I once met some of the authors in person. From lkcl at lkcl.net Wed Mar 28 21:48:05 2018 From: lkcl at lkcl.net (Luke Kenneth Casson Leighton) Date: Wed, 28 Mar 2018 21:48:05 +0100 Subject: [Arm-netbook] Turris MOX - made from modules with a fast universal connection between them (PCI-E 1x, USB 2.0, 2.5Gbit ethernet) In-Reply-To: References: Message-ID: On Wed, Mar 28, 2018 at 9:40 PM, dumblob wrote: > Hi, > > yesterday I stumbled upon Turris MOX, a modular computer. neat! From calmstorm at posteo.de Wed Mar 28 21:58:05 2018 From: calmstorm at posteo.de (zap) Date: Wed, 28 Mar 2018 16:58:05 -0400 Subject: [Arm-netbook] Turris MOX - made from modules with a fast universal connection between them (PCI-E 1x, USB 2.0, 2.5Gbit ethernet) In-Reply-To: References: Message-ID: On 03/28/18 16:48, Luke Kenneth Casson Leighton wrote: > On Wed, Mar 28, 2018 at 9:40 PM, dumblob wrote: >> Hi, >> >> yesterday I stumbled upon Turris MOX, a modular computer. > neat! > > _______________________________________________ Can someone tell me if this router is free software compatible or not? > arm-netbook mailing list arm-netbook at lists.phcomp.co.uk > http://lists.phcomp.co.uk/mailman/listinfo/arm-netbook > Send large attachments to arm-netbook at files.phcomp.co.uk From calmstorm at posteo.de Sat Mar 31 03:32:40 2018 From: calmstorm at posteo.de (zap) Date: Fri, 30 Mar 2018 22:32:40 -0400 Subject: [Arm-netbook] Turris MOX - made from modules with a fast universal connection between them (PCI-E 1x, USB 2.0, 2.5Gbit ethernet) In-Reply-To: References: Message-ID: <142604bc-e003-fe48-3123-7634fb1a4d1c@posteo.de> Bad news, the wifi module needs a blob for turris MOX. So yeah, Luke... I think you should make a free software router/usb wifi adapter. 
;) And anyone looking at Turris MOX should know that it is not, as of now, free software. On 03/28/18 16:58, zap wrote: > > On 03/28/18 16:48, Luke Kenneth Casson Leighton wrote: >> On Wed, Mar 28, 2018 at 9:40 PM, dumblob wrote: >>> Hi, >>> >>> yesterday I stumbled upon Turris MOX, a modular computer. >> neat! >> >> _______________________________________________ > Can someone tell me if this router is free software compatible or not? >> arm-netbook mailing list arm-netbook at lists.phcomp.co.uk >> http://lists.phcomp.co.uk/mailman/listinfo/arm-netbook >> Send large attachments to arm-netbook at files.phcomp.co.uk
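For anyone who wants to check that sort of claim themselves, one rough way to spot blob requirements is to scan the relevant kernel driver sources for firmware-loading calls. A minimal sketch in Python -- the directory path is only an assumption, point it at whichever driver subtree you suspect:

import os
import re

# Scan a kernel source subtree for references to firmware loading.
# MODULE_FIRMWARE() and request_firmware() are the usual giveaways that a
# driver pulls in a binary blob at runtime.
DRIVER_DIR = "linux/drivers/net/wireless"   # hypothetical path to a checkout

pattern = re.compile(r"MODULE_FIRMWARE|request_firmware")

for root, _dirs, files in os.walk(DRIVER_DIR):
    for name in files:
        if not name.endswith((".c", ".h")):
            continue
        path = os.path.join(root, name)
        with open(path, errors="ignore") as f:
            hits = [n for n, line in enumerate(f, 1) if pattern.search(line)]
        if hits:
            print(path, "firmware references on lines", hits)

This only shows that a driver asks for a blob; whether a free replacement for that blob exists is a separate question.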