On 06/23/2017 11:29 AM, Bill Kontos wrote:
> On Fri, Jun 23, 2017 at 4:27 PM, David Niklas <doark@mail.com> wrote:
> > Now can we get back to how to do the reverse engineering itself?
> Someone should suggest that Luc open a monthly Patreon-style donation page for working on lima.
If ARM is still as openly hostile as they appear to be towards reverse engineering of their GPU designs, I don't think that pursuing lima development is a very good option. I suspect that if development of the driver resumes, they will probably continue their horrible crusade against Luc and I don't want to see that happen to anybody.
I think that if we are really trying to play the long game here, then that money would be better spent working towards the development of an open GPU design that we can use in future RISC-V SoCs. Perhaps another option that might be more amenable and rewarding for Luc would be to open a monthly Patreon-style donation page not to develop lima drivers, but instead to work together with people like Jeff Bush (the main guy developing the Nyuzi GPGPU: https://github.com/jbush001/NyuziProcessor ). To me (and I haven't read into it much) it seems like the focus thus far for the Nyuzi project is only on developing the GPGPU, and not so much looking at it from the point of view of how we would write a Linux driver for this device. I think if we focused some money on Luc (driver developer) and the Nyuzi developers (hardware developers), then they might be more inspired to focus their efforts on producing a quality GPGPU and supporting drivers.
Honestly, I think that working together with people who are openly developing a GPU design, and who would probably be quite welcoming of assistance from Luc, would be a much better situation for him than if he went back to trying to work on the lima driver. Perhaps a company like SiFive ( https://www.sifive.com/ ) or the RISC-V Foundation might even consider sponsoring a collaboration like this, as having a good open GPU implementation would certainly be a boon to them in the future.
Anyhow, that is my two cents.
-Mike
On Fri, Jun 23, 2017 at 10:48 PM, Mike Leimon <leimon@gmail.com> wrote:
> Honestly, I think that working together with people who are openly developing a GPU design, and who would probably be quite welcoming of assistance from Luc, would be a much better situation for him than if he went back to trying to work on the lima driver. Perhaps a company like SiFive ( https://www.sifive.com/ ) or the RISC-V Foundation might even consider sponsoring a collaboration like this, as having a good open GPU implementation would certainly be a boon to them in the future.
> Anyhow, that is my two cents.
That is a really good idea, and Luke has mentioned it multiple times in the past, but keep in mind that GPUs are an extreme patent minefield these days. Heck, Intel can't solve some problems like memory compression on their GPUs. But maybe someone should propose this to SiFive or the RISC-V Foundation. Something like a Berkeley-sponsored program would be really interesting.
On 24 Jun 2017, at 6:03 AM, Bill Kontos <vkontogpls@gmail.com> wrote:
> On Fri, Jun 23, 2017 at 10:48 PM, Mike Leimon <leimon@gmail.com> wrote:
> > Honestly, I think that working together with people who are openly developing a GPU design, and who would probably be quite welcoming of assistance from Luc, would be a much better situation for him than if he went back to trying to work on the lima driver. Perhaps a company like SiFive ( https://www.sifive.com/ ) or the RISC-V Foundation might even consider sponsoring a collaboration like this, as having a good open GPU implementation would certainly be a boon to them in the future.
> > Anyhow, that is my two cents.
> That is a really good idea, and Luke has mentioned it multiple times in the past, but keep in mind that GPUs are an extreme patent minefield these days. Heck, Intel can't solve some problems like memory compression on their GPUs. But maybe someone should propose this to SiFive or the RISC-V Foundation. Something like a Berkeley-sponsored program would be really interesting.
Please excuse my ignorance in such matters, but would it be possible to use a RISC-V or FPGA chip as an interim eGPU until such time as a more specialised chip can be developed and released?
I appreciate that RISC-V/FPGA chips are not likely to be well-suited to the task of GPU processing but perhaps they would be better than no GPU at all.
Once a specialised libre GPU has been developed, the RISC-V / FPGA chips could be repurposed as CPUs for other projects/computers/laptops/etc., thus ensuring that they don't go to waste.
- Bluey
On Sat, Jun 24, 2017 at 9:55 AM, Bluey <bluey@smallfootprint.info> wrote:
> Please excuse my ignorance in such matters, but would it be possible to use a RISC-V or FPGA chip as an interim eGPU until such time as a more specialised chip can be developed and released?
on its own (as-is), no. however with certain very very specific and in some cases specialised SIMD instructions a reasonable approximation can be had. these operations are:
* SIMD "and" for a bit-wise zero check (as large as possible)
* inverse-squared function (for 1/x^2) - a very common operation in 3D
* SIMD 12-14 bit accurate divide operation.
this latter turns out to be "good enough" for the majority of 3D operations: on screens which only *have* 1920 pixels (11 bits being sufficient), division accuracy beyond 12-14 bits is completely and utterly redundant... *under certain circumstances*.
the point being that a divide operation which only requires 12-14 bits of accuracy may complete in half the time, thus dramatically saving on CPU cycles.
> I appreciate that RISC-V/FPGA chips are not likely to be well-suited to the task of GPU processing but perhaps they would be better than no GPU at all.
not "and be power-efficient at the same time"
> Once a specialised libre GPU has been developed, the RISC-V / FPGA chips could be repurposed as CPUs for other projects/computers/laptops/etc., thus ensuring that they don't go to waste.
this was the reasoning behind ICubeCorp's "UPU" - Unified Processing Unit - which unfortunately they kept proprietary. i tried to help them to understand the need to release the full boot initialisation source code and to comply with the GPL but they did not follow up.
l.
On Fri, Jun 23, 2017 at 8:48 PM, Mike Leimon <leimon@gmail.com> wrote:
> If ARM is still as openly hostile as they appear to be towards reverse engineering of their GPU designs, I don't think that pursuing lima development is a very good option. I suspect that if development of the driver resumes, they will probably continue their horrible crusade against Luc and I don't want to see that happen to anybody.
well, forewarned is forearmed. and it's quite easy, when reverse-engineering, to find security flaws. for every unethical action that ARM takes it would be really easy to release another zero-day exploit with full source code and a CVE report.
pretty soon they'd get the message.
> I think that if we are really trying to play the long game here, then that money would be better spent working towards the development of an open GPU design that we can use in future RISC-V SoCs.
exactly my point.
l.
arm-netbook@lists.phcomp.co.uk