Antti wrote:
> without determination it all works brillantly without fuzz in winXP ;)

Sure, and soldiers desert from wars and go over to the enemy
side all the time. But they'll have to live with themselves
for the rest of their lives...:^).

-Dave

--
David Ashley                http://www.xdr.com/dash
Embedded linux, device drivers, system architecture
Article: 107226

I've already hooked all the tcp/ip API in CableServer, and dump all the
recv/send buffers to the screen, so now if I run CableServer in debug
mode I can see the communication for all the messages between Impact
and CableServer. The protocol seems very simple; especially in debug
mode I can see what the message is and what the parameters are.
Unfortunately this weekend I'm going for a short vacation :) But I'll
jump on this when I get back.

Zoltan

PS: Here are a couple of dumps for sampling:

Main -> GET INFO pName=LPT, pNumber=0, speed=200000, cName=Parallel III.
send    01
send    03 00 00 00
send    4C 50 54
send    00 00 00 00
send    40 0D 03 00
send    0C 00 00 00
send    50 61 72 61 6C 6C 65 6C 20 49 49 49
send    00 00 00 00
send    01
WSARecv 21
WSARecv 80 00 00 00 00 00
WSARecv 00
WSARecv 00 00 00 00 00
WSARecv 00

BScn -> WRITE mode=128, writeLength=0, readLength=0.
send    01
WSARecv 26
WSARecv 0C 00 00 00 05 00 00 00

BScn -> PULSE PIN 12 count=5.
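Decoding that first dump, the framing appears to be little-endian:
03 00 00 00 followed by "LPT" reads as a length-prefixed string,
40 0D 03 00 is 200000 (the speed), and 0C 00 00 00 plus "Parallel III"
is the cable name. Below is a speculative C sketch of an encoder for
that guess; the field meanings and the leading/trailing 01 bytes are
assumptions taken from this single trace, not a documented protocol.

    #include <stdint.h>
    #include <string.h>

    /* Speculative encoder for the GET INFO request, based only on the
       dump above: 32-bit little-endian integers, strings prefixed with
       a 32-bit length.  The 01 bytes look like message delimiters or
       opcodes -- pure guesswork at this point. */
    static uint8_t *put_u32(uint8_t *p, uint32_t v)
    {
        p[0] = v; p[1] = v >> 8; p[2] = v >> 16; p[3] = v >> 24;
        return p + 4;
    }

    static uint8_t *put_str(uint8_t *p, const char *s)
    {
        size_t n = strlen(s);
        p = put_u32(p, (uint32_t)n);
        memcpy(p, s, n);
        return p + n;
    }

    size_t build_get_info(uint8_t *buf, const char *port, uint32_t pnum,
                          uint32_t speed, const char *cable)
    {
        uint8_t *p = buf;
        *p++ = 0x01;              /* observed leading byte        */
        p = put_str(p, port);     /* 03 00 00 00 "LPT"            */
        p = put_u32(p, pnum);     /* 00 00 00 00                  */
        p = put_u32(p, speed);    /* 40 0D 03 00 == 200000        */
        p = put_str(p, cable);    /* 0C 00 00 00 "Parallel III"   */
        p = put_u32(p, 0);        /* trailing zero field          */
        *p++ = 0x01;              /* observed trailing byte       */
        return (size_t)(p - buf);
    }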
Article: 107228

BTW I'm a hardcore Windows guy so this soldiers/enemy example wasn't
funny at all :))

David Ashley wrote:
> Antti wrote:
> > without determination it all works brillantly without fuzz in winXP ;)
>
> Sure, and soldiers desert from wars and go over to the enemy
> side all the time. But they'll have to live with themselves
> for the rest of their lives...:^).
>
> -Dave
>
> --
> David Ashley                http://www.xdr.com/dash
> Embedded linux, device drivers, system architecture
Article: 107229

Antti wrote:
> PPC caches __can__ be initialized
> 1) from BIT file using the USR_ACCESS (and bridge IP to JTAG master)
> 2) from ACE using PPC ICE registers directly
>
> but sure, the documentation and tools todo this
> are not the very best == means possible months
> of time wasted to get it all working properly.
>
> but it is possible. I looked at the USR_ACCESS
> to JTAG gateway ip core, and I also know enough
> about the undocumented PPC ICE registers that
> I am confident its all doable.

Undocumented registers; I sure wish that I don't have to dig that deep ;)

The trick that is used with the UC2 + SystemACE is the loading of the
cache through JTAG commands. I'm not sure of the implementation
details, though, as the UC2 comes in "prepackaged" ngc files. XAPP575
is the app note dealing with the UC2. It all seems real nice, but it
doesn't work for me...

Thanks Antti.

Patrick
Article: 107230

David Ashley wrote:
> Totally_lost is fpga_toys? I just saw recent posts by both.
> Do people use sock puppets in this newsgroup?

Yes, and yes (if you call handles sock puppets). You will find that I
used Totally_Lost as my handle on sf.net to create the FpgaC project,
and used that same handle here to announce the start of the FpgaC
project last October. I have used it on eBay since March 1998, and in
other forums, BBSes, and lists since the late 1970's.

Fpga_Toys is relatively new, from last spring.

There are other posters that post from multiple email addresses, often
work/home, some with different usernames/handles at the different
sites. Not all expose their full legal name for both.
Article: 107231

Antti wrote:
> Suzie schrieb:
> > I'm developing on an ML403 evaluation board with a Virtex-4 device.
> > I'm calling Xilinx's Level 0 I2C driver routines (XIic_Send, _Recv)
> > from a PPC405 program running under the QNX OS. I'm connecting to an
> > external I2C device, a temp sensor/ADC, via the J3 header on the ML403.
> >
> > When scoping the I2C SDA and SCL lines, I often notice a missing bit
> > within the 8-bit address word. Obviously, when this happens, the
> > addressed device does not ACK the transfer.
> >
> > I believe that my physical I2C connection is correct because I can
> > successfully and consistently use the GPIO-I2C bit-banging approach (as
> > implemented in Xilinx's iic_eeprom test program) to communicate with my
> > external device.
> >
> > I'm not sure how my operating environment or the driver could cause
> > this problem. The address is supplied by a single byte-write to the
> > OPB_IIC core's Tx FIFO register; that seems atomic to me. My gut
> > feeling is that there is a problem with the core.
> >
> > Anyone seen this problem, or know what I might be doing wrong?
>
> no, but I see another problem with the OPB_IIC core.
> no matter how I set the clock scaler, etc. the OPB_IIC core just
> gives me a 650KHz clock out on the SDA line.
>
> I have managed, I think, to get the OPB_IIC working once
> a long time ago, but bitbang is way EASIER and always works.
>
> Antti

Are you saying that the output frequency of the SCL line is 650KHz?
Mine is set for the default value of 100KHz, and the measured frequency
is slightly lower at approximately 90KHz.

I'm using the v1.01c and v1.01d cores; what are you using?
Article: 107232

Patrick,

> I still wish that I could use the UC2, as I don't really want to deal
> with the OPB/PLB buses right now. Creating a full-blown PowerPC design
> in EDK doesn't seem like an easy task.

It is actually very easy to create a basic design using the wizard. It
is certainly much easier than to deal with any non-standard
configuration such as UC2.

> I also would like to keep my BRAMs free, as I'm doing large FFTs which
> make heavy use of BRAMs.

I can't argue with this, but if your code fits into 8K, your BRAM usage
won't be huge...

> Another point in favor of the UC2 is that I have a really simple
> SystemC model of the UC2 right now. I can do relatively fast
> co-simulations of my design. I heard that using the Swift models to
> simulate the PowerPC is really slow...

Being able to run full simulation is cool, but do you really need to
simulate it at all? After all nobody runs Pentium Verilog simulation to
debug a Windows application...

Finally, you can always go back to UC2 when you know more...

/Mikhail
Article: 107233

On a sunny day (25 Aug 2006 19:15:47 +0200) it happened David Ashley
<dash@nowhere.net.dont.email.me> wrote in <44ef3043$1_2@x-privat.org>:
....
> So the whole thesis is this:
> 1) With silicon speed we're running up against limits of how fast
> we can operate cpu's at.
> 2) Special purpose cpus like DSP's can achieve huge performance
> improvements, but getting them require tradeoffs a compiler can't
> make -- they require human decision making processes.
> 3) With reconfigurable computing (FPGA's) in theory you can get
> massive performance improvements by just implementing the
> desired algorithm into the fabric of an FPGA.
>
> So what has to happen is to make it painless for engineers to
> be able to use FPGAs as a resource, on the fly, within applications.
> If that means using high level 'c' like languages and smart
> compilers, and if it's in theory possible, then go for it.

It is in my view a bit more complicated. As you mentioned, we are
heading for multi-core stuff. But just that, *exactly that*, is
something where you need to work at the top level. A recent example is
the Cell processor. I have studied Cell now a bit in depth, and there
is Linux development stuff for it (Sony PS3 is coming). Yet to get the
most out of it, you have to write your applications so you assign
specific tasks to each 'help processor'. Do it in C or asm, but you
still need to know the hardware structure, where the bottlenecks are,
bandwidth, etc...

You mentioned crypto; well, I did the crypto thing for brute force with
FPGA, and in the end the gain may not be so high unless you use really
big chips and unroll all loops (DES for example). When you unroll all
loops the Verilog becomes so simple you will _want_ to write in
Verilog, not in 'C'!! This is because these algos actually come from a
hardware philosophy and can be made easily with gates... Not
'sequential C code like' at all. Well, that is my experience with my
cracker...

So all that makes me say: OK if you want sequential code and run Linux
on a FPGA, but like you mention, it is perhaps better to have the
plug-in FPGA board for the high speed stuff, with a lower bandwidth
connection to a normal processor that runs sequential code and is
perhaps programmed in C or a higher level language.

> Right now
> using FPGAs for stuff is very pain*ful* compared to doing the
> same thing in software.

No, it is not ( ;-) ), use the right tool for the right thing :-), FPGA
is fun; state machines, if you must use them, are not so bad either.

> That's got to change.

[But] You have got to learn. Maybe you are right, maybe not, but to fly
a jet requires training... Speaking some other languages than your own
requires learning. Maybe one day we will have this implant so we can
all do this... speak 25 languages, fly a 747... As you pointed out, in
case you really know the details and start optimising, as in that DSP
case, it is fun :-) But for some it is a mystery, too difficult. But
_you_, by effort, determine what group you are in.
Article: 107234

Suzie schrieb:
> Antti wrote:
> > Suzie schrieb:
> > > [... problem description snipped ...]
> > > Anyone seen this problem, or know what I might be doing wrong?
> >
> > no, but I see another problem with the OPB_IIC core.
> > no matter how I set the clock scaler, etc. the OPB_IIC core just
> > gives me a 650KHz clock out on the SDA line.
> >
> > I have managed, I think, to get the OPB_IIC working once
> > a long time ago, but bitbang is way EASIER and always works.
> >
> > Antti
>
> Are you saying that the output frequency of the SCL line is 650KHz?
> Mine is set for the default value of 100KHz, and the measured frequency
> is slightly lower at approximately 90KHz.
>
> I'm using the v1.01c and v1.01d cores; what are you using?

sorry, no. I tried the OPB_IIC and for some reason the core was totally
crazy: there was a continuous free-running clock on the SDA line all
the time. something was badly wrong. nothing to do with your problem.
keep looking!

--
Antti
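Since bit-banging keeps coming up in this thread as the fallback that
always works, here is a minimal C sketch of the write path. The
register addresses, the GPIO_TRI polarity (1 = tristated/input, as on
the Xilinx OPB GPIO core), the assumption that reading the data
register returns the pin state for tristated bits, and the crude
software delay are all placeholders to adapt to a given system;
start/stop condition generation is omitted.

    #include <stdint.h>

    /* Hypothetical base address and bit masks -- in a real EDK design
       these come from the generated xparameters.h for the GPIO core. */
    #define GPIO_DATA (*(volatile uint32_t *)0x40000000)
    #define GPIO_TRI  (*(volatile uint32_t *)0x40000004)
    #define SDA       0x1
    #define SCL       0x2

    static void wait(void) { volatile int i; for (i = 0; i < 100; i++) ; }

    /* Open-drain emulation: drive low by enabling the output with
       data=0, release high by tristating so the pull-up raises the line. */
    static void set_line(uint32_t mask, int hi)
    {
        if (hi) {
            GPIO_TRI |= mask;          /* tristate -> pulled high */
        } else {
            GPIO_DATA &= ~mask;
            GPIO_TRI  &= ~mask;        /* drive 0 */
        }
        wait();
    }

    /* Clock out one byte MSB first; returns nonzero if the slave ACKed. */
    static int i2c_write_byte(uint8_t b)
    {
        int i, ack;

        for (i = 7; i >= 0; i--) {
            set_line(SDA, (b >> i) & 1);
            set_line(SCL, 1);
            set_line(SCL, 0);
        }
        set_line(SDA, 1);              /* release SDA for the ACK slot */
        set_line(SCL, 1);
        ack = !(GPIO_DATA & SDA);      /* slave pulls SDA low to ACK */
        set_line(SCL, 0);
        return ack;
    }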
Article: 107235

Thank you everyone for your quick and informative responses!

In the event of a large number of bit errors, is it possible for a
comma character to be incorrectly introduced? What would the effect be?
Will the CRC fail and put the state machine into an error state, or
will data continue to transmit, unaware?

Ed McGettigan wrote:
> vt2001cpe wrote:
> > Anyone have experience with directly driving a cable with RocketIO? I
> > am interested in any information/experiences/advice regarding linking
> > two FPGAs via RocketIO over a cable. I have seen some signal
> > characterization information for high-speed links over copper, but
> > usually less than 800Mhz. I believe my implementation would use less
> > than 1 meter, but would like to know it works at 3, 5, 10...meters.
> > Ideally I would like to run the link at 10gbits, but 6gbits could
> > work. How feasible is this, or is it back to the drawing board?
>
> Running RocketIO over a cable isn't a problem, but of course it depends
> on the cable characteristics, the data rate and acceptable losses. We
> have a number of reports on tests that were done with Virtex-II Pro
> RocketIO links:
>
> Infiniband - http://direct.xilinx.com/bvdocs/reports/ug043.pdf
> SATA - http://direct.xilinx.com/bvdocs/reports/rpt005.pdf
> CAT5/5e/6 - http://direct.xilinx.com/bvdocs/reports/rpt004.pdf
>
> Out of all of these the Infiniband style cables are the best and come in
> a variety of lengths and number of channels. I don't have any experience
> with PCI Express cables, but they should be just as good since they
> use the same internal cable technology as the Infiniband cables.
>
> http://www.molex.com/cgi-bin/bv/molex/jsp/family/intro.jsp?superFamOID=-16583&pageTitle=Introduction&channel=Products&familyOID=-16641&chanName=family&frellink=Introduction
>
> Ed McGettigan
> --
> Xilinx Inc.
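For what it's worth, the 8b/10b comma that aligners hunt for is just a
seven-bit pattern (0011111 or 1100000, the leading bits of K28.5), so a
sufficiently corrupted stream can certainly contain a false comma;
whether the link then realigns depends on whether comma alignment is
left enabled after startup. A toy C scanner below illustrates how such
comma-shaped windows can be counted in an arbitrary bit stream; the
pattern values and transmission-order bit convention are my assumptions
from the 8b/10b coding itself, not from Xilinx documentation.

    /* Singular comma patterns: the first seven bits of K28.5 and its
       complement, in assumed transmission order. */
    static const int comma_a[7] = { 0, 0, 1, 1, 1, 1, 1 };
    static const int comma_b[7] = { 1, 1, 0, 0, 0, 0, 0 };

    static int match(const int *bits, const int *pat)
    {
        int i;
        for (i = 0; i < 7; i++)
            if (bits[i] != pat[i]) return 0;
        return 1;
    }

    /* Count comma-shaped windows in a serial bit stream; every hit is
       a point where an aligner hunting for commas could (re)align. */
    int count_false_commas(const int *stream, int nbits)
    {
        int i, hits = 0;
        for (i = 0; i + 7 <= nbits; i++)
            if (match(stream + i, comma_a) || match(stream + i, comma_b))
                hits++;
        return hits;
    }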
Article: 107236

MM wrote:
> It is actually very easy to create a basic design using the wizard. It
> is certainly much easier than to deal with any non-standard
> configuration such as UC2.

It seemed easier to use the UC2 at the time (which is one of the
reasons the UC2 was created in the first place, I think), but now I
would probably agree with you that the standard flow is easier.

> I can't argue with this, but if your code fits into 8K, your BRAM usage
> won't be huge...

Agreed, maybe 8-10 BRAMs I guess.

> Being able to run full simulation is cool, but do you really need to
> simulate it at all? After all nobody runs Pentium Verilog simulation to
> debug a Windows application...

Well, most of my design is hardware. The UC2 is only kind of a big
state machine. The simulation is more meant to validate the hardware
design. I agree that there is usually no need to simulate the hardware
to debug the software. Without the UC2 simulation, though, it would be
harder for me to debug the hardware design. I guess that I could build
a testbench, generating Wishbone bus transactions for each module. But
it's just easier to use SystemC to link everything together.

Thanks again Mikhail.

Patrick
Article: 107237

Jan Panteltje wrote:
> There are many many cases where ASM on a micro controller is to be preferred.
> Not only for code-size, but also for speed, and _because of_ simplicity.
>
> For example PIC asm (Microchip) is so simple, and universal, and
> there is so much library stuff available, that it is, at least
> for me the _only_ choice for simple embedded projects.
> No way 'C'.
> Yes hardware engineers add sometimes great functionality with a few lines of
> ASM or maybe Verilog, cool!

Hi Jan,

.... one size certainly doesn't fit all, especially where the tools are
poor or lacking. Having written asm on a grand scale since the mid
1960's on over 30 architectures, I'm certainly not asm-averse when
needed. I have however strongly advocated C since first writing device
drivers for UNIX machines starting in 1975. Learning to write low level
"symbolic asm" in a C compiler is a needed skill for anyone doing
machine level coding optimization. Debugging compilers on new
architectures is also painful, and requires solid skills in machine
languages. As does cross architecture operating systems porting, as
bringing up both new hardware and new software tools (asm, linker,
compilers, libraries) is more an exercise in debugging at the machine
level than it is high level design. I made my living doing
hardware/software at that level for over 30 years, and still do today.

On the other hand, I've watched the asm bigots fall one by one over the
last 30 years, as entire industries switched from asm coding as the de
facto standard to HLL's. All of your arguments are the same ones they
made. In the end, the HLL tools mature, reach nearly hand coding
levels, and a slow migration occurs from 100% asm, to 20% asm after the
first C recode, to 10%, to 5%, to under a small fraction of 1%.

There are minimum machine thresholds where HLL's are viable, and that
isn't an embedded micro with 64 bytes of r/w registers/memory and a few
Kbytes/bits of program rom. On the other hand, the price difference
between a highly capable micro with resources to easily support 99% HLL
coding and a bare bones chip is getting pretty small, and frequently,
when you factor development costs and life cycle maintenance costs into
the cost of the product, the difference is actually negative ... IE the
tiny chip is MUCH MUCH more expensive in both real costs and time to
market costs as compared to a large micro using HLL development tools,
well trained HLL development engineers, and clean, readable,
maintainable, well structured C/HLL (such as Forth).

I helped Ron Cain with UNIX resources and design ideas back in the late
1970's to do "Small C" for the micros of that day ... not pretty asm,
but functional. I've also worked with tiny embedded micros running
Forth that knocked off applications in weeks that would have taken
years to debug in asm, if you could get it to fit.

I've also done hardware design for more than 30 years, including more
than my share of PLD work using early PAL's and PALASM level tools.
There is a good reason VHDL/Verilog exist ... and as many good reasons
they should not be the mainstay development tools, at the current low
level they provide, in two decades from now ... especially for FPGA's.
It's with more than a weekend warrior's experience that I see C on
FPGAs making a huge difference for both hardware design and
reconfigurable computing.

> I guess it is a matter of learning, I started programming micros with
> switches and 0010 1000 etc, watch the clock cycles.. you know.
> Teaches you not to make any mistakes, as re-programming an EPROM took
> 15 to 20 minutes erase time first...
> Being _on_ the hardware (registers) omits the question of 'what did that
> compiler do', in many cases gives you more flexibility.
> C already puts some barrier, special versions of C for each micro
> support special functions in these micros....

I started programming on 4K 1401 card machines, with at most 5-10 test
runs a week. Desk checking was everything ... and having a full work
day between runs pretty much ensured it. Program complexity is however
far past what even a good coder can get running in one or two test
cycles these days. So ... been there, done that, and it sucks rocks. As
does large scale bit level design today.

> In spite of what I just wrote .. anyways why wants everybody all of the
> sudden Linux in FPGA? So they can then write in C?
> Have it slower then in a cheap mobo ?

Dunno ... I didn't advocate that, especially not for LUT level FPGA
work, as there are not enough LUT's on the planet to implement linux as
a netlist.

> Ok, OTOH I appreciate the efforts for a higher level programming, as long
> as the one who uses it also knows the lower level.
> That sort of defuses your argument that 'engineers need less training' or
> something like that, the thing will have to interface to the outside world
> too, C or not.

First, I don't exactly make that argument, but I do make the argument
that fewer engineers need that level of training ... something that
greatly separates the newbies from the experienced. Organizationally
that makes a huge difference in costs, time to market, maintainability,
and survivability in a global market. It also means that you don't let
experienced designers create job security by coding at a level only
they can maintain, except where critically necessary and there exists
other staff, available in house or as consultants, to carry the torch
long term.
Article: 107238

Jan Panteltje wrote:
> You mentioned crypto; well, I did the crypto thing for brute force with
> FPGA, and in the end the gain may not be so high unless you use really
> big chips and unroll all loops (DES for example).
> When you unroll all loops the Verilog becomes so simple you will _want_
> to write in Verilog, not in 'C'!!

Hardly ... been there, done that, and know better ... having written
FpgaC versions of both the RC5 and AES engines using loop unrolled
pipelined code. Both projects brought out problems where FpgaC didn't
implement things right, or did a poor job ... both of which provided
the insights about what needs to be added to FpgaC or fixed in Beta-3
and Beta-4 this fall. It's also why I used this thread to ask for
specific examples of things that are difficult to express in C ... to
continue that process.

> This is because these algos actually come from a hardware philosophy
> and can be made easily with gates... Not 'sequential C code like' at all.

BEEP ... wrong. The algos are frequently math based, first implemented
in C, and later ported to hardware.

> Well, that is my experience with my cracker...

I've done my share of cracking since the early 1970's ... including a
major academically sponsored two years of machine level disassembly of
an XDS940 derivative operating system and significant identification of
both implementation and architectural design flaws that were
exploitable for kernel level access. Ditto with reverse engineering of
a number of other projects, and an equal interest in crypto cracking at
a number of levels.

> So all that makes me say: OK if you want sequential code and run Linux
> on a FPGA, but like you mention, it is perhaps better to have the
> plug-in FPGA board for the high speed stuff, with a lower bandwidth
> connection to a normal processor that runs sequential code and is
> perhaps programmed in C or a higher level language.

I think that running linux as netlists is crazy ... building FPGA
computers with embedded hard/soft cores that run linux is, well,
natural progression.
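To make the loop-unrolling point concrete, here is a minimal C sketch
of an RC5-style encrypt with the round loop fully unrolled, the way a
C-to-gates flow wants to see it. The names (rc5_encrypt, ROUND) and the
assumption that the key schedule S[] is precomputed elsewhere are mine
for illustration; this is not actual FpgaC project code.

    #include <stdint.h>

    static uint32_t S[26];          /* 2*rounds + 2 key schedule words,
                                       assumed filled in elsewhere */

    static uint32_t rotl(uint32_t x, uint32_t y)
    {
        y &= 31;                    /* RC5 uses the low 5 bits only */
        return y ? (x << y) | (x >> (32 - y)) : x;
    }

    #define ROUND(i)                            \
        A = rotl(A ^ B, B) + S[2 * (i)];        \
        B = rotl(B ^ A, A) + S[2 * (i) + 1];

    void rc5_encrypt(const uint32_t pt[2], uint32_t ct[2])
    {
        uint32_t A = pt[0] + S[0];
        uint32_t B = pt[1] + S[1];

        /* Fully unrolled: in hardware each ROUND becomes its own
           pipeline stage instead of one datapath reused 12 times. */
        ROUND(1)  ROUND(2)  ROUND(3)  ROUND(4)
        ROUND(5)  ROUND(6)  ROUND(7)  ROUND(8)
        ROUND(9)  ROUND(10) ROUND(11) ROUND(12)

        ct[0] = A;
        ct[1] = B;
    }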
Article: 107239

I am looking for a way to read/write to a SATA drive from an FPGA. I've
looked around. Nothing seems to fit the bill. Any ideas worth
considering?

Thanks,

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Martin

To send private email:
email = x@y.com
where:
x = "martineu"
y = "pacbell.net"
Article: 107240

edaudio2000@yahoo.co.uk wrote:
> Is there a difference between Quartus "Web Edition" and "Web Edition
> Full"?

The big difference is file size.

> Does anybody know what may be going on?

For some reason the large NIOS version downloads without restriction
while the smaller plain version now requires registration on the
website before download. Don't know why.

--
Mike Treseler
Article: 107241

Martin E. schrieb:
> I am looking for a way to read/write to a SATA drive from an FPGA.
> I've looked around. Nothing seems to fit the bill. Any ideas worth
> considering?
>
> Thanks,
>
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> Martin

Hi Martin,

good question :)

The ML300 and the Digilent XUP V2Pro both have SATA connectors on them
but cannot actually be used for SATA because of compliance issues (OOB
signaling and CDR lock range, mainly). AFAIK those issues are no longer
present with V4FX, which should be fully SATA compliant without
external workarounds. So you can just get the ML410 and start working
:) sure, you would still need an IP core from some vendor though.

Antti
Article: 107242

Ray Andraka wrote:
> real world high density high clock rate
> designs tend to have average toggle rates of 20% or less. Bit serial
> designs have toggle rates that are a bit higher, but still usually well
> under 50%. I don't see dissipations more than about 20-25W, which can
> be handled with proper thermal design on any of the large FPGAs. In
> most cases, I'd say the average dissipation I've been seeing on large
> aggressive designs (2V6000, V4SX55, 2P70) is between 10 and 13 Watts.

As a reference point, bit toggle rates of high density bit serial LUT
SRL designs are SIGNIFICANTLY higher, and depending on the statistical
distribution of 1's and 0's in the data they easily hover around 75% or
better. Doing designs which ignore the data specific toggle rates, by
simply assuming toggle rates based on traditional parallel designs, is
folly at best.

The RC5 design was interleaved and pipelined with NO parallel
operations. The nature of the RC5 data stream after the first stage is
highly random, IE 50% of the bits change state on each clock, 25% of
the bits change state at half the clock rate, and 12.5% of the bits
change state at 1/4 the clock rate, ... for an average of near 75%,
with short term variances that are frequently much higher than that.
The design was interleaved word wise, because the barrel shifters, when
implemented in SRL's, required a full word of latency, which also
doubled the number of SRL's needed to retime the 26 stage SBOX delay
for the pipeline. As a result, the design ran with a toggle rate well
over your offered typical of "usually well under 50%". Most bit serial
brute force crypto designs will see the same high average toggle rates.

The same was true of the high density heat simulation model based on
LUT SRL's and bit serial MAC's (inspired by your web page). The
floating point data, with an assumed leading one, and normalized, has a
nearly random toggle rate in the bit serial stream. It was necessary
again to word interleave the engine to facilitate parallel loading of a
multiplier, which shifted the ratio of LUT SRL's up to retime the
interleaved data. And again, toggle rates overall were well above 50%,
actually near 75%, and nothing like your "usually well under 50%"
guideline.

Reflecting on these two "normal" bit serial designs, I would suggest
that heavily pipelined bit serial crypto and floating point math
engines which are hand packed and highly replicated will easily exceed
50% toggle rates. This is especially compounded when considering the
serial shift costs of LUT SRL's. Replacing LUT SRL's with LUT RAM's
certainly helps by removing the SRL bit shifting power, but doesn't
lower the design toggle rates below 50% ... they remain around 75%
unless there are significant data dependencies that cause long runs of
zeros or ones.

The problem is that the 75% becomes the average power, and there are
certainly valid data patterns that will occur which are higher for
brief periods (several word latencies of clocks). This is particularly
true where the whole design is synchronized by the same data seed, or a
single variable shared by all engines hits the worst case ...
101010..., at which point the number of bits changing state goes to
nearly 100% for many clocks ... and possibly a far higher power demand
than the device, package, or PCB decoupling can support if designed
based on a "usually well under 50%" rule of thumb. Cooling design can
float thru this; power design can not.

I do worst case design when at all possible ... that is difficult with
Xilinx parts, as the full specification provided doesn't come close to
answering the questions regarding peak VCCINT pin currents in the short
period following a clock transition for a specific design and data, or
how those currents translate into VCCINT voltage drops at the die.

Best case, for a large FPGA in a large package using a single clock,
the global clock network skew will diffuse the current peaks. That
however also slows the design down, prompting the designer to segment
the clock network into multiple domains, and in the process remove the
skew and increase the number of gates transitioning right after the
clock, so that the majority are well away from the next clock edge.
This implies that the 100W of peak average power is now time compressed
into a small number of time points clustered around typical LUT
propagation times and typical high density regular routing propagation
times. This time synchronization, which is a natural side effect of
optimizing the design for performance, also increases the probability
of instantaneous current spikes that are several times the long term
average ... many by as much as a factor of 10. As Austin puts it ... a
power "THUMP" that may leave the device unstable, and it should be part
of doing good worst case FPGA designs from my perspective. Especially
for reconfigurable computing FPGA engines, where worst case designs are
probable in this respect, and certainly not "unlikely".
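To put rough numbers on that thump (all values below are invented for
illustration, not taken from any datasheet or from Austin's posts):
average VCCINT current is simply P/V, and if most of the switching
charge is drawn in a small fraction of each clock period, the short
term peak scales by roughly the inverse of that fraction.

    #include <stdio.h>

    int main(void)
    {
        /* Illustrative numbers only -- not from any Xilinx datasheet. */
        double p_avg  = 20.0;   /* average core power, watts */
        double vccint = 1.5;    /* core supply voltage, volts */
        double period = 10.0;   /* clock period, ns (100 MHz) */
        double window = 0.2;    /* fraction of the period in which most
                                   LUT and routing transitions cluster */

        double i_avg  = p_avg / vccint;
        double i_peak = i_avg / window; /* same charge, less time */

        printf("average VCCINT current: %.1f A\n", i_avg);
        printf("peak if switching clusters in %.0f%% of a %.0f ns "
               "period: ~%.1f A\n", window * 100.0, period, i_peak);
        return 0;
    }

With these made-up numbers that is 13.3 A average and roughly 67 A of
short-term peak, a factor of five before considering any data-dependent
clustering on top.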
Article: 107243

zcsizmadia@gmail.com wrote:
> BTW I'm a hardcore Windows guy so this soldiers/enemy example wasn't
> funny at all :))

You're also a top poster sometimes :^). But anyway it's not directed at
you, if I were to sell out and move to windows I'd be betraying the
cause. You and most of the world are perfectly free to be in the
windows camp. More power to you.

I use windows when I have to but I'm just so much more effective when I
can use linux.

-Dave

--
David Ashley                http://www.xdr.com/dash
Embedded linux, device drivers, system architecture
Article: 107244

fpga_toys@yahoo.com wrote:
> > Do people use sock puppets in this newsgroup?
> Yes, and yes (if you call handles sock puppets).

Sock puppet, from Wikipedia:
http://en.wikipedia.org/wiki/Sockpuppet_%28internet%29

Sockpuppet (sometimes known also as a mule, glove puppet, or joke
account) is an additional account created by an existing member of an
Internet community pretending to be a separate person. This is done so
as to manufacture the illusion of support in a vote or argument or to
act without social effect on one's "main" account. This behaviour is
often seen as dishonest by online communities and as a result these
individuals are often labeled as trolls.

I only recently became aware of the term myself.

-Dave

--
David Ashley                http://www.xdr.com/dash
Embedded linux, device drivers, system architecture
Article: 107245

David Ashley schrieb:
> zcsizmadia@gmail.com wrote:
> > BTW I'm a hardcore Windows guy so this soldiers/enemy example wasn't
> > funny at all :))
>
> You're also a top poster sometimes :^). But anyway it's not
> directed at you, if I were to sell out and move to windows I'd
> be betraying the cause. You and most of the world are
> perfectly free to be in the windows camp. More power to you.
>
> I use windows when I have to but I'm just so much more
> effective when I can use linux.
>
> -Dave

there are different excuses and reasons for everything. As an example,
I use WinXP because:

1) it is capable of starting multiple instances of "Norton Commander"!!!
   well, sure, not very good at it, but manageable. yes, I have looked
   at Midnight Commander too, but I still stick to my old tools, e.g.
   the original NC (or actually its clone, FAR).

Antti
Article: 107246

Jan Panteltje wrote:
> > Right now
> > using FPGAs for stuff is very pain*ful* compared to doing the
> > same thing in software.
>
> No, it is not ( ;-) ), use the right tool for the right thing :-), FPGA
> is fun; state machines, if you must use them, are not so bad either.

When I say painful I'm mostly speaking of the turnaround time from
source change to test. 30 minutes to try something else on actual
hardware -- from a software development point of view that's a lot of
time. For a VHDL developer, depending on the project, that can actually
be considered *fast*... I just wish the build process wasn't so slow.
It's like batch processing again.

VHDL development itself *won't* be painful. I'm not familiar enough
with it yet so it's a little slow going. But this is, as you say, just
part of paying the price for learning a new skillset.

> 25 languages, fly a 747... As you pointed out, in case you really know
> the details and start optimising, as in that DSP case, it is fun :-)
> But for some it is a mystery, too difficult.

I'm glad you mentioned that it is fun. It was very fun doing that work.
This stuff to a lot of people is just work. VHDL and FPGA stuff is a
taste of the wonder and joy of discovery I had when I was 10 and
learning 8080 assembly language. The last time that fun with learning
occurred was when I was doing some OpenGL programming for the first
time.

Anyway, to get back onto the subject, what would be really cool is if
you could just code your program in 'c' or whatever language, and some
magical development tool would break it up into stuff to go onto an
fpga, and even if that takes hours to build, you only have to do it
*once*, and it is guaranteed to work the first time. Then the masses
would be able to use this technology for day to day stuff.

-Dave

--
David Ashley                http://www.xdr.com/dash
Embedded linux, device drivers, system architecture
Article: 107247

David Ashley wrote:
> This is done so as to manufacture the
> illusion of support in a vote or argument or to act without
> social effect on one's "main" account. This behaviour is
> often seen as dishonest by online communities and as a
> result these individuals are often labeled as trolls.

I certainly try to make sure I log in to the same handle for a
particular thread, though sometimes I may not. I've not seen much here
that indicates people are purposefully manipulating the discussion by
concurrently using different handles.
Article: 107248

fpga_toys@yahoo.com wrote:
> There are other posters that post from multiple email addresses, often
> work/home, some with different usernames/handles at the different
> sites. Not all expose their full legal name for both.

I have multiple accounts myself, but I list my name in all. Frankly, I
find it childish to make up new names for yourself. Always makes me
wonder what you're trying to hide.

As for your use of handles elsewhere, why exactly would we know or
care?

Tommy
Article: 107249

David Ashley wrote:
> Anyway, to get back onto the subject, what would be really cool is if
> you could just code your program in 'c' or whatever language, and some
> magical development tool would break it up into stuff to go onto an
> fpga, and even if that takes hours to build, you only have to do it
> *once*, and it is guaranteed to work the first time. Then the masses
> would be able to use this technology for day to day stuff.

The whole goal of creating an ANSI syntax and semantics C out of the
FpgaC project is exactly that: design, test, and debug in a normal "C"
environment, and run under FpgaC on FPGA's. It's gone a little slower
than I would like, as few people have good skill sets to join the "fun"
of doing FpgaC development at this stage, and I have late-life kids
still in school to support.
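For flavor, this is the sort of plain ANSI C such a C-to-gates flow is
meant to accept. The fir4 example below is invented for illustration
(it is not from the FpgaC distribution); the fixed loop bounds are the
point, since they let a compiler unroll everything into registers and
adders.

    /* Hypothetical example -- not from the FpgaC distribution.  A
       C-to-gates tool would typically unroll these fixed-bound loops
       into a small adder tree and map the arrays into LUT/FF
       resources rather than memory. */
    int fir4(int sample)
    {
        static int delay[4];                /* becomes a register chain */
        static const int taps[4] = { 1, 3, 3, 1 };
        int acc = 0;
        int i;

        for (i = 3; i > 0; i--)             /* shift the delay line */
            delay[i] = delay[i - 1];
        delay[0] = sample;

        for (i = 0; i < 4; i++)             /* fixed bound: fully unrollable */
            acc += taps[i] * delay[i];

        return acc;
    }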