Site Home Archive Home FAQ Home How to search the Archive How to Navigate the Archive
Compare FPGA features and resources
Threads starting:
Authors:A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
Yep, and that is the problem. A really useful tool for reconfigurable computing and self-hosted incremental compilers using FPGAs as computers would have been JHDLBits, a project stalled because the university was (as I understand it) unable to get a release to take the project open source, because of an NDA with Xilinx. A lot of the technology we could use for compile, load and go, supported with dynamic linking for reconfigurable computing with FpgaC, has been sitting NDA-locked for over a year.

Austin Lesea wrote:
> JBits,
>
> Is a Xilinx invention, developed here.
>
> Austin
>
> Eric Smith wrote:
>
> > Phil Tomson wrote:
> >
> >> It looks like JBits is a University-developed tool. Why wouldn't the source
> >> code be available?
> >
> > If it really was developed at a university, the university probably signed
> > an NDA with Xilinx to get the bitstream details.

Article: 94926
Peter Alfke wrote:
> Tobias, this subject has been discussed ad nauseam, in this newsgroup
> and elsewhere.
> The reason for the "secrecy" is not so much fear of giving away secrets
> to a competitor, but rather fear of becoming inundated with support
> issues. We have about 100,000 designers using our parts, a few dozen
> of them exploring and abusing subtle aspects could easily bring our
> support hotline (and this newsgroup) to its knees.

That is easily handled by a solid policy of "unsupported" features, which can be selectively waived by the company for selected fully paying customers with enough volume to merit a response.

> Also, the non-open nature of the bitstream provides our customers a
> certain level of security against reverse-engineering rip-off.

Security by obscurity has never worked ... just look at the weekly exploits of Microsoft Windows that result largely from reverse engineering. Any engineering team in the world that can manufacture a cloned product without legal recourse will do exactly that, via reverse engineering if necessary, including die-probing live parts if necessary. There just has to be an economic, social, or political incentive first.

> Our primary obligation is to remain an innovative and profitable
> company, to the benefit of our customers, our employees, and our
> shareholders. Satisfying exotic academic research is fine, as long as
> it does not conflict with the primary obligation.
> Just my personal opinion...
> Peter Alfke

Exotic academic, or hobby stage, engineering is where garage innovations create new industries.

Article: 94927
Why don't you use a "BUFG" instead of "BUFGMUX"?

JP Dullius

Article: 94928
weingart@cs.ualberta.ca (Tobias Weingartner) writes:
> In article <QFTyf.3436$bF.2359@dukeread07>, Ray Andraka wrote:
> > I've yet to see a compelling need for it that is not addressed by the
> > existing tools (there is always XDL if you really want to bit bang).
>
> Yes, but how to convert XDL into something that I can shoot into the FPGA?

Via NCD (xdl -xdl2ncd) and bitgen.

> But the tools are in the environment of your choosing.
>
> Ahh well, that is a bit of a limitation :-)

They may run under dosemu... They will under Wine, I seem to recall... You could port that to OpenVMS first :-)

Cheers,
Martin

--
martin.j.thompson@trw.com
TRW Conekt - Consultancy in Engineering, Knowledge and Technology
http://www.trw.com/conekt

Article: 94929
GMM50 wrote:
> Very clever.
>
> Let's see if I get it. Delay first access till required_wait_time is
> met. Then read as fast as possible until burst length is completed.
> Repeat till done.
>
> I need to look at last address accessed and apply rules for burst
> transfers.
>
> Thanks
> George

Looking in my PXA datasheet (I have a board with one, plus flash, SDRAM and all that junk), we find a couple of the parameters - I strongly suggest reading the developer manual so you get an idea:

MSC0/1 RDN. ROM Delay Next Access:
Address to data valid for subsequent access to burst ROM or Flash is equal to (RDNx + 1) memclks. nWE assertion for write accesses to SRAM is equal to (RDFx + 1) memclks. The nOE (nPWE) deassert time between each beat of read/write for Variable Latency I/O is equal to (RDNx + 2) memclks. For Variable Latency I/O, this number must be greater than or equal to 2.

RDF: ROM Delay First access.

  RDF programmed   RDF value interpreted
  0-11             0-11
  12               13
  13               15
  14               18
  15               23

Address to data valid for the first read access from all devices except VLIO is equal to (RDFx + 2) memclks. Address to data valid for subsequent read accesses to non-burst devices is equal to (RDFx + 1) memclks. nWE assertion for write accesses (which are non-burst) to all Flash is equal to (RDFx + 1) memclks. nOE (nPWE) assert time for each beat of read (write) is equal to (RDFx + 1) memclks for Variable Latency I/O (nCS[5:0]). For Variable Latency I/O, RDFx must be greater than or equal to 3.

So there are different parameters for the timing of the first access and of subsequent accesses - that's what I was doing above.

Cheers
PeteS

Article: 94930
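As a rough illustration of how those register fields translate into cycle counts, here is a small sketch. The function names are mine, invented for illustration; the formulas and the RDF interpretation table are taken directly from the datasheet text quoted above:

```python
# Sketch of the PXA memory-controller timing rules quoted above.
# Helper names are invented; the formulas and the RDF interpretation
# table come from the datasheet excerpt in the post.

RDF_INTERPRETED = {12: 13, 13: 15, 14: 18, 15: 23}

def interpret_rdf(rdf_programmed):
    """Map the programmed 4-bit RDF field to the value the controller uses."""
    if not 0 <= rdf_programmed <= 15:
        raise ValueError("RDF is a 4-bit field")
    # Programmed values 0-11 are taken as-is; 12-15 are expanded per the table.
    return RDF_INTERPRETED.get(rdf_programmed, rdf_programmed)

def first_read_memclks(rdf_programmed):
    """Address to data valid, first read access (all devices except VLIO):
    (RDFx + 2) memclks."""
    return interpret_rdf(rdf_programmed) + 2

def subsequent_burst_read_memclks(rdn_programmed):
    """Address to data valid, subsequent burst ROM/Flash accesses:
    (RDNx + 1) memclks."""
    return rdn_programmed + 1
```

For example, programming RDF = 15 gives an interpreted value of 23, so the first read takes 25 memclks, while a burst follow-on access with RDN = 3 takes only 4 - which is exactly the "slow first access, fast burst" pattern George describes.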
ptkwt@aracnet.com (Phil Tomson) writes:
> In article <uzmltkjm9.fsf@trw.com>,
> Martin Thompson <martin.j.thompson@trw.com> wrote:
> >ptkwt@aracnet.com (Phil Tomson) writes:
> >
> >> Where can one find more info on NCD and XDL file formats (and what the
> >> acronyms stand for)? Are you implying that if one has this NCD file
> >> that one can figure out the bitstream format?
> >>
> >> Phil
> >
> >As I understand it, the XDL is a textual representation of NCD. The
> >NCD is the native circuit database, which has pretty much everything
> >required to make a bitstream (logic, placement, routing, startup
> >values, BRAM contents etc). If you run "xdl -ncd2xdl" you can get the
> >XDL equivalent, hack it about, and then regenerate the NCD from the XDL
> >and from there go to a bitstream... You can also get a list of all
> >the resources in the device using the -report mode of "xdl".
> >
> >Presumably you could create various small designs in XDL, NCD them and
> >then convert to bitstreams, and by diffing the bitstreams figure out
> >what was going on. In theory you could also automate this...
>
> So xdl comes with the WebPack?

I believe so... maybe someone who runs WebPack (we have Foundation here) can jump in?

Cheers,
Martin

--
martin.j.thompson@trw.com
TRW Conekt - Consultancy in Engineering, Knowledge and Technology
http://www.trw.com/conekt

Article: 94931
"Martin Thompson" <martin.j.thompson@trw.com> schrieb im Newsbeitrag news:u8xtcb3o8.fsf@trw.com...
> ptkwt@aracnet.com (Phil Tomson) writes:
>
>> In article <uzmltkjm9.fsf@trw.com>,
>> Martin Thompson <martin.j.thompson@trw.com> wrote:
>> >ptkwt@aracnet.com (Phil Tomson) writes:
>> >
>> >> Where can one find more info on NCD and XDL file formats (and what the
>> >> acronyms stand for)? Are you implying that if one has this NCD file
>> >> that one can figure out the bitstream format?
>> >>
>> >> Phil
>> >
>> >As I understand it, the XDL is a textual representation of NCD. The
>> >NCD is the native circuit database, which has pretty much everything
>> >required to make a bitstream (logic, placement, routing, startup
>> >values, BRAM contents etc). If you run "xdl -ncd2xdl" you can get the
>> >XDL equivalent, hack it about and then regenerate the NCD from the XDL
>> >and from there go to a bitstream... You can also get a list of all
>> >the resources in the device using the -report mode of "xdl".
>> >
>> >Presumably you could create various small designs in XDL, NCD them and
>> >then convert to bitstreams and by diffing the bitstream figure out
>> >what was going on. In theory you could also automate this...
>>
>> So xdl comes with the WebPack?
>
> I believe so... maybe someone who runs WebPack (we have Foundation
> here) can jump in?

Just for the purpose of knowing what is in WebPack, I have it installed additionally, so I can answer: yes, XDL is included with the free WebPack. But the XDL (or NCD) does __not__ contain bitstream info; it holds the design info that is then mapped to the bitstream by bitgen. The NCD (which can be viewed as XDL after conversion) is used together with a BFD (NeoCad Bitstream Format Database?) file by bitgen for actual bitstream generation. There are some other files for each family, like GRD, etc. I am able to view pretty much all of the files used by the Xilinx tools (NGC, NCD, etc.), but the path from NCD to bitstream is not so 1:1. I have written some analyzer software for the BFD to see if there is some visible mapping, but the results have been a little confusing so far. Oh well, I don't have the time for such play, but reversing the bitstream info is for sure doable: just write some smart analyzer and bit-map database auto-generation software and let it run for a long time to gather the info for you :)

--
Antti Lukats

Article: 94932
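The diff-and-correlate approach described in this thread has a very small starting point. This sketch is my own illustration, not Antti's analyzer: it simply reports which absolute bit positions differ between two equal-length binary files, which is the raw data a bitstream-correlation database would be built from:

```python
def diff_bits(file_a, file_b):
    """Return the absolute bit positions at which two equal-length binary
    files differ. Bit 0 is the MSB of byte 0 (an arbitrary convention here;
    a real analyzer would match the device's configuration bit ordering)."""
    with open(file_a, "rb") as fa, open(file_b, "rb") as fb:
        a, b = fa.read(), fb.read()
    if len(a) != len(b):
        raise ValueError("bitstreams differ in length; compare like with like")
    positions = []
    for byte_index, (x, y) in enumerate(zip(a, b)):
        delta = x ^ y  # XOR exposes exactly the bits that changed
        for bit in range(8):
            if delta & (0x80 >> bit):
                positions.append(byte_index * 8 + bit)
    return positions
```

Running this over pairs of bitstreams generated from XDL designs that differ in a single LUT value or route, as suggested earlier in the thread, would associate design-level edits with bit positions; automating that over many generated designs is the "let it run for a long time" part.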
Is there an easy way to disable timing analysis between two unrelated clocks in ISE 7.1? I don't want to have to put in specific ignores for all of the paths. I know that there are no timing errors, since one clock is only used to write the registers, and the other clock only uses the contents of the registers when they are stable. Something like "ignore all analysis from CLKX to CLKY" would be ideal.

Thanks,
Chris

Article: 94933
Antti Lukats wrote:
> but the reversing of the bitstream info is for sure doable just need
> to write some smart analyzer and bit map database auto generation
> software and let it run for long time to gather the info for you :)
> Antti Lukats

You might want to look a little closer at the license for the WebPack, and any other license you have ever executed with Xilinx, as it wasn't that long ago that it contained very strong language about reverse engineering proprietary data.

With the current DMCA state, the law is hardly on the side of fair use for computer software or hardware owners these days. It's rather like owning a car, but being unable to remove the heads to repair a valve without getting sued. Or when the telcos prevented third-party phones from being attached to their systems. When that was finally lifted, an entire industry blossomed, bringing us cheap cordless phones and digital answering machines that would never have appeared under the PUC-mandated 10-year capital recovery limitation on hardware. Ever wonder why WeCo had to over-design the clunky 500 desk set?

Article: 94934
Or, for that matter, even high-speed consumer modems, as we would have remained stuck in the acoustic coupler days.

Article: 94935
mail@deeptrace.com wrote:
> Is there an easy way to disable timing analysis between two unrelated
> clocks in ISE 7.1? I don't want to have to put in specific ignores for
> all of the paths. I know that there are no timing errors since one
> clock is only used to write the registers, and the other clock only
> uses the contents of the registers when they are stable. Something
> like ignore all analysis for CLKX to CLKY would be ideal.

TIMESPEC "TS_a_to_b" = FROM "clk_a" TO "clk_b" TIG;
TIMESPEC "TS_b_to_a" = FROM "clk_b" TO "clk_a" TIG;

The TIG timespec basically means 'timing path ignore'. This should do the trick for you.

cheers,
aaron

Article: 94936
I probably should add that the whole process for assembling FPGA bitstreams is optimized poorly for reconfigurable computing - if not outright wrong and backward. In hardware design you break the design into one or more clock domains and fine-tune the design's timing constraints in those clock domains. For reconfigurable computing, as a commodity general-purpose processor engine, exactly the opposite process needs to occur. The hardware WILL have fixed clock rates available, and maybe even a few settable ones, but in general the compiler needs to match code blocks to available clock periods, even if it's mismatched by as much as a factor of 2-5. With some basic worst-case timing data for the target FPGAs this is easily done, and the compiler can bin-sort statement blocks into clock domains on the fly, emitting synchronizers when necessary to cross clock domains.

This allows statement groups which have horrible combinatorial delays to run at one clock speed, and other statement groups with very flat netlists and little to no combinatorial delay to run at a faster clock rate ... "all the compiler needs" is some clues about general timing costs and the actual runtime target capabilities. In some cases this can even be done by fixups in the "runtime linker" just prior to loading.

The practical clocking environment for this would be a series of edge-synchronized clocks spread by a factor of two, three, four or five in time (so rising edges always match) to avoid synchronizers completely. In such a target execution environment, the one-hot state machine gating the execution flow can freely cross clock domains. Designs with a runtime life in days can surely have their clocks fine-tuned, but for most applications this granularity is quite reasonable.

Now, how practical is it to hand a netlist with 8 interleaved clock domains to your favorite vendor's place and route tools, and get back verification of setup and hold times for this environment?

Article: 94937
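The bin-sort pass described above can be sketched in a few lines. This is entirely illustrative (the delay figures, clock set and data structures are my assumptions, not FpgaC internals): given a small set of edge-synchronized clock periods related by integer factors, each statement block is binned into the fastest clock whose period still covers its worst-case combinatorial delay:

```python
# Illustrative bin-sort of statement blocks into clock domains.
# Clock periods are assumed edge-synchronized (integer multiples of the
# fastest), so rising edges line up and domain crossings need no
# synchronizers, as the post suggests.

def assign_clock_domains(blocks, periods_ns):
    """blocks: {name: worst_case_combinatorial_delay_ns}
    periods_ns: available clock periods in ns.
    Returns {name: assigned_period_ns}."""
    periods = sorted(periods_ns)  # fastest clock first
    assignment = {}
    for name, delay in blocks.items():
        # Pick the fastest clock whose period still covers the delay.
        for period in periods:
            if delay <= period:
                assignment[name] = period
                break
        else:
            raise ValueError(
                f"{name}: delay {delay} ns exceeds the slowest clock period")
    return assignment
```

With periods of, say, 10/20/40 ns, a flat netlist with 8 ns of logic lands in the 10 ns domain, while a deeply combinatorial block with 35 ns of delay lands in the 40 ns domain - mismatched by a factor in the 2-5 range, exactly the trade-off accepted above.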
<fpga_toys@yahoo.com> schrieb im Newsbeitrag news:1137688374.809626.35200@o13g2000cwo.googlegroups.com...
> You might want to look a little closer at the license for the web pack,
> and any other license you have ever executed with Xilinx, as it wasn't
> that long ago that it contained very strong language about reverse
> engineering proprietary data.
>
> With the current DMCA state, the law isn't hardly on the side of fair
> use for computer software or hardware owners these days. It's
> terribly like owning a car, but unable to remove the heads to repair
> a valve without getting sued.
>
> Or when the telco's prevented third party phones from being attached
> to their systems. When finally lifted an entire industry blossomed
> bringing us cheap cordless phones and digital answering machines that
> would never have appeared with the PUC mandated 10 year capital
> recovery limitiation on hardware. Ever wonder why WeCo had to over
> design the clunky 500 desk set?

Well, I am not doing anything, I just know what can or could be done :) [pretty much anything...]

The Atmel bitstream info is all known, and it's fully runtime reconfigurable, so it makes way more sense to go with an Atmel FPGA/FPSLIC if someone wants self- or dynamically-reconfiguring FPGA systems.

Antti

Article: 94938
Exactly what I wanted. I didn't realize that it was OK to have a FROM/TO with just named clock nets. I tried it and it got rid of a slew of non-errors.

Thanks,
Chris

Article: 94939
What board do you have? Is that a Digilent programming cable?

Article: 94940
Hi, nice info... can someone tell me if the USB cable works on Linux?

Ivan

Andreas Ehliar wrote:
> I just installed ISE 8.1 on Linux and these are my first
> impressions:
>
> * Project Navigator finally feels like a native Linux
> program. Previous versions often felt unresponsive and
> slow. With this version I no longer feel an immediate
> urge to build everything with Makefiles. This is great!
>
> * Impact does not work out of the box with kernel
> version 2.6.15.1. I had to download linuxdrivers2.6.tar.gz
> and compile it. Furthermore, I had to edit the configure
> script in windrvr and make sure that UDEV was not used.
> (The udev interface seems to have changed in the later 2.6.x
> series. The relevant symbols are also GPL-only now, so I don't
> think a binary-only module can be distributed using UDEV in later
> 2.6.x kernels.)
> I also had to install fxload to download the firmware to the
> programming cable and make sure /proc/bus/usb was mounted.
>
> All in all, I got it to work, but I really wish that Xilinx
> could remove the dependence on windriver. It is a real nuisance
> if you have to upgrade your kernel for whatever reason, since you
> will need to recompile the kernel module in that case. If you
> happen to use parallel cable III or IV you can use XC3Sprog instead.
> You have to modify the program somewhat if you want to use it with
> Virtex-II FPGA:s. (You have to make sure that it recognizes the FPGA.)
> I haven't tested V2P or V4 FPGA:s with it though.
>
> * Test benches seem to be handled much more sanely in Project
> Navigator. You can now decide, for each source file, whether it should
> be used for simulation, synthesis or both.
>
> /Andreas

Article: 94941
Hi,

I just used MIG to generate the memory access interface to the DDR memory. But now I am confused as to how to use the files.

Should I include the .vhd files in my project, then declare a component of the library and write my own code for accessing the interface? Or what?

Does anybody know whether there is any reference design for this?

Thank you so much!
Roger

Article: 94942
"agou" <agou.win@gmail.com> schrieb im Newsbeitrag news:1137696646.484554.7120@g43g2000cwa.googlegroups.com...
> Hi,
>
> I just used MIG to generate the memory access interface to the DDR
> memory. But now I am confused as to how to use the files?
>
> Should I include in my project the .vhd files and then declare a
> component of the library and write my own accessing code to the
> interface? Or whatever?
>
> Does anybody know whether there is any reference design on this?
>
> Thank you so much!
> Roger

We had the same issue, and when we asked for help here no one responded. So: we generated a DDR2 thing with MIG, implemented it in the FPGA, connected the status LED - and then what?

I was looking at the MIG datasheet, and it was clearly saying that the DCR (dynamic command request) on the upper address lines is optional, but it is actually not don't-care; it does matter. So for DDR2 you place 101 or 100 on the upper address bits (for read or write), set the address, set AF_WREN, and then the DDR2 IP core works. But there doesn't seem to be any reference design at all, so you need some kind of glue yourself.

The above is for DDR2 generated by MIG; the DDR may be a little different, but you most likely also need the same top-level glue. Check the datasheet for the dynamic command, etc.

Antti

Article: 94943
Yes, I feed a clock output pin.

I tried with BUFG and BUFGCE and they don't work. I guess the reason is that they are only used to distribute the clock signal to synchronous elements, not to output pads.

For the moment, the only solution I found to work is the FDDRCPE suggested by Symon. I just wonder if there is anything simpler than that.

Regards,
JL

Article: 94944
"Eli Hughes" <emh203@psu.edu> schrieb im Newsbeitrag news:dqlpfm$1cd4$1@f04n12.cac.psu.edu...
> When can we expect an EDK that corresponds to the newest ISE release?

Should be this month, unless it gets delayed more than expected.

Antti

Article: 94945
"Paul Hartke" <phartke@Stanford.EDU> schrieb im Newsbeitrag news:43CE7CB7.E8330EDA@Stanford.EDU...
> I've looked into doing this several times but never found a working
> solution.
>
> I ended up creating my own HDL for the memory that instantiates BRAM
> primitives and wrote a corresponding bmm file as described in the
> documentation. That's worked fine even if it is a little more hassle.
>
> Paul
>
> langwadt@ieee.org wrote:
>>
>> Hi,
>>
>> Does anyone have a hint on how to get data2bram and coregen memory to
>> work together?
>>
>> I have an SoC with some 32-bit memory made up of four 8-bit memories
>> generated with coregen; I've made a bmm file that defines the memory.
>> I can run data2bram with the bmm file and an .elf file, and if I set the
>> output to Verilog the init strings look reasonable.
>> If I run the same bmm and .elf file on my bit file and use the updated
>> bit file to configure an FPGA, DONE doesn't go high, so I assume the bit
>> file is corrupt.
>>
>> Is there a trick I should know about?
>>
>> data2bram does give me a warning that the memory is not LOC'ed; is there
>> a simple way to get that info when the memory is generated with coregen?
>>
>> (xc2v3000 and ISE 5.1)
>>
>> -Lasse

data2bram seems to work properly only when used by the EDK. The BMM file that it requires *must* have LOC info.

One possibility is to LOC the BRAMs in the UCF and then inject the same LOC info into the BMM manually - doable, but a hassle.

The BMM issue has been a problem all the time; maybe it's better in 8.1, I haven't checked, but so far there really hasn't been any joy when attempting to use it.

Antti

Article: 94946
Hi Antti,

Thanks! I am really a newbie on FPGAs. So could you tell me how you made it work? Did you import all the .vhd files into the project? And then declare the component and ports yourself? Design the logic to start the reads and writes? And so on?

Thanks!
Roger

Article: 94947
JL wrote:
> Yes, I feed a clock output pin.
>
> I tried with BUFG and BUFGCE and they don't work. I guess the reason is
> that they are only used to distribute the clock signal to synchronous
> elements, not to output pads.
>
> For the moment, the only solution I found to work is the FDDRCPE
> suggested by Symon. I just wonder if there is anything simpler than
> that.

Easier? Really, it is not that hard, is it ;) Anyway, you probably want to use the DDR method. There is no direct connection from the clock net to the output pads, so what ends up happening if you don't use the DDR method is that the clock signal gets routed over some local routing paths. The delay to the pad then depends on the pin being routed to, and can even vary from one PAR session to the next. Assuming you are also outputting some other signals (usually the case), you now have the problem of an unpredictable relation between the output clock and the other signals.

With the DDR method, the delay will never vary, regardless of the pin or PAR session. And the relation to other signals will be fixed; that is, one edge of the clock will almost exactly match the data transition, and the other edge will always fall almost exactly in the middle of the data.

Article: 94948
"agou" <agou.win@gmail.com> schrieb im Newsbeitrag news:1137698558.814088.127940@g43g2000cwa.googlegroups.com...
> Hi, Antti
>
> Thanks! I am really a newbie on FPGA. So could you tell me how did you
> make it? Did you imported the project all the .vhd files? And then
> declare yourself the component and ports? Design the logic to start the
> read and write? And so on? bla bla
>
> Thanks!
> Roger

Well, I don't know the DDR with MIG, only the DDR2. In that case it generates the FPGA top test module, which includes a special FPGA-synthesizable test unit. The test unit makes some writes and reads and has an LED for pass/fail status. So I just created an ISE project, added all the files, fixed the UCF, and looked to see whether my status LED was on or off.

For MIG-generated DDR I don't know if the testbench is generated or not; if not, then you need to write it yourself. If you are really an FPGA newbie, then it's better not to fuzzle around with MIG if you only need DDR; the reason we used MIG was its support for DDR2 memories. For DDR, just use EDK to generate a system and verify that it works. Later you can decide which DDR memory core to use.

Antti

Article: 94949
As far as I can tell, the model is that a bmm file without placement info is fed to ngdbuild with the -bm option. Each BRAM is associated with a hierarchical name and its location in the memory address space. That info is passed through the design flow until bitgen. Bitgen recognizes this info in the design database and creates a new *_bd.bmm file which includes the hierarchical names and the physical location information. data2mem uses the memory address space info and the physical location information to insert the elf/mem into the FPGA bitstream.

EDK generates the original bmm; currently coregen does not, and it's not clear what the BRAM hierarchical names are in the internal database. There is no need to manually look into floorplanner and/or UCF LOCs to use data2mem if you can figure out what the BRAM hierarchical names are. Since coregen doesn't automatically export a bmm with its internal hierarchical names, the floorplanner *.fnf file can be used to determine them. Doing it this way allows the placement to change while the back-end flow keeps working, once you know exactly which coregen memories are needed.

Paul

Antti Lukats wrote:
>
> data2bram seems to work properly only when used by the EDK.
>
> the BMM file that it requires *must* have LOC info
>
> on possibility is to LOC the BRAMs in the UCF and then inject the
> same LOC info into BMM manually, doable but hassle
>
> the BMM issue has been a problem all the time, maybe its better
> in 8.1 havent checked but so far it really hasnt be any joy when
> attempting to use it
>
> Antti