Rob wrote:
> Is a better solution to meticulously make sure all registers are
> initialised when they are declared?

Not for the synth code. Most of my unintended metalogic sim values stem
from improper testbench stim, not from the synthesis code. I like to hold
init values on the stim signals until the reset process is complete.
Something like:

  begin -- process tbmain
     wen_s   <= '0';
     ren_s   <= '0';
     burst_s <= '0';
     wait until rst_s = '0';
     wait until rising_edge(clk_s);
     ...

> or is it perfectly safe, not
> necessarily poor coding practise?

More a question of useful than safe. In a blockram cache, I'd *prefer* to
see my data filling up a block of 'X' bits rather than '0' bits.

-- Mike Treseler

Article: 133126
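For reference, a minimal self-contained testbench built around that
stimulus style might look like the sketch below. The DUT hookup is
omitted, and the clock period, reset length and the single write pulse
are invented purely for illustration:

  library ieee;
  use ieee.std_logic_1164.all;

  entity tb_sketch is
  end entity tb_sketch;

  architecture sim of tb_sketch is
     signal clk_s   : std_logic := '0';
     signal rst_s   : std_logic := '1';
     signal wen_s   : std_logic;
     signal ren_s   : std_logic;
     signal burst_s : std_logic;
  begin
     clk_s <= not clk_s after 5 ns;   -- 100 MHz test clock
     rst_s <= '0' after 95 ns;        -- release reset after a few cycles

     tbmain : process
     begin
        -- hold defined init values on every stimulus signal ...
        wen_s   <= '0';
        ren_s   <= '0';
        burst_s <= '0';
        -- ... until the reset sequence is over, then sync to the clock
        wait until rst_s = '0';
        wait until rising_edge(clk_s);
        -- real stimulus starts here; a single write strobe as an example
        wen_s <= '1';
        wait until rising_edge(clk_s);
        wen_s <= '0';
        wait;                         -- end of stimulus
     end process tbmain;
  end architecture sim;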
"austin" <austin@xilinx.com> wrote in message news:g3b5pt$sra1@cnn.xsj.xilinx.com... > http://www.xilinx.com/ise/logic_design_prod/webpack.htm > > RTWP*, Some suggest support for SXT; some say there is no such support. Which is correct?Article: 133127
Pete,

Do you have a specific part number? I will look into it for you.

I know there are some very small omissions/confusions regarding some
parts and/or speed grades. For these there are "corrective actions" being
worked (to make documentation consistent and accurate).

Austin

Article: 133128
On Jun 18, 12:19 pm, John_H <newsgr...@johnhandwork.com> wrote:
> Beantown wrote:
> > Hello, I am using a Xilinx Spartan-3 FPGA (XC3S200-4), and I am having
> > difficulty mapping the Digital Clock Manager (DCM) output onto my
> > clock net using a global buffer.
> >
> > My intent is to take my external clock, input it into the DCM, and use
> > the double-frequency output, X2, to run my internal logic. I used the
> > Xilinx Wizard to generate code for the DCM, and I have an instance of
> > that DCM in my VHDL file. The Wizard does map the X2 output to a
> > Global Buffer, BUFG, but when I assign that output to my clock net, it
> > looks like it is on a local routing line.
> >
> > Should I assign my clock net to another BUFG? This looks like a waste
> > of resources. I should only have to use one Global Buffer for the X2
> > output. I have attached my VHDL code below. Thanks for the help.
>
> <snip code>
>
> What makes you think the clock is local routing?
>
> You should be able to use FPGA Editor to see the DCM and BUFG
> instances and the associated clocks.
>
> If you see the clock jumping off the global net, perhaps there's a
> mixup with the appropriate device half or quadrant that the BUFG
> drives versus the location of the driven logic.
>
> Good luck,
> - John_H

Hi John,

The tool reports that there could be excessive skew on the clk signal.
I also have this signal mapped to an IO pin. Maybe that is what the tool
is complaining about?

Thanks for the suggestion of using the FPGA Editor. I will take a look to
see what is actually going on.

Article: 133129
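For anyone hitting the same question: the usual cause is clocking the
internal logic from the raw DCM output instead of the BUFG output. A
stripped-down sketch of the intended structure follows; the DCM port list
is trimmed to the essentials, the entity and signal names are made up,
and the wizard-generated component (with its generics) would normally
take the place of the bare DCM instance:

  library ieee;
  use ieee.std_logic_1164.all;
  library unisim;
  use unisim.vcomponents.all;

  entity clk2x_sketch is
     port (
        clk_in : in  std_logic;
        rst    : in  std_logic;
        q      : out std_logic);
  end entity clk2x_sketch;

  architecture rtl of clk2x_sketch is
     signal clk0_unbuf, clk0_buf   : std_logic;
     signal clk2x_unbuf, clk2x_buf : std_logic;
     signal toggle                 : std_logic := '0';
  begin
     -- DCM: CLK0 is fed back (buffered) on CLKFB, CLK2X runs the logic.
     -- Unused outputs are left open; the wizard normally fills in the
     -- generics (CLKIN_PERIOD and friends).
     dcm_i : DCM
        port map (
           CLKIN    => clk_in,
           CLKFB    => clk0_buf,   -- feedback taken after the BUFG
           RST      => rst,
           DSSEN    => '0',
           PSCLK    => '0',
           PSEN     => '0',
           PSINCDEC => '0',
           CLK0     => clk0_unbuf,
           CLK2X    => clk2x_unbuf);

     bufg_clk0  : BUFG port map (I => clk0_unbuf,  O => clk0_buf);
     bufg_clk2x : BUFG port map (I => clk2x_unbuf, O => clk2x_buf);

     -- internal logic is clocked by clk2x_buf, never by clk2x_unbuf
     process (clk2x_buf)
     begin
        if rising_edge(clk2x_buf) then
           toggle <= not toggle;   -- placeholder logic
        end if;
     end process;

     q <= toggle;
  end architecture rtl;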
"austin" <austin@xilinx.com> wrote in message news:g3bhpv$ss01@cnn.xsj.xilinx.com... > Pete, > > Do you have a specific part number? I was going to develop using the ML506, so that would be the XC5VSX50T FFG1136. Although the actual product might use the FF665C. http://www.xilinx.com/ise/logic_design_prod/webpack.htm mentions support for SXT, but http://www.xilinx.com/publications/prod_mktg/pn0010867.pdf doesn't. > I will look into it for you. Thanks, PeteArticle: 133130
>Is it important to not use an initial condition controlled by a global
>reset? In FPGAs this is typically free, or almost free to use it
>properly. In ASICs I guess it is a different matter.

Metastability.
--
These are my opinions, not necessarily my employer's.
I hate spam.

Article: 133131
Beantown wrote:
>
> Hi John,
>
> The tool reports that there could be excessive skew on the clk
> signal. I also have this signal mapped to an IO pin. Maybe that is
> what the tool is complaining about?
>
> Thanks for the suggestion of using the FPGA Editor. I will take a
> look to see what is actually going on.

You probably have it figured out. What I like to do to get clocks to the
outputs is use the IOB DDR flops with opposite DC values on the D inputs
for the two phases. I get a clock that stays 100% global and is better
correlated to output data generated by that clock.

- John

Article: 133132
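For anyone who hasn't used that trick, a minimal sketch of the IOB DDR
clock-forwarding idea is shown below. It uses the ODDR2 primitive found
in the Spartan-3E/3A/6 libraries (plain Spartan-3 and Virtex-II provide
an equivalent DDR output register under a different name), and the entity
and signal names are made up:

  library ieee;
  use ieee.std_logic_1164.all;
  library unisim;
  use unisim.vcomponents.all;

  entity clk_forward_sketch is
     port (
        clk     : in  std_logic;    -- internal global clock
        clk_out : out std_logic);   -- forwarded copy for the board
  end entity clk_forward_sketch;

  architecture rtl of clk_forward_sketch is
     signal clk_n : std_logic;
  begin
     clk_n <= not clk;   -- tools normally absorb this into the IOB clock inversion

     -- D0/D1 carry opposite constants, so Q toggles on every edge of clk
     -- and a copy of the clock leaves the chip through the IOB register,
     -- well aligned with other outputs clocked by the same net.
     oddr_i : ODDR2
        generic map (
           DDR_ALIGNMENT => "NONE",
           INIT          => '0',
           SRTYPE        => "SYNC")
        port map (
           Q  => clk_out,
           C0 => clk,
           C1 => clk_n,
           CE => '1',
           D0 => '1',   -- value driven on the rising edge of clk
           D1 => '0',   -- value driven on the falling edge of clk
           R  => '0',
           S  => '0');
  end architecture rtl;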
What do you guys think about this?
http://www.embeddedtechjournal.com/articles_2008/20080617_nvidia.htm

I had heard about CUDA and GPU acceleration for HPC applications before,
but this time I feel (like the author, Kevin Morris) that this solution
is getting traction. I know some guys who, three years ago, tried to use
an Nvidia GPU to do FFTs and had to work at the OpenGL level. Not very
friendly. Now with CUDA getting more mature (and Tesla getting 64-bit
floating point), it looks like it's becoming a nice alternative to FPGAs.

Patrick

Article: 133133
Patrick Dubois wrote:
> What do you guys think about this?
> http://www.embeddedtechjournal.com/articles_2008/20080617_nvidia.htm
>
> I heard about CUDA and GPU acceleration for HPC applications before
> but this time I feel (like the author Kevin Morris) that this solution
> is getting traction. I know some guys that three years ago tried to
> use a Nvidia GPU to do FFTs and had to work at the OpenGL level. Not
> very friendly. Now with CUDA getting more mature (and Tesla getting 64
> bits floating point), it looks like it's becoming a nice alternative
> to FPGAs.

and also this:

http://www.embedded.com/products/softwaretools/208700454

["SAN JOSE, Calif. — Apple Inc. has submitted the Open Compute Language
(OpenCL) to an ad hoc industry group that aims to define a programming
environment for applications running across both x86 and graphics chips.
The move is one of a growing number of efforts to extend the ubiquitous
C language for increasing parallelism in multicore processors."]

Next, will we see boards with GPU and FPGA?

-jg

Article: 133134
Pete,

There are no V5 SX devices in webpack.

Austin

Article: 133135
Hi Faza,

There are lots of ways of doing this, so I doubt there is such a thing as
an industry standard. But here is the process I use.

1) Run a fixed-point simulation in Matlab using the fixed-point toolbox.
Since for FPGA design there is a direct relationship between wordlength
and logic usage, my goal for the fixed-point simulation is to find the
minimum wordlengths that give the required performance. Wordlength is
only half the story though. Determining the right scaling and location of
the decimal point is the other issue I need to sort out in the fixed-point
sim. Just imagine a 16-bit unsigned binary number. I can put the decimal
point anywhere I want, but it's a tradeoff between range and quantisation
noise. Considering the extremes: with the decimal point at the leftmost
position I can represent only a small range of numbers (0 -> 1), but the
quantisation noise is a minimum. With the decimal point in the rightmost
position I can represent a much larger range (0 -> 65536), but the
quantisation noise is now much larger. In general the range must be just
large enough to avoid overflow; any more and the quantisation noise is
larger than it needs to be.

2) Implement the fixed-point model in hardware.

> fixed point = 0.211944580078125 or
> 16-bit signed integer = 13890 or
> fixed point binary = 0011011001000010

The key thing to realise, and I think you almost understand it, is that
the hardware representation of these numbers is identical - it's just
your "real world" interpretation of it that is different in each case.
The hardware that uses these three representations is the same for each
case. The first two numbers just have the decimal point in a different
place; the last one you have not specified the decimal point location at
all.

I use VHDL with the signed/unsigned data types, and I use comments to
keep track of the decimal point location. The decimal point is 'implied'.
I use the fixed-point toolbox fi() function to quantise floating-point
filter coefficients, and to produce hex strings of the fixed-point
numbers. The VHDL reads these strings into the design.

> binary? using which tool? I suspect the MATLAB fixed point tool will be

Yep, Matlab's fixed-point toolbox is great for this. Type "help fi" and
search the Matlab help. e.g.

  a = fi( 0.213412, 1, 16, 15 )  % signed fixed-point number, 16 bits, 15 fractional bits
  a.bin                          % get binary value
  a.int                          % get integer value

Good luck
Andrew

On Jun 19, 12:09 am, faza <fazulu.v...@gmail.com> wrote:
> Hai,
>
> I want to know which is the right way of implementing and using
> fixed-point number data types in hardware (industry standard). I have
> referred to various FIR implementations where they mostly handle filter
> coefficients as integer (truncating from fixed or floating point using
> MATLAB) or binary. Is it difficult to handle and implement real
> (fraction) numbers, i.e. filter coefficient values, directly in the
> hardware?
>
> for example:
>
> sample filter coefficients generated by the FDA tool:
>
> fixed point = 0.211944580078125 or
> 16-bit signed integer = 13890 or
> fixed point binary = 0011011001000010
>
> all the above are equivalent but belong to different data types. Now I
> am confused which to select for implementation in my code..
>
> Note:
>
> Fixed-point representation looks challenging as some synthesis tools do
> not support it.
> Signed integer looks simple but less accurate.
> Fixed-point binary looks tedious..
>
> Pls suggest if anyone knows how to convert fixed point to integer or
> binary? Using which tool? I suspect the MATLAB fixed point tool will be
> useful but I don't know the procedure..
>
> regards,
> faza

Article: 133136
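To make the 'implied binary point' idea concrete, here is a small VHDL
sketch using the numbers from the question. The entity and signal names
are invented; the only point is that the coefficient is stored as a plain
signed vector and the binary point lives only in the comments:

  library ieee;
  use ieee.std_logic_1164.all;
  use ieee.numeric_std.all;

  entity coef_mult_sketch is
     port (
        clk   : in  std_logic;
        x_in  : in  signed(15 downto 0);    -- input sample, 15 fractional bits
        y_out : out signed(31 downto 0));   -- product, 15 + 16 = 31 fractional bits
  end entity coef_mult_sketch;

  architecture rtl of coef_mult_sketch is
     -- The three forms in the question are the same 16 bits:
     --   0.211944580078125 = 13890 * 2**(-16) = "0011011001000010"
     -- i.e. a signed 16-bit word with 16 implied fractional bits.
     constant COEF : signed(15 downto 0) := to_signed(13890, 16);
     signal prod : signed(31 downto 0);
  begin
     process (clk)
     begin
        if rising_edge(clk) then
           prod  <= x_in * COEF;   -- full-precision 32-bit product
           y_out <= prod;          -- truncate/round here if fewer bits are kept
        end if;
     end process;
  end architecture rtl;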
"Pete Fraser" <pfraser@covad.net> wrote in message news:6fmdnedUUsw6zsTVnZ2dnUVZ_tfinZ2d@supernews.com... > > "austin" <austin@xilinx.com> wrote in message > news:g3bhpv$ss01@cnn.xsj.xilinx.com... >> Pete, >> >> Do you have a specific part number? > > I was going to develop using the ML506, so that would be > the XC5VSX50T FFG1136. Although the actual product > might use the FF665C. > > http://www.xilinx.com/ise/logic_design_prod/webpack.htm > mentions support for SXT, but > http://www.xilinx.com/publications/prod_mktg/pn0010867.pdf > doesn't. I have the webpack installed. I compared the PDF to device selections for V2P, V4, and V5. The Webpack does not support any SXT. The following devices are selectable in Webpack, but missing in the PDF list: XC5VLX20T XC2VP20 VC2VP30 The V4 device list matches available selections. The Spartan3 list also looks correct.Article: 133137
As commodity PC hardware and productivity applications decline in price,
EDA tools are as (relatively) expensive as ever, necessitating yet another
discussion of "Which simulator is right for me?"

The contenders are ...

1) Aldec Active-HDL
+ great design-flow assistants (state-diagram, block-diagram,
  waveform-diagram editing, export to PDF)
+ possibly faster than Modelsim/PE?
- no direct support in FPGA design-suites (Webpack/Quartus)
- Windoze only (can WINE 1.0 run it?)
- SystemVerilog is almost but not quite usable ('package' not supported?!?)
"less than $6000 for mixed-lang. VHDL+Verilog simulation"
(Note, that configuration is the most basic, doesn't have SWIFT/Smartmodel)
[first-year perpetual license, yearly maint. is an additional 20%/year]

2) Mentor Modelsim PE
+ currently more solid SystemVerilog support than Active-HDL
  (but limited to design constructs, no assertions/coverage)
+ de-facto industry standard, direct integration into FPGA design-suites
  (Webpack/Quartus)
+ SWIFT/Smartmodel support (no extra cost if using the mixed-HDL license)
- I really don't like the integrated waveform viewer
- Windoze only (can WINE 1.0 run it?)
"less than $10,000 for mixed-lang. VHDL+Verilog simulation"
[first-year perpetual license, yearly maint. is an additional 20%/year]

3) FPGA-vendor OEM solution (usually a crippled Modelsim/PE)
+ cheapest
+ Altera Modelsim officially supports Linux (Xilinx does not)
+ Xilinx Modelsim has the same level of (design construct) SystemVerilog
  support as Modelsim/PE, quite good actually
- limited capacity, deliberately slower runtime performance
- term-based only (no perpetual license for Xilinx/Altera?)
- no mixed-HDL (VHDL+Verilog) -- deal-killer for me...
"less than $1500 for 1-language, 1-year license"

If I only had to do 'abstract' RTL design (algorithm proof, no
device-dependent instantiations...)

*4) gHDL, Icarus Verilog
+ free, open-source VHDL, Verilog
- emacs/gvim not included
- no mixed-HDL (VHDL+Verilog) sim

.............

Kidding aside, my real requirements:

1) I foresee mixed-HDL as a *requirement* for any serious consulting job.
(Xilinx and Altera are pretty good about providing 'HDL-neutral IP', but
third parties aren't.)

2) ASIC sign-off is obviously not a concern -- who's going to compete with
a professional turn-key bureau?

3) Design size (capacity) is an unknown. For front-end (RTL) simulation, I
think even the OEM Modelsims are adequate. But for gate-level, that might
push them over the limit. It's interesting that even a 'budget' <$500
FPGA board already has sufficient gate capacity to overwhelm a single
designer... progress!

4) Validation/qualification with the FPGA vendor. I like Active-HDL's user
interface more than Modelsim's, but I can't escape the fact that
Modelsim/PE has wider industry endorsement. It's hard to argue with the
management types who're more interested in checkboxes than in the less
tangible things (oh ... like ... employee productivity?)

Finally, I note the irony of the Modelsim/Altera and Modelsim/Xilinx
editions. Altera Quartus-II supports SystemVerilog synthesis, quite well,
actually. But Altera's Modelsim is based on the aging 6.1g version, which
is regrettably limited. Xilinx Webpack doesn't support SystemVerilog, but
their Modelsim/XE is based on the more recent 6.3c codebase. I find it
useful for testbenching, though too many colleagues heckle me for my
SystemVerilog "religion." (I believe in it, and so should they.)

Article: 133138
Jim Granville wrote:
> Patrick Dubois wrote:
>> What do you guys think about this?
>> http://www.embeddedtechjournal.com/articles_2008/20080617_nvidia.htm
>>
>> I heard about CUDA and GPU acceleration for HPC applications before
>> but this time I feel (like the author Kevin Morris) that this solution
>> is getting traction. I know some guys that three years ago tried to
>> use a Nvidia GPU to do FFTs and had to work at the OpenGL level. Not
>> very friendly. Now with CUDA getting more mature (and Tesla getting 64
>> bits floating point), it looks like it's becoming a nice alternative
>> to FPGAs.
>
> and also this:
>
> http://www.embedded.com/products/softwaretools/208700454
>
> ["SAN JOSE, Calif. — Apple Inc. has submitted the Open Compute Language
> (OpenCL) to an ad hoc industry group that aims to define a programming
> environment for applications running across both x86 and graphics chips.
> The move is one of a growing number of efforts to extend the ubiquitous
> C language for increasing parallelism in multicore processors."]
>
> Next, will we see boards with GPU and FPGA?
>
> -jg

Speaking of FPGA alternatives, this recently caught my eye. Don't know
much about it, but it sure looks cool:

http://www.tilera.com/products/processors.php

-Jeff

Article: 133139
I am running Synplify from the Lattice ispLever tools, and every time I
compile the design Synplify beeps when it is started and beeps when it
finishes. It is a lot louder than the other sounds the computer makes and
is a very irritating noise. They have a control to turn it off, but I
can't figure out how to make the control stay off from one run to the
next. It seems to default to beeping.

Anyone know where this configuration is stored? How do I turn it off
permanently?

Rick

Article: 133140
faza wrote:
> If i use fixed to integer conversion using left shift operation...I am
> not sure i may be thrown with compilation error as the maximum long
> int i can is 2^32 only..
>
> so if i have such a long 0.99999999999999999998888888888888777777777
>
> iit is impossible to have such a big int converted value..
>
> so pls suggest for using fixed point binary,,,

A number of that precision is probably impractical for a real hardware
implementation. You must truncate it. This leads to a tradeoff between
how many bits you use in the hardware vs. how closely the real hardware
filter matches your desired response.

Google for filter+coefficient+truncation

-Jeff

Article: 133141
In article
<a068485f-0e9d-4a93-beb6-cd825ada14e5@j22g2000hsf.googlegroups.com>,
rickman <gnuarm@gmail.com> writes:
>Anyone know where this configuration is stored? How do I turn it off
>permanently?

Permanently? :)

You are a hardware geek. Right? The classical solution is wire cutters at
the speaker terminals.
--
These are my opinions, not necessarily my employer's.
I hate spam.

Article: 133142
SynopsysFPGAexpress wrote:
> As commodity PC hardware and productivity applications decline in price,
> EDA tools are as (relatively) expensive as ever, necessitating yet
> another discussion of "Which simulator is right for me?"

This reads like a thinly veiled marketing survey. If you actually are a
designer, get a proto design ready, order evals of each simulator, then
try them and see for yourself.

> Kidding aside, my real requirements:

Which part were you kidding about?

> 1) I foresee mixed-HDL as a *requirement* for any serious consulting job.
> (Xilinx and Altera are pretty good about providing 'HDL-neutral IP', but
> third parties aren't.)

The device vendors are only HDL-neutral because they are selling device
netlists, not source code. Not a plus in my book.

-- Mike Treseler

Article: 133143
On Wed, 18 Jun 2008 18:01:59 -0700, "SynopsysFPGAexpress" <fpgas@sss.com>
wrote:
>As commodity PC hardware and productivity applications decline in price,
>EDA tools are as (relatively) expensive as ever, necessitating yet another
>discussion of "Which simulator is right for me?"
>
>The contenders are ...

Don't forget http://fintronic.com/home.html and
http://simucad.com/products/verilogSimulation/silos-x.html

I personally like Finsim (from Fintronic) a lot. It's a compiled simulator
and it's quite fast.

Article: 133144
Thank you for your answer, Barry. You were right about the data pins.
Actually I hadn't read the UCF file to the end. I had only seen where the
control bits are assigned, and I didn't even assume that the data pins
could be assigned later in the file; I assumed they would be in the same
place in the file. But OK. It took me a little time to see this, but I
have found some new things. Thank you for your answer.

Have you maybe ever had a problem when you define your MIG core to use it
in an ISE project? I have some silly problems where ISE doesn't want to
recognize this core, although I've included the xco file in it (I
generated the core from the ISE environment, so all files should be
included automatically). I just want to test this core. I have just made
one vhd file, copying parts from the vho file that was generated, but no:
ISE says that it can't find my core. I have to rebuild it, and after that
many strange things happen. Worst of all, the error that I am getting is
not always the same. Sometimes ISE asks me to include all the vhd files
generated by the MIG core generator, and when I do this I get 200
warnings saying that some parts can't be instantiated. Strange. I posted
one more topic here about this problem to explain it. If you have had
similar problems, please tell me more about them.

Thank you very much for all your help,
Zoran

On Jun 18, 5:16 pm, Barry <barry...@gmail.com> wrote:
> On Jun 17, 7:41 am, Zorjak <Zor...@gmail.com> wrote:
>
> > Hi
>
> > I need a little help with the ISE MIG tool. I have a couple of basic
> > questions and if someone can answer them I would be very grateful.
> > The first thing I wanted to ask is: does MIG give me the opportunity
> > to define the data bits also? I mean, in the UCF file that is
> > generated at the end I can see only control signals. That is ok, yes?
> > Then in my design I can define constraints for the DATA ports as I
> > want. Am I right about this?
>
> > I also wanted to ask one more question.
> > I can reserve pins that I don't want to be used by MIG, but how can I
> > be sure that the pins it has chosen are the same every time I generate
> > this core? For example, I want all NETs to be from BANK 1. I put
> > Bank0, Bank2 and Bank3 as reserved. But how can I be sure that all
> > nets are chosen in the same way every time? Can I reserve all pins
> > besides the ones I want to be used by MIG (reserve also some bits from
> > bank 1)? Is that OK? I still have problems if I am not sure that the
> > pins come out the same every time (if I connect the DDR and FPGA on
> > the PCB, I can't change it from time to time).
>
> > And the last: I have a strange problem that I didn't understand from
> > the beginning. I reserved all banks except bank 1. When I want to
> > choose pins and I check the check box indicating data pins in bank 1,
> > I get this message:
>
> > "MIG doesn't support data signals that are from multiple sides; limit
> > your selection for Data signals to only one side". This confuses me
> > totally. Should I make sure all pins that are going to be used by MIG
> > are on one side? Am I right?
>
> > I am grateful for any kind of help. Thanks to everybody.
> > Zoran
>
> I have used MIG 2.0, and it definitely assigned LOC constraints to all
> pins, including data bits. It doesn't change pin assignments when you
> re-generate the core, unless you change something in the wizard, like
> data width or bank selections. I remember some sort of graphical deal
> in the wizard for bank selection, and of course you want to stay on
> one side of the die, to avoid long route delays. There was no need to
> reserve pins. I hope you are reading the User's Guide.
>
> Barry

Article: 133145
Zorjak wrote:
> Hi,
>
> I have one question about the Memory Interface Generator. I haven't been
> able to solve this problem for a while, so I decided to ask for help.
> The thing is this: I generate my MIG core from the project. This process
> passes OK. But when I want to implement my design I have problems.
>
> The first thing that bugs me is this. When I start implementation of the
> design, I get messages that my core is not recognized (that my xco file
> is not included, or the vhd, or something), but this is impossible
> because I start the core generator from my project and it is loaded
> automatically. Anyway, because of this ISE asks me to regenerate my
> core, so I have to do this to get to the next step. When I finish this,
> the implementation process continues and after a while I get this error:
>
> ":NgdBuild:604 - logical block 'u_moj_mig' with type 'moj_mig' could not
> be resolved. A pin name misspelling can cause this, a missing edif or
> ngc file, or the misspelling of a type name. Symbol 'moj_mig' is not
> supported in target 'spartan3a'."

I ran into this kind of trouble when XST tried to compile the simulation
file of a core. Have a look at the compile log to see whether there are
errors while compiling your core. If you have an ngc file, compiling is
not necessary.

Tom

--
Thomas Reinemann, Dr.-Ing.
abaxor engineering GmbH
www.abaxor.de

Article: 133146
Jeff Cunningham wrote:
>
> Speaking of FPGA alternatives, this recently caught my eye. Don't know
> much about it, but it sure looks cool:
>
> http://www.tilera.com/products/processors.php
>
> -Jeff

Jeff,

Where have I seen that before?
Ah yes, http://en.wikipedia.org/wiki/Transputer

Syms.

Article: 133147
> The first two numbers just have the decimal point in a different place,
> the last one you have not specified the decimal point location at all.

The MATLAB FDA tool is generating the binary equivalent without a decimal
point, similar to the fixed-point tool. I don't know which of the MATLAB
outputs is correct and which I have to use. Quantization with truncation
is also present in the FDA tool.

> a = fi( 0.213412, 1, 16, 15 )  % signed fixed point number, 16 bits, 15 fractional bits
> a.bin                          % get binary value
> a.int                          % get integer value

I am wondering if you can guide me on how to use fprintf for the above,
as the following gives an error:

fprintf(outfile1, '%f\n', a);

> This leads to a tradeoff between how many bits you use in the hardware
> vs. how closely the real hardware filter matches your desired response.

My input bit width is 16, the coefficient bit width is 16, and the output
width is 40.

regards,
faza

On Jun 19, 9:20 am, Jeff Cunningham <j...@sover.net> wrote:
> faza wrote:
> > If i use fixed to integer conversion using left shift operation...I am
> > not sure i may be thrown with compilation error as the maximum long
> > int i can is 2^32 only..
>
> > so if i have such a long 0.99999999999999999998888888888888777777777
>
> > iit is impossible to have such a big int converted value..
>
> > so pls suggest for using fixed point binary,,,
>
> A number of that precision is probably impractical for a real hardware
> implementation. You must truncate it. This leads to a tradeoff between
> how many bits you use in the hardware vs. how closely the real hardware
> filter matches your desired response.
>
> Google for filter+coefficient+truncation
>
> -Jeff

Article: 133148
Hi all,

When I do Generate Netlist in Xilinx 9.2i, I get the following error -

-------------------------
./synthesis.sh: line 2: $'\r': command not found
./synthesis.sh: line 4: $'\r': command not found
./synthesis.sh: line 6: $'\r': command not found
./synthesis.sh: line 8: $'\r': command not found
ERROR:Xst:1688 - Unknown option for -intstyle switch.
./synthesis.sh: line 17: syntax error: unexpected end of file
make: *** [implementation/system.ngc] Error 2
Done!
---------------------------

Could anyone please help me out in this matter.

Article: 133149
On Thu, 19 Jun 2008 10:37:14 +0100, "Symon" wrote:

>Where have I seen that before?
>Ah yes, http://en.wikipedia.org/wiki/Transputer

And now, for bonus points, put an [X] against any of the following
reasons if you think it helps to explain why the Transputer was a market
flop...

[ ] it was at least two decades ahead of its time
[ ] it was about one decade ahead of the technology needed to make it
    powerful enough to be competitive
[ ] it was based on robust, sound academic theory
[ ] it had a clean, effective, readable programming language that was
    not C
[ ] it was a British design
[ ] the software community was even more clueless about how to make use
    of multiple scalable compute resources in the late 1970s than it is
    today
[ ] when presented with a model that permits parallel processing to be
    spectacularly efficient, the software community retreats into its
    standard cosy set of assumptions that serial execution is sure to be
    faster and more efficient, and therefore Concurrent Is Bad, mainly
    because it isn't C

For maximum credit, write a 10,000 word dissertation explaining why the
social dynamics of the software community will ensure the early death of
anything that looks like a general-purpose, tightly-coupled network of
compute nodes.

Not that I'm bitter, or anything like that :-)
--
Jonathan Bromley, Consultant

DOULOS - Developing Design Know-how
VHDL * Verilog * SystemC * e * Perl * Tcl/Tk * Project Services

Doulos Ltd., 22 Market Place, Ringwood, BH24 1AW, UK
jonathan.bromley@MYCOMPANY.com
http://www.MYCOMPANY.com

The contents of this message may contain personal views which are not
the views of Doulos Ltd., unless specifically stated.