www.vmetro.com has a board with a Virtex-5 on it!!!

Article: 114526
John wrote:
> In VHDL, is it possible to run a process on both edges of a clock?
> I tried running one if-statement on the rising edge, and one on
> the negative, but I get a timing error.

The following VHDL code will work in simulation. It will also work in
synthesis with XST in those devices that have dual-edge flip-flops in
the architecture (e.g., CoolRunner). I have not tried it with
Synplicity.

  process(vclk)
  begin
    if (vclk'event) then
      ...
    end if;
  end process;

This will not work with any of the Xilinx FPGAs, as there are no
internal dual-edge flip-flops. It will probably not work for the DDR
output registers either, as they are not really dual-edge, but rather
two flip-flops and a mux.

Regards,
Erik.

---
Erik Widding
President
Birger Engineering, Inc.

 (mail) 100 Boylston St #1070; Boston, MA 02116
(voice) 617.695.9233
  (fax) 617.695.9234
  (web) http://www.birger.com
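For reference, a minimal sketch (in Verilog, with illustrative names)
of the "two flip-flops and a mux" technique mentioned above, which is
the usual way to use both clock edges on parts without dual-edge
flip-flops:

// Sample the data on both edges with two ordinary flip-flops, then
// select whichever sample is most recent with a mux on the clock.
module both_edges (
    input  wire clk,
    input  wire d,
    output wire q
);
    reg q_rise;
    reg q_fall;

    always @(posedge clk) q_rise <= d;  // rising-edge sample
    always @(negedge clk) q_fall <= d;  // falling-edge sample

    // While clk is high the rising-edge sample is newer; while clk
    // is low the falling-edge sample is.
    assign q = clk ? q_rise : q_fall;
endmodule

Article: 114527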
Hi,

I am trying to include an Ethernet app in my design, but it always
fails. So I tried the code example generated by BSB when you add an
Ethernet_MAC. I plugged a crossover cable between the PC and the
board, but the TestApp_Example fails with "Ethernet: Test Failed!!!".

What kind of Ethernet_MAC configuration should I try? Is the direct
connection between PC and board OK, or do I have to connect the board
to the network?

Thanks

Article: 114528
Hi all!

I am experiencing a very strange and rather frustrating problem while
trying to run the same back-annotated simulation in two different
versions of ModelSim. In both cases I am using exactly the same files
for everything, and also the same compilation and simulation commands.
The old version (ModelSim 5.8b) simulates fine and gives the expected
results, while a newer release (ModelSim 6.2e) gives erroneous
results. The following capture explains the situation (copy and paste
the following link into your browser):

http://images.elektroda.net/5_1169063659.jpg

In the figure at the link above you can see the two simulations:

**** Upper part of image: ModelSim 5.8b - Results: OK, as expected.
Notice the pulse on signal "RNet7947" beginning @ 10263ns and falling
@ 10353ns. This is an update signal that is correctly fetched by the
clock "RNet7665" @ 10353ns. The good, expected behavior is that the
data in signal "WIR_shift" is updated to signal "PER/WIR", as shown.

**** Lower part of image: ModelSim 6.2e - Results: WRONG, unexpected.
Notice the pulse on signal "RNet7947" this time begins @ 10264ns and
falls @ 10354ns, too early to be fetched by the clock "RNet7665" (look
carefully). Thus, the signal "WIR_shift" is NOT updated to signal
"PER/WIR", which remains at "000", as shown.

I would be most grateful if anybody could give me a clue to what is
going wrong here. I REALLY need to get version 6.2e to run correctly
with my design.

The simulation command issued in both versions is as follows:

vsim -sdfmin /ram_16kx16_tap_top_tb_p1500/dfm_0=DFM_TC_Best.pt.sdf -sdfnoerror -sdfnowarn -t ns +mindelays work.ram_16kx16_tap_top_tb_p1500

In both cases I am using the same SDF file, which has the following
header:

(DELAYFILE
  (SDFVERSION "OVI 3.0")
  (DESIGN "DFM_TC")
  (DATE "Fri Nov 24 16:01:11 2006")
  (VENDOR "COREM10LPHVT CORXM10LPHVT IOLIB_65_M10_CUP_FT_TEST_1V2 PRM10 splpll_16Kx8")
  (PROGRAM "Synopsys PrimeTime")
  (VERSION "V-2003.12-SP1-3")
  (DIVIDER /)
  // OPERATING CONDITION "Best::Best"
  (VOLTAGE 1.32::1.32)
  (PROCESS "0.8000::0.8000")
  (TEMPERATURE -40.00::-40.00)
  (TIMESCALE 1ns)
  ...

Thanks in advance!!!

Regards,
JL.

Article: 114529
Brad,

Thanks for the response. I will try to outline what I am doing, with
some specific concerns I have.

Basically, I am working on behalf of a company that develops
compute-intensive algorithms for biological applications using a
software programming language like C/C++. That company is trying to
get a performance boost by mapping the same algorithms onto a hardware
platform like an FPGA or ASIC or anything in between. The main idea is
to see if we can get a minimum of 10X-20X speedup versus a software
implementation. Here are some of my basic concerns:

(1) Many of the end customers now use laptops as their only computers,
sometimes with a docking station and/or external keyboard and monitor
when they're at their desks. How would an end customer deploy such a
hardware solution? Would it come as a plug-in card for a card slot
(unusable on a laptop), or in some format that would enable using it
on a laptop? How about a standalone box?

(2) Assuming one is able to connect the hardware implementation to a
laptop, how would the end customer feed in the input files? Note that
in some apps the input file is ASCII text, while in other apps it may
be binary files in a proprietary format. How would the output of the
simulation be collected? Would it be redirected to an ASCII text file?

(3) What happens when the algorithm needs to be updated? Is there a
way to "update" the hardware (such as an FPGA), or does it mean the
hardware becomes obsolete and must be replaced (if so, at what kind of
cost to an end user)?

(4) Hardware/software partitioning: can various "core" functions be
programmed into the hardware while still allowing other functions to
be in software, in order to provide flexibility in the mathematical
models? If so, is the potential speed advantage still high?

(5) Can you shed some light on how one can translate existing code
from C/C++ to a hardware platform? What tools would be used, how would
the design be verified, and how long does it take to get a working
demo version?

(6) What about if the existing code is in a proprietary language other
than C/C++? Is it possible to translate into a hardware mapping in
that case?

(7) Finally, to get a demo/working prototype, what do you recommend:
FPGA, ASIC, or something in between, and why? If you had to take a
stab at guessing the cost of developing such a prototype, what would
it be? Assume about 100,000 lines of existing code in C/C++.

Thanks
Anand

Article: 114530
module clock_div3
(
  clock_in,
  clock_out
);

input  clock_in;
output clock_out;

reg clock_out;
reg [2:1] d_pos;
reg [2:1] d_neg;

always @ (posedge clock_in)
  case (d_pos)
    2'b00:   d_pos[2:1] <= 2'b01;
    2'b01:   d_pos[2:1] <= 2'b11;
    default: d_pos[2:1] <= 2'b00;
  endcase

always @ (negedge clock_in)
  case (d_neg)
    2'b00:   if (d_pos[1]) d_neg[2:1] <= 2'b01;
    2'b01:   d_neg[2:1] <= 2'b10;
    default: d_neg[2:1] <= 2'b00;
  endcase

always @ (posedge clock_in or posedge (d_neg[1] & !clock_in))
  if (d_neg[1] & !clock_in)
    clock_out <= 1'b0;
  else
    if (!d_pos[1]) clock_out <= 1'b1;

endmodule
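A small simulation harness for the module above (a sketch; the clock
period and run time are arbitrary choices, and the divide-by-three
behavior can be checked in the resulting waveform):

module clock_div3_tb;
    reg  clock_in = 1'b0;
    wire clock_out;

    clock_div3 dut (
        .clock_in  (clock_in),
        .clock_out (clock_out)
    );

    always #5 clock_in = ~clock_in;  // free-running input clock

    initial begin
        $dumpfile("clock_div3.vcd");
        $dumpvars(0, clock_div3_tb);
        #300 $finish;                // a few dozen input cycles
    end
endmodule

Article: 114531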
Charles, NG wrote:
> * slaves only need to monitor for (HTRANS == NSEQ) && (HSEL == '1') &&
> (HREADY == '1') to know whether they are selected.

It should also check "HTRANS == SEQ" :-)

Joseph

Article: 114532
I see all these references to my old article in XCell magazine, and I
enjoy the positive comments. But: in almost all cases, there is no
need for a 50% duty cycle. The natural 33/66% duty cycle of a simple
divide-by-three circuit is acceptable, especially at such low
frequencies as 70 MHz.

Here is one of the simplest implementations: two flip-flops QA and QB,
where QA feeds the D input of QB (shift register), and the NOR of QA
and QB feeds the D input of QA. This circuit also recovers from the
illegal state of both QA and QB being High.

Peter Alfke

On Jan 18, 9:26 am, "visiblepulse" <t...@visiblepulse.com> wrote:
> module clock_div3
> (
>   clock_in,
>   clock_out
> );
>
> input clock_in;
> output clock_out;
>
> reg clock_out;
> reg [2:1] d_pos;
> reg [2:1] d_neg;
>
> always @ (posedge clock_in)
>   case (d_pos)
>     2'b00:   d_pos[2:1] <= 2'b01;
>     2'b01:   d_pos[2:1] <= 2'b11;
>     default: d_pos[2:1] <= 2'b00;
>   endcase
>
> always @ (negedge clock_in)
>   case (d_neg)
>     2'b00:   if (d_pos[1]) d_neg[2:1] <= 2'b01;
>     2'b01:   d_neg[2:1] <= 2'b10;
>     default: d_neg[2:1] <= 2'b00;
>   endcase
>
> always @ (posedge clock_in or posedge (d_neg[1] & !clock_in))
>   if (d_neg[1] & !clock_in)
>     clock_out <= 1'b0;
>   else
>     if (!d_pos[1]) clock_out <= 1'b1;
>
> endmodule
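A minimal Verilog sketch of the circuit Peter describes (module and
signal names are illustrative; the initial values assume the
flip-flops power up Low, as Xilinx flip-flops do by default):

module div3 (
    input  wire clk,
    output wire clk_div3   // high one input period out of three
);
    reg qa = 1'b0;
    reg qb = 1'b0;

    always @(posedge clk) begin
        qa <= ~(qa | qb);  // NOR of QA and QB feeds the D input of QA
        qb <= qa;          // QA feeds the D input of QB (shift register)
    end

    assign clk_div3 = qa;  // natural 33/66% duty cycle
endmodule

The state sequence (QA,QB) is 00 -> 10 -> 01 -> 00, so QA is high one
cycle in three, and the illegal 11 state falls back into the loop
after a single clock.

Article: 114533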
To fix the error you can also patch the xsim.h file under the Xilinx
installation directory and use your brand new gcc 4.1 that ships with
Suse 10.1.

Bye,
Antonio.

Article: 114534
Hi Surya,

Are you aware that the Xilinx Virtex-4 and Virtex-5 FPGAs have an
embedded EMAC block? Here is a great article to serve as a starting
point:
http://www.xilinx.com/publications/xcellonline/xcell_59/xc_pdf/p054-056_59-McKay.pdf

-David

"Surya" <aswingopalan@gmail.com> wrote in message
news:1169033036.685087.68370@51g2000cwl.googlegroups.com...
>
> Is it possible to interface the Ethernet directly to the FPGA instead
> of doing it through the PowerPC processor or any other processor?
> If yes, kindly throw some light on the same.
>
> thanks in advance.
>

Article: 114535
Say in an if-statement I assign a value with "temp <= FPGA_input", and
this if-statement is only run once. Will "temp" always be mapped to
the FPGA input, or do I have to assign temp from the input every time
I expect the input to change?

Also, if I declare a pin as "inout", how do I switch between the two
directions? Sometimes an external device will write to my FPGA; other
times it will read.
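For the second question, here is a minimal sketch of the usual
tri-state pattern (all names here are illustrative): the pin is driven
only while an enable is asserted, and is released to high-impedance
the rest of the time so the external device can drive it.

module bidir_example (
    inout  wire pin,       // the shared bidirectional pin
    input  wire drive_en,  // 1: FPGA drives the pin; 0: FPGA listens
    input  wire to_pin,    // value to drive out
    output wire from_pin   // value seen on the pin
);
    assign pin      = drive_en ? to_pin : 1'bz;  // release when reading
    assign from_pin = pin;                       // always readable
endmodule

Article: 114536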
Neil Steiner wrote:
> I'm trying to understand how the REV pin works on Virtex-2 flops. The
> Virtex-2 Pro/Pro-X data sheet (ds083.pdf, Module 2, Functional
> Description, Logic Resources -- page 36 in v4.5) states the
> following:
>
> "When SR is used, a second input (REV) forces the storage element
> into the opposite state. The reset condition predominates over the
> set condition."
>
> I'm not sure that I understand what that's trying to tell me. Is the
> following correct, or does "the reset condition predominates" mean
> something else?
>
> Synchronous Mode:
> 1. Asserting SR forces the state to the selected SRHIGH/SRLOW on the
> appropriate clock edge.
> 2. Asserting SR and REV forces the state to the inverse of
> SRHIGH/SRLOW on the appropriate clock edge.
>
> Asynchronous Mode:
> 3. Asserting SR forces the state to SRHIGH/SRLOW immediately.
> 4. Asserting SR and REV forces the state to the inverse of
> SRHIGH/SRLOW immediately.
>
> For 4, is there some required timing relationship between SR and REV?
> Since REV apparently does nothing on its own, does it just need to
> remain asserted until SR gets deasserted?

The SR pin takes precedence over the REV pin when asserted, so
statements 2 and 4 above are incorrect. They should start with
"Deasserting SR and asserting REV". Both pins have a setup time to the
CLK pin that must be met in synchronous mode to operate correctly.

Ed McGettigan
--
Xilinx Inc.

Article: 114537
Hi Jeff,

Thanks for your message, but I'm trying to get debugging to
successfully *start* reliably, not finish. Finishing is no problem. I
would like to be able to select "Run" and have execution successfully
stop on main() every time, instead of giving me a blank screen and
"Stopped at fffffffb" at the bottom. It should say "Stopped at 26" or
something like that at the bottom, which it finally does after the
multiple tries I described previously.

Any ideas? All I do, and all you should have to do, is click on the
little "bug in a box" to launch XMD, and then the bug not in a box to
launch GDB, which is the way it works when it's working. I build my
bit file with the bootloop and load it with iMPACT, then use those bug
icons in EDK, which eventually works. What are you supposed to do to
make it work reliably? Eventually I do get it working, but I still
don't know what it is I'm doing that finally makes it work. Any ideas?

Please see my previous messages for more description of my struggles:
http://groups.google.com/group/comp.arch.fpga/browse_thread/thread/5ed87d5b3adc5ead/8da849bf884ba4b7?lnk=st&q=&rnum=1&

Regards,
-Jim

Jeff Cunningham wrote:
> Jhlw wrote:
>
> > XMD, then GDB, I finally have success and execution
> > stops at the first breakpoint at main() and I'm ready to
> > debug. This shouldn't be such a pain. If it's something
>
> If I'm not mistaken, if you hit the "step out of" button at this
> point it will run to completion.
> -Jeff

Article: 114538
I just now tried the following in XMD, after closing GDB when I got
this problem:

  disconnect 0
  connect ppc hw

and then tried GDB again, and this time it started properly, with my
code in the window, stopped at the first brace of main(), and "Program
stopped at line 26" showing in the bottom frame of the GDB window. So
maybe that's it: after connecting with XMD, do the above disconnect
and reconnect.

See previous messages in this thread, at
http://groups.google.com/group/comp.arch.fpga/browse_thread/thread/5ed87d5b3adc5ead/8da849bf884ba4b7?lnk=st&q=&rnum=1&

-Jim

Jhlw wrote:
> Hi Jeff,
>
> Thanks for your message, but I'm trying to get debugging
> to successfully *start* reliably, not finish. Finishing is no
> problem. [...]

Article: 114539
Using an embedded processor is the natural way to use these
interfaces; why do you want to omit it? Also, the embedded hard EMACs
are really meant for gigabit Ethernet. Sure, you can use them for
10/100, but why?

Surya wrote:
> Is it possible to interface the Ethernet directly to the FPGA instead
> of doing it through the PowerPC processor or any other processor?
> If yes, kindly throw some light on the same.
>
> thanks in advance.

Article: 114540
OK,

I have been asked, many times, how an ASIC "gate" compares to an FPGA
"gate."

Now don't just groan and hit ignore; bear with me (if you have an
opinion or feel like a comment).

An ASIC "gate" (in my feeble mind) is 4 transistors, arranged as a NOR
or a NAND. From that basic element, you can make everything else, or
at least represent the complexity of everything else.

Now take an FPGA. Look at the LUT. Take the 4-LUT in Virtex-4: it is
16 memory cells. Is that 32 "gates"? What happens when you use it as a
16-bit LUT RAM, or an SRL16? Isn't that closer to 64 "gates"?

If I use the LUT as a 2-input NAND gate, then it is one "gate", and I
have to use some LUTs as small gates, so I obviously can't count all
my LUTs as 64 gates!

Take the DCM. How many "gates" would it take to do that?

The DSP48. How many "gates"?

So, I have always decided to stay away from any serious engineering
evaluation of "gates" vs. "gates" as being a no-win discussion. But is
it? Is there no real comparison that can be made?

Obviously, people use FPGAs. And they use ASICs. Sometimes they do
one, and then replace it (oh my!) with the other. Is a 2 million
"gate" ASIC equal to an XC4VLX25? Or an XC4VLX200? Or not even the
largest FPGA we can make (XC5VLX330)?

I have seen customers "fit" their 2 million "gate" ASIC into an LX25,
so from just that one customer's point of view, 2 million "gates"
could be realized by 24,000 4-LUTs, 24,000 DFFs, 1.3 Mb of BRAM, 8
DCMs, and 48 DSP48 blocks. That is about 6 million configuration bits.

Is the answer a "range", where 2 million gates (depending on who is
doing the conversion) can sometimes go into FPGAs that range in size
by 5:1? 10:1?

Austin

Article: 114541
#1 - Why is one signal rising at 10264ns, and the other at 10263ns?
Trace them back, see where they differ; that'll be your answer.

#2 - Who wrote the relevant simulation models? Is there a race in
them?

#3 - Why are you set at 1ns resolution? Is this a *very* slow chip? I
bet your sim models are set to 1ps; I'd suspect a rounding problem
somewhere.

#4 - What do you get if you enable sdf warnings & errors?

#5 - Is anyone in comp.arch.embedded really going to know the answer
to this one?!

/PJ

Article: 114542
Try:

Xilinx:
http://www.dinigroup.com/index.php?product=DN9000k10PCI (Virtex-5)
http://www.dinigroup.com/DN8000k10pci.php (Virtex-4)
http://www.dinigroup.com/DN8000k10psx.php (Virtex-4 -- can have SX55)

Altera:
http://www.dinigroup.com/DN7000k10pci.php

I'm not certain what you mean by '15 million gates'. The largest FPGA
that is shipping today is the V5 LX330, and it is, at most, 2 million
ASIC gates. And the LX330 isn't really 'shipping'.

Article: 114543
What I've found is that in Verilog, whatever gets passed in as a
module parameter has to be a constant. I wanted to use genvars, but
that is not allowed. The workaround was to write another Ruby script
that generates the Verilog code itself. Instead of using the generate
statement, I pre-generate each module with Ruby. This creates a large
Verilog file, but the parameters are now constant and the synthesis
tool is happy.

On Jan 8, 2:02 pm, "KJ" <kkjenni...@sbcglobal.net> wrote:
> "matteo" <matt.fisch...@gmail.com> wrote in message
> news:1168023629.092115.51520@51g2000cwl.googlegroups.com...
>
> > I have some Verilog code that generates an array of blockRAMs to any
> > dimensions that I want. For example, if I set LENGTH=3 and HEIGHT=4
> > then 12 blockRAMs get synthesized. I have a separate script written
> > in Ruby that creates the initial blockRAM contents according to the
> > position of each blockRAM in the array. What I'd like to do is be
> > able to pass in the blockRAM init parameters dynamically as each
> > blockRAM is generated.
>
> > I'm looking for suggestions on how to do this. Can XST execute and
> > interact with scripts from the command line?
>
> Is there some reason why the code that generates the initial contents
> is written in Ruby? More to the point, is there some reason that this
> Ruby program can not be converted to VHDL or Verilog, bundled up as a
> function that will return the memory contents?
>
> Kevin Jennings
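A minimal sketch of the constraint being described (module, names, and
values are illustrative): a parameter override must be an
elaboration-time constant, which is why pre-generating one instance
per array position, each with its init value baked in as a literal,
keeps the synthesis tool happy.

module init_ram #(
    parameter [15:0] INIT_WORD = 16'h0000  // must be a constant
) (
    input  wire        clk,
    input  wire        we,
    input  wire [3:0]  addr,
    input  wire [15:0] din,
    output reg  [15:0] dout
);
    reg [15:0] mem [0:15];

    integer i;
    initial
        for (i = 0; i < 16; i = i + 1)
            mem[i] = INIT_WORD;  // initial contents from the parameter

    always @(posedge clk) begin
        if (we)
            mem[addr] <= din;
        dout <= mem[addr];
    end
endmodule

// What a generated file might contain, one constant override per
// array position:
//
//   init_ram #(.INIT_WORD(16'h00A5)) ram_0_0 ( /* ports */ );
//   init_ram #(.INIT_WORD(16'h00B6)) ram_0_1 ( /* ports */ );

Article: 114544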
"mike_la_jolla" <mdini@dinigroup.com> wrote in message news:1169165229.839110.250530@11g2000cwr.googlegroups.com... > > I'm not certain what you mean by '15 million gates' The largest FPGA > that is shipping today is the V5 LX330, and it is, at most, 2 million > ASIC gates. And the LX330 isn't really 'shipping'. > Mike, You'll never get a job in FPGA marketing. Everyone who has knows that every single bit of dual port memory is approximately 10.23756 gates. And furthermore, as soon as you put anything in inverted commas, it becomes 'true'. By which I mean 'false'. 'HTH', SymsArticle: 114545
As a side note on the gate count, you might want to see what Austin
Lesea posted today, "gate"=??? (if you have not already).

"mike_la_jolla" <mdini@dinigroup.com> wrote in message
news:1169165229.839110.250530@11g2000cwr.googlegroups.com...
> Try:
>
> Xilinx:
> http://www.dinigroup.com/index.php?product=DN9000k10PCI (Virtex-5)
> http://www.dinigroup.com/DN8000k10pci.php (Virtex-4)
> http://www.dinigroup.com/DN8000k10psx.php (Virtex-4 -- can have SX55)
>
> Altera:
> http://www.dinigroup.com/DN7000k10pci.php
>
> I'm not certain what you mean by '15 million gates'. The largest FPGA
> that is shipping today is the V5 LX330, and it is, at most, 2 million
> ASIC gates. And the LX330 isn't really 'shipping'.

Article: 114546
Austin,

With the plethora of customer designs Xilinx has archived for software
benchmarking, could the gate-to-LUT utilization be determined "on
average"? There would be different classes of designs, in the same
sense that there are different versions of the Virtex-5 family.
Logic-only designs would have a very different equivalent gate count
than a heavy signal processing solution, which would also be different
than light signal processing. If the LUTs were broken out, the
gate-to-LUT ratio should cover a significantly more constrained range
than the 5:1 or 10:1 range that you (rightfully) suggest. If the
designs can typically fit well to only 90%, the gate-to-LUT ratio
should be reduced, or a note made as to the number of LUTs usable in a
"typical" design, knowing that the values might move one way or
another.

If someone wants Xilinx to go the route of specifying gates, I'd
suggest including the gate count for the dedicated blocks such as the
DSP48, PPC405, or DCMs *as long as* the functionality is roughly
equivalent to what would be implemented in an ASIC. If the ASIC has an
"average" DSP48 equivalent that only uses 30% of the DSP48
functionality, a derating would make sense here. The ASIC-to-FPGA
designer should be able to estimate the DSP48, BlockRAM, and DCM usage
and such, but would only have a gate count to estimate the logic
without going through real target compiles.

If I were trying to compare equivalent devices, I'd want to know based
on average designs and use those statistics to guide my decisions. I
haven't had to rely on gate counts yet in anything I've done, so I'm
not worried. But for those engineers who are interested (not marketing
or management folks, but engineers) it doesn't matter what can be fit
into a LUT; it matters how the LUTs and other functions actually get
used across the device. A mention of the best and worst gate-to-LUT
ratios in actual designs would be worthy of mention in addition to the
typical values.

Gates are only truly helpful (in my opinion, of course) for comparing
dissimilar architectures, such as ASICs to FPGAs, or even 6-input LUT
based FPGAs versus 4-input LUTs. What matters to me isn't the
capability, it's the use.

- John_H

"Austin Lesea" <austin@xilinx.com> wrote in message
news:eoot3a$bu15@cnn.xsj.xilinx.com...
> OK,
>
> I have been asked, many times, how an ASIC "gate" compares to a FPGA
> "gate."
>
> [...]
>
> Is the answer a "range" from 2 million gates (depending on who is
> doing the conversion) can sometimes go into FPGAs that range in size
> of 5:1? 10:1?
>
> Austin

Article: 114547
On Thu, 18 Jan 2007 14:41:46 -0800, Austin Lesea <austin@xilinx.com>
wrote:

> OK,
>
> I have been asked, many times, how an ASIC "gate" compares to a FPGA
> "gate."
>
> Now don't just groan, and hit ignore, bear with me (if you have an
> opinion or feel like a comment).
>
> An ASIC "gate" (in my feeble mind) is 4 transistors, arranged as a
> NOR, or a NAND. From that basic element, you can make everything
> else, or at least represent the complexity of everything else.

Yes, but an ASIC is the base plus 6 to 10 layers of metal.

> Now take a FPGA. Look at the LUT. Take the 4 LUT in Virtex 4. It is
> 16 memory cells. Is that 32 "gates"? What happens when you use it as
> a 16 bit LUTRAM, or SRL16? Isn't that closer to 64 "gates"?

ARM SRAM memory compilers can go as low as 16 by 4 bits, I believe, so
it's probably a wash here.

> If I use the LUT as a 2 input NAND gate, then it is one "gate" and I
> have to use some LUTs as small gates, so I obviously can't count all
> my LUTs as 64 gates!
>
> Take the DCM. How many "gates" would it take to do that?

Hmm, maybe a DCM IP block?

> The DSP48. How many "gates"?

This we can actually quantify. You can implement the exact
functionality in RTL, synthesize to ASIC, and get a number of gates;
then you can go to Arithmatica and get a CellMath-optimized version
for reduced power & area. Sounds familiar?

> I have seen customers "fit" their 2 million "gate" ASIC into a LX25,
> so from just that one customer's point of view, 2 million "gates"
> could be realized by 24,000 4-LUTS, 24,000 DFF's, 1.3 Mb of BRAM, 8
> DCMs, and 48 DSP48 blocks. That is about 6 million bits of
> configuration bits.

My father has a saying: "two elephants for a quarter is a good deal if
you have a quarter and if you need two elephants". The issue is how
many people need 48 DSP48 blocks. They are very nice if you need them,
but most designs don't.

> Is the answer a "range" from 2 million gates (depending on who is
> doing the conversion) can sometimes go into FPGAs that range in size
> of 5:1? 10:1?

Or 1:10? I have a block which synthesized to 1.5M instances and
slightly less than 4M gates on a 130 nm process. It has no memories,
no multipliers, and a heck of a weird connectivity. When I tried it
with the latest synthesizer from the best FPGA synthesis company I
know, it told me that I needed more than 9 4VLX200s, or 6 5VLX220s
(tried just for the heck of it), at 1 MHz. A vast majority of the LUT
resources were being used as routing resources.

Article: 114548
The hyperlinks in Xilinx' Timing Analyzer do not go to valid web
pages. I am using 8.2 with the latest SP. I've done a search on
individual delays and found some hits, but is there a definitive
destination on the web site where I can find all the definitions?

Article: 114549
Dear David,

Thank you for your link. It was good. I was aware of the EMACs present
in the Virtex-4 and Virtex-5. But I was wondering whether it would be
efficient to write the protocol handler (removal and addition of the
header and footer, in simple terms) directly in the FPGA, or in the
PPC. If it is in the PPC, I would not be able to use the Virtex-5 and
hence the EMAC. Kindly advise.

Aswin

davide wrote:
> Hi Surya,
>
> Are you aware that the Xilinx Virtex-4 and Virtex-5 FPGAs have an
> embedded EMAC block? Here is a great article to serve as a starting
> point:
> http://www.xilinx.com/publications/xcellonline/xcell_59/xc_pdf/p054-056_59-McKay.pdf
>
> -David
>
> "Surya" <aswingopalan@gmail.com> wrote in message
> news:1169033036.685087.68370@51g2000cwl.googlegroups.com...
> >
> > Is it possible to interface the Ethernet directly to the FPGA
> > instead of doing it through the PowerPC processor or any other
> > processor? If yes, kindly throw some light on the same.
> >
> > thanks in advance.