Thanks! If I send my project to your e-mail, can you look at it? I have been fighting this problem for 3 days... :) The project compiles successfully if I ignore this warning (i.e. if I disable "run design rules checker (DRC)"). But in EDK, when building the software, I get an error similar to this one: http://groups.google.ru/group/comp.arch.fpga/browse_thread/thread/6c22e816b4b65c7d/1576b93ac8fc645e?hl=ru&lnk=st&q=region+ilmb_cntlr_dlmb_cntlr+is+full#1576b93ac8fc645e
Send me a letter if you can look at my project (the project consists of MicroBlaze + GPIO_LEDS + PCI_Bridge).
Article: 128101
On Jan 15, 3:58 am, "HT-Lab" <han...@ht-lab.com> wrote:
> "FPGA" <FPGA.unkn...@gmail.com> wrote in message
> news:113ff161-5e90-43f8-9c06-7da9ca92db65@s12g2000prg.googlegroups.com...
> > Hello all,
> > I am trying to use some of the proposed functions by IEEE which are
> > still awaiting approval.
> >http://www.vhdl.org/vhdl-200x/vhdl-200x-ft/packages/float_pkg_c.vhdl
> > ..
> > I am using ModelsimPE Student Edition 6.3c
>
> I am not sure about the Student Edition but in the commercial PE version (I
> use 6.3d) these packages are already supplied as standard. Have a look in
> your <your_mti_installation_dir>\vhdl_src\floatfixlib\ directory.
>
> In my PE version they are compiled into floatfixlib:
>
> Library floatfixlib;
> use floatfixlib.float_pkg.all;
>
> signal A32,B32:float(8 downto -23);
> signal Y32 : float(8 downto -23);
>
> A32 <= "01000000110100000000000000000000" ; -- 6.5
> B32 <= to_float(3.23, B32); -- size using B32
> Y32 <= A32 + B32 ;
>
> These are great libraries and very easy to use. The only problem is that the
> waveform doesn't have an option to display them properly.
>
> Hans
> www.ht-lab.com
>
> > Thanks

I appreciate your help. I will try to use the floatfixlib library you mentioned and see how it goes.
Article: 128102
Hello!

I'm a student planning to work with an FPGA for my thesis. Totally inexperienced. Here I'm wondering about speed. I don't have a feel for the speed of multipliers, shifting, adding,...

My plan is to implement the circular Hough transform for images. There are many issues; here I'm wondering about one of them. I'll try to present the mathematical problem in short for those who are unfamiliar with the Hough transform, and ask you which solution you think is better, faster,... An FPGA with embedded multipliers will be provided for me. I'll propose 2 solutions. Also, you could just tell me how fast multipliers are compared to some other operations, like shifting, adding numbers... Just to get a feel for these numbers, anything :) I don't need 'ns', even to say this operation is 2x faster than that one... I'll be using 18-bit signals. So if you post just a comparison of these operations, it will be very helpful for me. I still don't know how to read 'data sheets' that well, so... What is the speed of this? 18-bit signed number (signal), multiply (with embedded multiplier), shifting, adding?

-= Problem =-
It will be simplified... I need to compute this:
(1)
x = R*gx/mag
y = R*gy/mag
All of these are signed integers fitting in 18 bits.
'R'...constant
gx,gy...inputs
'gx' & 'gy' represent the vector (gx,gy) in the Cartesian system.
'mag'...magnitude of vector (gx,gy)
I could rewrite (1) like this:
(2)
x = R*cos(fi)
y = R*sin(fi)
'fi'...angle between vector (gx,gy) and the x-axis

-= CORDIC =-
For CORDIC I like expression (2). Define this:
grad = vector (gx,gy)
sol = vector (R,0)
Using CORDIC I rotate vectors 'grad' and 'sol'. They are rotated through the angle 'fi'. 'grad' is rotated in one direction, 'sol' in the opposite. I don't need to compute 'fi'. The vectors are rotated until vector 'grad' becomes equal to (mag, 0), or just to say until the 2nd component of the vector becomes 0. After these rotations, vector 'sol' holds the solution. To be honest, I haven't tried implementing this one. I see that the precision of the rotation angle in the iterations can also affect this problem a lot. Anyway, if you can give me any kind of input on this problem, I'll be happy to hear it.

-= arithmetic expression =-
Don't know what to call this. Hope you understand. I use this expression:
x = (R*gx)/mag
y = (R*gy)/mag
'mag' is computed like this:
mag = abs(gx) + abs(gy)...it's precise enough for me
I'll be using embedded multipliers to compute 'R*gx', so no problem with that. The problem is division. For division I'll use the Goldschmidt algorithm. I think it should converge to my solution in 7 iterations, where in each iteration I have to multiply 2 numbers (actually 2x, but in parallel, so I look at it as one operation) and then subtract a number from one product (something like that). The important part of the pseudo-code:
parallel:
a1 = a1*m;
b = b*m;
following:
m = constant - b;

-= help =-
I'm inexperienced. I wonder whether it's worth implementing both of them and comparing them. It could be interesting, but it's not actually my problem. I could implement either one and my mentor would accept any solution, since he's also inexperienced with this. If you think one solution is by far better, can you explain why you think so? If you think they are both good and fast compared to each other, I'll probably try to implement both of them. You can also propose anything else, if you feel like it. There is an idea of distributed arithmetic, but I still haven't spent time learning about this solution.
Article: 128103
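For readers unfamiliar with the Goldschmidt scheme mentioned in the post above, a short worked iteration shows how it behaves (this assumes the divisor has already been normalized into [0.5, 1), which is the usual prerequisite; the numbers are illustrative only). Each step multiplies numerator and denominator by the same factor, so the quotient is preserved while the denominator is driven towards 1:

f_i = 2 - b_i ;   a_{i+1} = a_i * f_i ;   b_{i+1} = b_i * f_i

Example with a = 1.5, b = 0.75 (true quotient 2):

f0 = 1.25         a1 = 1.875        b1 = 0.9375
f1 = 1.0625       a2 = 1.9921875    b2 = 0.99609375
f2 = 1.00390625   a3 ~ 1.99997      b3 ~ 0.99998

The error in b roughly squares every iteration (0.25 -> 0.0625 -> 0.004 -> 0.000015), so a handful of multiply-and-subtract steps is enough for 18-bit precision once the divisor is normalized; the 7 iterations budgeted above leave some margin.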
"Frater" <mfraterBEZ@VELIKIHgmail.SLoVAcom> wrote in message news:MPG.21f6effed65a859c989681@news.amis.hr... > > You can also propose anything else, if you feel so. There is an idea of > distributed arithmetics, but still haven't spend time in learning about > this solution. Here's a link to somewhere for more reading on this type of DSP in FPGAs, and distributed arithmetic. http://andraka.com/dsp.htm HTH., Syms. From pcw@www.karpy.com Tue Jan 15 08:27:09 2008 Path: newsdbm02.news.prodigy.net!newsdst02.news.prodigy.net!prodigy.com!newscon02.news.prodigy.net!prodigy.net!border1.nntp.dca.giganews.com!nntp.giganews.com!local01.nntp.dca.giganews.com!nntp.megapath.net!news.megapath.net.POSTED!not-for-mail NNTP-Posting-Date: Tue, 15 Jan 2008 10:27:14 -0600 From: Peter Wallace <pcw@www.karpy.com> Subject: Re: Debbuging a RISC processor on an FPGA Date: Tue, 15 Jan 2008 08:27:09 -0800 User-Agent: Pan/0.14.2 (This is not a psychotic episode. It's a cleansing moment of clarity.) Message-Id: <pan.2008.01.15.16.27.08.684925@www.karpy.com> Newsgroups: comp.arch.fpga References: <fmfqhm$a86$1@aioe.org> MIME-Version: 1.0 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 8bit Lines: 25 X-Usenet-Provider: http://www.giganews.com NNTP-Posting-Host: 66.80.167.54 X-Trace: sv3-VRm75pLdLA+shpMwNsFB+MOjPbg9MVAQ4IlHMuUiK88TU06odZ+V4IPL6qxXvqPE0X6/eARkmbinJZL!KbKHBWEBFC5IHoLiuPBbtKmmajvXRNtXSuxJ7041xB+RjmVAWW8BIPY38Ayf6WuopKvuoPGUd4Ri!qyfrgf9/OPtV X-Complaints-To: abuse@megapath.net X-DMCA-Complaints-To: abuse@megapath.net X-Abuse-and-DMCA-Info: Please be sure to forward a copy of ALL headers X-Abuse-and-DMCA-Info: Otherwise we will be unable to process your complaint properly X-Postfilter: 1.3.37 Bytes: 2310 Xref: prodigy.net comp.arch.fpga:140342 X-Received-Date: Tue, 15 Jan 2008 11:27:15 EST (newsdbm02.news.prodigy.net) On Mon, 14 Jan 2008 14:11:02 +0000, pg4100 wrote: > Hi > > I have implemented a RISC architecure and RTL simulation in Modelsim > works fine. So the next step would be to run this architecture on an > FPGA and see if it still outputs the correct results. So far my only > idea is go use Chipscope to connect to the core and then try to read out > the register contents as soon as the computation of the program has > finished. Until now I just used Chipscope to debug simple design where I > just had debugg one output value and not a set of registers. > > Are their maybe other approaches that I could use to see if the > sythesized core does the same as the simulated one? > > Would be thankful for other ideas > > Thanks! One thing I have done is make a special debug version of the processor, and bring out the interesting bits (pipelines,registers,PC, status) so they are readable by the host (you mentioned you had a PCI card...). Then I have a single clock register that clocks the CPU when written to. This hardware is accessed by a simple disassembler/singlestep program running on the host...Article: 128104
> One thing I have done is make a special debug version of the processor,
> and bring out the interesting bits (pipelines, registers, PC, status) so
> they are readable by the host (you mentioned you had a PCI card...). Then
> I have a single clock register that clocks the CPU when written to. This
> hardware is accessed by a simple disassembler/singlestep program running
> on the host...

That sounds pretty cool. So you make all the interesting parts of the processor visible to the host over this PCI interface, and from the host you can give it the command to proceed one step in the program. Would be perfect; I just have to figure out how this PCI interface works! Thanks for this good idea!
Article: 128105
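The "single clock register" idea above is small enough to sketch. The fragment below assumes a hypothetical host-writable strobe (host_wr) coming from whatever PCI/bus interface is available -- the bus itself is not shown -- and simply turns each host write into a one-cycle clock enable for the CPU core; the CPU's PC, registers and pipeline state would be wired out to host-readable registers in the same way.

library ieee;
use ieee.std_logic_1164.all;

-- Single-step control sketch: one host write = one enabled CPU clock.
entity single_step_ctrl is
  port (
    clk      : in  std_logic;
    rst      : in  std_logic;
    host_wr  : in  std_logic;   -- write strobe from the host bus (assumed)
    free_run : in  std_logic;   -- '1' = normal full-speed operation
    cpu_ce   : out std_logic    -- clock enable for the CPU core
  );
end entity;

architecture rtl of single_step_ctrl is
  signal wr_d       : std_logic := '0';
  signal step_pulse : std_logic := '0';
begin
  process (clk)
  begin
    if rising_edge(clk) then
      if rst = '1' then
        wr_d       <= '0';
        step_pulse <= '0';
      else
        wr_d       <= host_wr;
        -- one pulse per rising edge of the (possibly slow) write strobe
        step_pulse <= host_wr and not wr_d;
      end if;
    end if;
  end process;

  cpu_ce <= '1' when free_run = '1' else step_pulse;
end architecture;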
On Jan 15, 7:30 am, Frater <mfrater...@VELIKIHgmail.SLoVAcom> wrote:
<snip>
> I could rewrite (1) like this:
> (2)
> x = R*cos(fi)
> y = R*sin(fi)
<snip>
> Don't know what to call this. Hope you understand. I use this expression:
> x = (R*gx)/mag
> y = (R*gy)/mag
>
> 'mag' is computed like this:
> mag = abs(gx) + abs(gy)...it's precise enough for me
<snip>

If these are your two choices and the magnitude estimate is "precise enough" for you, perhaps a simple sin/cos lookup would be acceptable, too. Rather than calculate the sin/cos values at each step to 18 bits, perhaps the +/-0.08% of +/- full scale error in a 1024-element quarter-wave lookup would make you happy. By using a 1kx18 table in a dual-port memory, you can look up both the sine and cosine with one physical on-chip memory (assuming 18-kbit rather than 9-kbit or 36-kbit FPGA families). You need a little manipulation to get the quadrants and sin vs cos figured out, but that's all pretty simple and easily pipelined.

Embedded multipliers are pretty fast. One multiply takes the same time as 2-4 adds depending on family and placement. Assuming you're talking "shift" in terms of several bits of shift index, a shift of 0-31 would take roughly the same amount of time as the other two situations... roughly. Division is the toughest operation even compared to the square root which you don't seem to be interested in :-).

If you know the "fi" angle and don't need to calculate that value from gx and gy, the sin/cos approach using lookup values will probably give you the best result.

There was a discussion within the last few days in this newsgroup on sin/cos lookups that provided a couple suggestions - an Atmel link and mention of Xilinx tool capabilities.

I hope you have fun with the project!

- John_H
Article: 128106
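For what it's worth, the quadrant manipulation John mentions is only a few lines of HDL. The sketch below is one possible arrangement, not a drop-in core: a 12-bit phase (0..4095 = 0..2*pi), a 1024 x 18 first-quadrant sine table built by an init function, and the two table reads placed in one clocked process so they can map onto the two ports of a single block RAM. The half-sample alignment at the quadrant boundaries is glossed over, which is part of where the +/-0.08% error budget goes.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;
use ieee.math_real.all;   -- used only to compute the constant table

entity sincos_lut is
  port (
    clk    : in  std_logic;
    phase  : in  unsigned(11 downto 0);   -- 0..4095 maps to 0..2*pi
    sine   : out signed(18 downto 0);
    cosine : out signed(18 downto 0)
  );
end entity;

architecture rtl of sincos_lut is
  type rom_t is array (0 to 1023) of unsigned(17 downto 0);

  -- First-quadrant sine samples, scaled to 2^17 - 1.
  function init_rom return rom_t is
    variable r : rom_t;
  begin
    for i in 0 to 1023 loop
      r(i) := to_unsigned(integer(round(
                sin(MATH_PI_OVER_2 * real(i) / 1024.0) * 131071.0)), 18);
    end loop;
    return r;
  end function;

  constant rom : rom_t := init_rom;

  signal ph_sin, ph_cos : unsigned(11 downto 0);
  signal addr_s, addr_c : unsigned(9 downto 0);
  signal neg_s,  neg_c  : std_logic;
  signal q_s,    q_c    : unsigned(17 downto 0);
begin
  ph_sin <= phase;
  ph_cos <= phase + 1024;   -- cos(x) = sin(x + pi/2), wraps mod 4096

  -- Quadrant folding: odd quadrants read the table backwards,
  -- the second half of the circle negates the result.
  addr_s <= ph_sin(9 downto 0) when ph_sin(10) = '0' else 1023 - ph_sin(9 downto 0);
  addr_c <= ph_cos(9 downto 0) when ph_cos(10) = '0' else 1023 - ph_cos(9 downto 0);

  process (clk)
  begin
    if rising_edge(clk) then
      q_s   <= rom(to_integer(addr_s));   -- port A of the dual-port RAM
      q_c   <= rom(to_integer(addr_c));   -- port B
      neg_s <= ph_sin(11);
      neg_c <= ph_cos(11);
    end if;
  end process;

  sine   <= -signed(resize(q_s, 19)) when neg_s = '1' else signed(resize(q_s, 19));
  cosine <= -signed(resize(q_c, 19)) when neg_c = '1' else signed(resize(q_c, 19));
end architecture;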
On Jan 15, 6:07 am, sunil <sunnyd...@gmail.com> wrote:
> Downloading Bitstream onto the target board
> *********************************************
> impact -batch etc/download.cmd
>
> Linux ist SuSE 9.2
> Starte impact ...
>
> Release 8.2.02i - iMPACT I.34
> Copyright (c) 1995-2006 Xilinx, Inc. All rights reserved.
>
> // *** BATCH CMD : setMode -bs
> // *** BATCH CMD : setCable -port auto
> AutoDetecting cable. Please wait.
> Connecting to cable (Parallel Port - parport0).
> Connecting to cable (Parallel Port - parport1).
> Connecting to cable (Parallel Port - parport2).
> Connecting to cable (Parallel Port - parport3).
> Connecting to cable (Usb Port - USB21).
> Checking cable driver.
> Overriding Xilinx file <> with local file
> </usr/local/eda/xilinx/edk_82i/bin/lin/>
> Cable connection failed.
> Cable autodetection failed.
>
> Done!
> At Local date and time: Tue Jan 15 14:49:29 2008
> make -f system.make bits started...
> make: Nothing to be done for `bits'.
>
> Done!
> At Local date and time: Tue Jan 15 14:49:35 2008
> make -f system.make download started...
>
> *********************************************
> Downloading Bitstream onto the target board
> *********************************************
> impact -batch etc/download.cmd
>
> Linux ist SuSE 9.2
> Starte impact ...
>
> Release 8.2.02i - iMPACT I.34
> Copyright (c) 1995-2006 Xilinx, Inc. All rights reserved.
>
> // *** BATCH CMD : setMode -bs
> // *** BATCH CMD : setCable -port auto
> AutoDetecting cable. Please wait.
> Reusing 7806C421 key.
> Reusing FC06C421 key.
> Connecting to cable (Parallel Port - parport0).
> Reusing 7906C421 key.
> Reusing FD06C421 key.
> Connecting to cable (Parallel Port - parport1).
> Reusing 7A06C421 key.
> Reusing FE06C421 key.
> Connecting to cable (Parallel Port - parport2).
> Reusing 7B06C421 key.
> Reusing FF06C421 key.
> Connecting to cable (Parallel Port - parport3).
> Reusing A006C421 key.
> Reusing 2406C421 key.
> Connecting to cable (Usb Port - USB21).
> Checking cable driver.
> Overriding Xilinx file <> with local file
> </usr/local/eda/xilinx/edk_82i/bin/lin/>
> Cable connection failed.
> Reusing B406C421 key.
> Reusing 3806C421 key.
> Reusing B506C421 key.
> Reusing 3906C421 key.
> Reusing B606C421 key.
> Reusing 3A06C421 key.
> Reusing B706C421 key.
> Reusing 3B06C421 key.
> Cable autodetection failed.
>
> Done!

The above messages indicate that your programming adapter is not connected. May I suggest that in the future you describe your setup, and specify exactly what it is that you are asking for help on?

G.
Article: 128107
Does Xilinx have any multicore FPGAs?
Article: 128108
In article <1e8c5540-f9a2-43bf-b5c8-6bea27b837e8@x69g2000hsx.googlegroups.com>, newsgroup@johnhandwork.com says...
> On Jan 15, 7:30 am, Frater <mfrater...@VELIKIHgmail.SLoVAcom> wrote:
> <snip>
> > I could rewrite (1) like this:
> > (2)
> > x = R*cos(fi)
> > y = R*sin(fi)
>
> If these are your two choices and the magnitude estimate is "precise
> enough" for you, perhaps a simple sin/cos lookup would be acceptable,
> too.

The problem is that 'fi' is unknown. I would have to calculate it. The only inputs I get are:
gx
gy
To calculate 'fi' I could use CORDIC or arctan(gy/gx), again coming back to the same problem.

> Embedded multipliers are pretty fast. One multiply takes the same
> time as 2-4 adds depending on family and placement. Assuming you're
> talking "shift" in terms of several bits of shift index, a shift of
> 0-31 would take roughly the same amount of time as the other two
> situations... roughly. Division is the toughest operation even
> compared to the square root which you don't seem to be interested
> in :-).

Wow!
So multiplying is 2x faster than adding?
Or did you mean adding 2-bit by 2-bit signals?
Article: 128109
Jatin,

Xilinx can actually claim it had "multi-core" before Intel coined the phrase: the first Virtex II Pro family had devices with two IBM 405PPC processors in them (winter, 2004).

http://www.xilinx.com/publications/xcellonline/xcell_48/xc_pdf/xc_linux48.pdf

In fact, the largest device was designed to have four processors, but no one seemed interested in buying an FPGA with four IBM 405PPCs in it then (it wasn't popular, yet).

Part of Erich Goetting's legacy may be that he saw multi-core as a solution three years before anyone else (in 2001), in addition to all of his many accomplishments.

http://www.upenn.edu/gazette/0507/obits.html (see 1980 graduates)

And, today, the Virtex 4 FX series has devices with two 405PPC cores as well.

For Virtex 5 FX, that announcement will be made soon, but you can probably assume there will continue to be at least two PPC cores in the larger parts in the FX line. Perhaps in the future we may even have more than 2.

As well, today, one can place many (as in perhaps dozens of) MicroBlaze(tm) soft processors in our larger parts. Researchers have been using our parts to investigate the issues of "multi-core" again, long before Intel and AMD made it so popular.

http://www.google.com/search?hl=en&q=multicore+microblaze&btnG=Google+Search (2,002 hits...)

At a talk given at SELSE II by an Intel scientist on "multi-core", where he concluded Intel's future may have 288 cores beyond 22nm, one attendee stood up and asked (it wasn't me, by the way) "sounds like you are describing a coarse-grain FPGA!" The Intel scientist was incensed, and reacted quite emotionally, as he felt it was an insult to be compared to a lowly and inferior technology, the "FPGA"!

http://selse2.selse.org/presentations/UIUC_SBorkar_April_11_2006.pdf (see slide 15)

Of course, he completely ignored the communication between cores, and left it as an "exercise for the student" not worthy of his brilliance and attention. Not to mention there are no tools and no languages that allow multiple processors to all work at the same time (unless you count VHDL and Verilog, which FPGAs use already).

So, Xilinx has "been there, done that" and supports any and all efforts to make programming massively parallel structures more efficient (as we are the premier vendor of massively parallel programmable devices today).

Austin
Article: 128110
On Jan 15, 10:20 am, Frater <mfrater...@VELIKIHgmail.SLoVAcom> wrote:
> In article <1e8c5540-f9a2-43bf-b5c8-6bea27b837e8
> @x69g2000hsx.googlegroups.com>, newsgr...@johnhandwork.com says...
>
> > If these are your two choices and the magnitude estimate is "precise
> > enough" for you, perhaps a simple sin/cos lookup would be acceptable,
> > too.
>
> The problem is that 'fi' is unknown. I would have to calculate it. The only
> inputs I get are:
> gx
> gy
>
> To calculate 'fi' I could use CORDIC or arctan(gy/gx), again coming back to
> the same problem.
>
> > Embedded multipliers are pretty fast. One multiply takes the same
> > time as 2-4 adds depending on family and placement.
>
> Wow!
> So multiplying is 2x faster than adding?
> Or did you mean adding 2-bit by 2-bit signals?

Adders are typically implemented with acceleration hardware in the form of "carry chains" such that adding two 4-bit values is almost the same speed as adding two 16-bit values. Almost. You can perform 2-4 add operations ((((A+B)+C)+D)+E) in the time it takes a multiply to complete.

Looking at the Virtex-5 data sheet, for instance, the timing data found in http://www.xilinx.com/support/documentation/data_sheets/ds202.pdf shows performance information in table 27 with a 450 MHz 16-bit adder in the slower device. The *pipelined* multiplier in the DSP48E block is also shown as 450 MHz. A later table for the DSP48E shows a 2-register multiply as 275 MHz "without MREG." It's all got to do with pipelining. The 450 MHz *may* be a somewhat arbitrary limit because the larger V5 devices are limited to 450 MHz global clocks.

It might be fun to experiment with different approaches. Nice thing about CORDIC is the free multiply and the clean pipelining.

Check out the timing data sheet for the device you're going to use to see if you can get a better feel for some things. People who have been around FPGAs long enough can go to the detailed timing characteristics to estimate delays. Happily, there's information that people new to the discipline can use as well. Different families will give slightly different relative scales.

- John_H
Article: 128111
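Since the "it's all got to do with pipelining" point trips up a lot of newcomers, here is a minimal sketch of what a fully pipelined multiply looks like in VHDL (the depth and register names are arbitrary; the tools will normally pull these registers into the embedded multiplier/DSP block). It has three clocks of latency but still produces one new 18x18 product every clock, which is why its throughput can match an adder's:

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity mult18_pipe is
  port (
    clk : in  std_logic;
    a   : in  signed(17 downto 0);
    b   : in  signed(17 downto 0);
    p   : out signed(35 downto 0)
  );
end entity;

architecture rtl of mult18_pipe is
  signal a_r, b_r : signed(17 downto 0) := (others => '0');
  signal m_r, p_r : signed(35 downto 0) := (others => '0');
begin
  process (clk)
  begin
    if rising_edge(clk) then
      a_r <= a;           -- input registers
      b_r <= b;
      m_r <= a_r * b_r;   -- middle ("MREG"-style) register
      p_r <= m_r;         -- output register
    end if;
  end process;

  p <= p_r;
end architecture;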
Seems to work fine when char_val is 8 bits.

> char_slv <= std_logic_vector(to_unsigned(char_pos, char_slv'length));

Article: 128112
On Jan 15, 1:50 pm, austin <aus...@xilinx.com> wrote: > Jatin, > > Xilinx can actually claim it had "multi-core" before Intel coined the > phrase: The first Virtex II Pro family had devices with two IBM 405PPC > processors in them (winter, 2004). > > http://www.xilinx.com/publications/xcellonline/xcell_48/xc_pdf/xc_lin... > > In fact, the largest device was designed to have four processors, but no > one seemed interested in buying a FPGA with four IBM 405PPC's in it then > (it wasn't popular, yet). > > Part of Erich Goetting's legacy may be that he saw multi-core as a > solution three years before anyone (in 2001) in addition to all of his > many accomplishments. > > http://www.upenn.edu/gazette/0507/obits.html > (see 1980 graduates) > > And, today, Virtex 4 FX series has devices with two 405PPC cores, as well. > > For Virtex 5 FX, that announcement will be made soon, but you can > probably assume there will continue to be at least two PPC cores in the > larger parts in the FX line. > > Perhaps in future, we may even have more than 2. > > As well, today, one can place many (as in perhaps dozens) of > MicroBlaze(tm) soft processors in our larger parts. Researchers have > been using our parts to investigate the issues of "multi-core" again > long before Intel and AMD made it so popular. > > http://www.google.com/search?hl=en&q=multicore+microblaze&btnG=Google... > (2,002 hits...) > > At a talk given at SELSE II by an Intel Scientist on "multi-core" where > he concludes Intel's future may have 288 cores beyond 22nm, one attendee > stood up and asked (it wasn't me, by the way) "sounds like you are > describing a course-grain FPGA!" The Intel scientist was incensed, and > reacted quite emotionally, as he felt it was an insult to be compared to > a lowly and inferior technology "FPGA"! > > http://selse2.selse.org/presentations/UIUC_SBorkar_April_11_2006.pdf > > see slide 15. > > Of course, he completely ignores the communication between cores, and > left it as an "exercise for the student" not worthy of his brilliance > and attention. > > Not to mention there are no tools and no languages that allows multiple > processors to all work at the same time (unless you count VHDL and > verilog, which FPGAs use already). > > So, Xilinx has "been there, done that" and supports any and all efforts > to make programming massively parallel structures more efficient (as we > are the premier vendor of massively parallel programmable devices today). > > Austin I think that the many people doing parallel processing since the Transputer days would argue that there are many "tools and languages that allow multiple processors to all work at the same time", although you might have to look very hard to find these running in a PC... Here's one I worked on back in the 1980's: http://www.flavors.com/HTML%20Presentation%20folder/index.htmArticle: 128113
Hey all,
I need to input 2 signals into my Spartan development board. These will be 5V (+/-10%) signals which I am running through 500 ohm series resistors for protection per http://www.xilinx.com/products/spartan3e/sp3e_power.pdf. My question is, where can I safely connect the inputs on the development board?
Just as added info, the design is a homebrew ballistic chronograph for paintballs. I'm using a 2-IR-LED gate system at a fixed separation -- the signals represent the gate outputs, obviously. I only need to resolve timing of about 75 microseconds.

thanks,
Jay
Article: 128114
On Jan 15, 2:46 pm, shadfc <jay.winein...@gmail.com> wrote:
> Hey all,
> I need to input 2 signals into my Spartan development board. These
> will be 5V (+/-10%) signals which I am running through 500 ohm series
> resistors for protection per http://www.xilinx.com/products/spartan3e/sp3e_power.pdf.
> My question is, where can I safely connect the inputs on the
> development board?
> Just as added info, the design is a homebrew ballistic chronograph
> for paintballs. I'm using a 2-IR-LED gate system at a fixed separation
> -- the signals represent the gate outputs, obviously. I only need to
> resolve timing of about 75 microseconds.
>
> thanks,
> Jay

Most inputs are "safe" but using one of the side connectors would be "convenient." J1, J2, and J4 are on the bottom right of the board and are powered by 3.3V Vcco rails so those should work fine for you. There are many options, those are just rather convenient.

Have you looked at the starter board's user guide? (UG230)

- John_H
Article: 128115
On Jan 15, 6:04 pm, John_H <newsgr...@johnhandwork.com> wrote:
> Most inputs are "safe" but using one of the side connectors would be
> "convenient." J1, J2, and J4 are on the bottom right of the board and
> are powered by 3.3V Vcco rails so those should work fine for you.
> There are many options, those are just rather convenient.
>
> Have you looked at the starter board's user guide? (UG230)
> - John_H

Hi John,
Yeah, I've read the User Guide and I tied my design to two of the pins in J4 initially before I posted here. However the placer gave me the following errors during synthesis:

ERROR:Place:311 - The IOB gateB is locked to site PAD20 in bank 0. This violates the SelectIO banking rules. Other incompatible IOBs may be locked to the same bank, or this IOB may be illegally locked to a Vref site.
ERROR:Place:311 - The IOB gateA is locked to site PAD16 in bank 0. This violates the SelectIO banking rules. Other incompatible IOBs may be locked to the same bank, or this IOB may be illegally locked to a Vref site.

I had the pins D7 and E8 as LVCMOS33 when I got those errors. I just changed it to LVTTL and the placer completed. I am not really sure why -- you're welcome to explain if you want. I'll be the first to admit that I don't understand a ton about the internals of this stuff.

Jay
Article: 128116
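For reference, one way to keep this sort of thing from biting later is to pin the I/O standard down explicitly per net instead of relying on defaults, so the map/place report shows exactly which standards end up sharing a bank. A sketch of the corresponding UCF lines (D7/E8 are just the pins Jay mentioned; check UG230 for the header pins you actually wire up):

# Hypothetical constraints for the two gate inputs.
NET "gateA" LOC = "D7" | IOSTANDARD = LVTTL ;
NET "gateB" LOC = "E8" | IOSTANDARD = LVTTL ;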
pg4100 wrote:
> Thanks for your feedback Goeran, the simulation in Modelsim works
> fine. I have a PCI card, so the LEDs approach sounds good but I have
> already written some kind of library with different applications and I
> wanna see if they also work on the FPGA and not only in the simulation.
>
> So I was thinking of having the program counter as my trigger, so when
> the end is reached then I wanna somehow read out the values of the
> registers with Chipscope. Or each time the program counter changes I read
> out the values of the registers to see if the results are as expected,
> and could thus also immediately identify the instructions that have caused
> problems. So far I have just debugged a simple counter with
> Chipscope, so I hope the approach with the RISC also works as I have just
> described.

Since you have simulation working, and thus expect to be looking for subtle failures (or even testing over-clocking..), what about using a compare-array scheme? Most FPGAs have RAM 'for free', so use that thus:

Code looks like this

FOR ALL Tests
  Ans = FuncCall(tests);
  IF Ans <> ArrayCompare[tests] THEN
    ArrayCompare[tests] := Ans;
    INC(ErrorCounter);
  ENDIF

In SIM, you clear ArrayCompare[xx], and so it should fail and load, on all calls, on the first pass, and on the second pass fail on none. You then copy that answer ArrayCompare[] from SIM, and download to the FPGA, and run, which becomes the same as the second pass, and you should have zero compare failures..

-jg
Article: 128117
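Jim's loop above runs as software on the processor under test; the same compare/capture idea can also be cast as a small hardware checker sitting next to the CPU (a variation on the scheme, not a transcription of it). A rough sketch with made-up widths -- the expected-answer memory is captured on the first pass and should report zero errors on the second:

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- Compare-array checker: on a mismatch, store the new answer and count it.
entity compare_array is
  generic (
    ADDR_W : natural := 10;    -- 1K test results
    DATA_W : natural := 18
  );
  port (
    clk        : in  std_logic;
    test_valid : in  std_logic;                        -- a new answer is ready
    test_idx   : in  unsigned(ADDR_W-1 downto 0);
    answer     : in  std_logic_vector(DATA_W-1 downto 0);
    err_count  : out unsigned(15 downto 0)
  );
end entity;

architecture rtl of compare_array is
  type ram_t is array (0 to 2**ADDR_W - 1) of std_logic_vector(DATA_W-1 downto 0);
  signal ram  : ram_t := (others => (others => '0'));  -- cleared = "first pass"
  signal errs : unsigned(15 downto 0) := (others => '0');
begin
  process (clk)
  begin
    if rising_edge(clk) then
      if test_valid = '1' then
        if ram(to_integer(test_idx)) /= answer then
          ram(to_integer(test_idx)) <= answer;          -- capture the new value
          errs <= errs + 1;
        end if;
      end if;
    end if;
  end process;

  err_count <= errs;
end architecture;

Note that the unregistered read-compare will tend to infer distributed RAM; a registered read would be needed to push the array into block RAM.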
On 2008-01-15, Frater <mfraterBEZ@VELIKIHgmail.SLoVAcom> wrote:
>
> I'm a student planning to work with an FPGA for my thesis. Totally
> inexperienced. Here I'm wondering about speed. I don't have a feel for
> the speed of multipliers, shifting, adding,...

Your question is formed in a way that is leading people to tell you the *latency* of a multiply vs an add. If you present your arguments to a combinatorial adder or a multiply block, after some small amount of time the output will be stable and you could clock it into a result register. That is not usually your utmost concern. Even if you have a multiplier which is clocked and takes several cycles to complete vs an adder which completes within one cycle, the multiplier will likely accept input and produce output on every single cycle. It has much more latency than the adder, but it can have equal throughput.

Also, in FPGAs you can almost always trade off area (how much of the FPGA is used) with speed. You are not dealing with a fixed set of operations that take a fixed amount of time, like a CPU. You can just plain have more operations if that's what it takes. In fact, if you use a "wizard" to produce a multiplier you get to choose how fast it works. If you do this in Altera's megafunction generator it will update the resources required as you make the changes.

--
Ben Jackson AD7GD
<ben@ben.com>
http://www.ben.com/
Article: 128118
On Jan 14, 12:38 pm, Siva Velusamy <siva.velus...@xilinx.com> wrote:
> ratemonotonic wrote:
> > Hi all,
> >
> > I am new to the Xilinx tools and currently I am working with a project
> > constructed using EDK 7.1. I am using EDK 9.2.
> >
> > In this old project Xilnet is used for a webserver demo talking to an
> > external MAC/PHY chip (SMSC 91C111).
> >
> > Where can I get this library?
>
> xilnet was deprecated for the last few releases. It is still there in
> 9.2, but will go away in 10.1.
>
> The preferred solution is to use lwIP.
>
> /Siva

lwIP is not an adequate replacement for xilnet in all cases. First, lwIP requires some form of OS kernel and timers, while xilnet can be used in stand-alone applications. As a result, xilnet is more suitable for smaller projects where code space is a constraint. However, reading the license for the code, it appears that you can keep using it even though it is deprecated; you will just need to turn it into a user library. I'm working on several projects on ML boards where we simply don't have enough code space left for lwIP and xilkernel, so this is the approach we are taking.
Article: 128119
I'm trying the Virtex-4 Embedded Tri-Mode Ethernet MAC Wrapper v4.4. In tri-mode, TCP/IP communication succeeds at 1000BASE-T / 100BASE-TX, but 10BASE-T communication is bad. Only one ping is sent, but many pings or ping replies keep repeating. Does anyone know about this problem?
Article: 128120
Hi all,
I'm a new guy in FPGA. I need to implement GMSK in an Altera FPGA. I plan to design a Gaussian filter but I don't know how to do it. Can I implement the Gaussian filter by using an FIR filter (with the functions of DSP Builder)? Has anyone done this before? If so, do you have any method to complete this task? I am really struggling! I have only just started to learn FPGAs. Thank you.
Dick
Article: 128121
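On the FIR question: yes, in the usual GMSK arrangement the Gaussian pulse shaping is just a short FIR whose taps sample the Gaussian response for your BT product and oversampling ratio; the coefficients are computed offline (MATLAB/Octave, a spreadsheet, whatever) and then dropped into the filter, whether that filter comes from DSP Builder or is hand-written. A hand-written transposed-form sketch is below -- the tap values are placeholders, not a real Gaussian design, and with symmetric taps the reversed coefficient order of this structure does not matter:

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity gauss_fir is
  port (
    clk  : in  std_logic;
    din  : in  signed(11 downto 0);
    dout : out signed(27 downto 0)
  );
end entity;

architecture rtl of gauss_fir is
  type coef_t is array (0 to 7) of integer;
  constant COEFFS : coef_t := (3, 17, 58, 122, 122, 58, 17, 3);  -- placeholders only

  type acc_t is array (0 to 7) of signed(27 downto 0);
  signal taps : acc_t := (others => (others => '0'));
begin
  process (clk)
  begin
    if rising_edge(clk) then
      -- transposed form: every stage adds its own product of the *current*
      -- input to the delayed partial sum from the previous stage
      taps(0) <= resize(din * to_signed(COEFFS(0), 12), 28);
      for i in 1 to 7 loop
        taps(i) <= taps(i-1) + resize(din * to_signed(COEFFS(i), 12), 28);
      end loop;
    end if;
  end process;

  dout <= taps(7);
end architecture;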
On 2008-01-15, Frater <mfraterBEZ@VELIKIHgmail.SLoVAcom> wrote:
> I could rewrite (1) like this:
> (2)
> x = R*cos(fi)
> y = R*sin(fi)

How many values can fi take? For example, if you only need 1024 values between 0 and 2*pi you can just use blockrams in the FPGA to calculate cos(fi) and sin(fi) (e.g. you can fit 1024 18-bit values into a single blockram in a Virtex-2). Whether you should do this depends on whether you are constrained by the number of logic blocks or the number of memory blocks in the FPGA. If you are constrained by neither, go with the lookup table since that will be far easier :)

/Andreas
Article: 128122
Mike Treseler <mike_treseler@comcast.net> writes: > -jg wrote: > >> ... I am forced to use Google groups. A klunkier interface than what >> I am used to and it does >> obfuscate the poster ID more than my usual news reader. > > My favorite news service is > http://news.individual.net/ > It works well at home or through firewalls. > It's 10 euro a *year*, and well worth it > for the trouble-free service from anywhere. Seconded! Cheers, Martin -- martin.j.thompson@trw.com TRW Conekt - Consultancy in Engineering, Knowledge and Technology http://www.conekt.net/electronics.htmlArticle: 128123
Thank you! You're right! The 2nd assignment overrode the 1st one.

Dwayne Dilbeck wrote:
> In this case the VHDL is working the same way a piece of Verilog code would.
> The signal does change just as it would in Verilog, but the value change
> occurs in 0 time. You are quite right that the previous code needs a state
> machine controlling it in order to have data_buffer <= "0000_0000_0000_0000_1111"
> written.
> "Ben Jackson" <ben@ben.com> wrote in message
> news:slrnfom4gj.25b0.ben@saturn.home.ben.com...
>> On 2008-01-14, Nick <tklau@cuhk.edu.hk> wrote:
>>> data_buffer <= "00000000000001111"; --A
>>> sram_io <= data_buffer;
>>> data_buffer <= "00000000000000000101"; --B
>> I don't speak VHDL, but in Verilog if you wrote something like that
>> in a process that happened on every clock, the first assignment of
>> data_buffer would be overridden by the second. If you want a series
>> of steps (outside of simulation) you're going to have to do it with
>> some kind of counter or state machine.
>>
>> Eg:
>>
>> data_buffer <= phase_a ? 20'b1111 : 20'b0101;
>>
>> (in Verilog, but you get the idea)
>>
>> --
>> Ben Jackson AD7GD
>> <ben@ben.com>
>> http://www.ben.com/

Article: 128124
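A tiny sketch of the counter/state-machine fix the thread is pointing at, using the signal names from the quoted snippet (widths made consistent at 20 bits, and the SRAM tristate/control details left out): each word now really does sit on the bus for one clock, instead of the later assignment silently winning within the same process.

library ieee;
use ieee.std_logic_1164.all;

entity sram_writer is
  port (
    clk     : in  std_logic;
    sram_io : out std_logic_vector(19 downto 0)   -- tristate control omitted
  );
end entity;

architecture two_step of sram_writer is
  signal phase_a     : std_logic := '1';
  signal data_buffer : std_logic_vector(19 downto 0) := (others => '0');
begin
  process (clk)
  begin
    if rising_edge(clk) then
      if phase_a = '1' then
        data_buffer <= "00000000000000001111";   -- word A
      else
        data_buffer <= "00000000000000000101";   -- word B
      end if;
      phase_a <= not phase_a;                    -- alternate A / B each clock
    end if;
  end process;

  sram_io <= data_buffer;
end architecture;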
I do not think that you need multipliers at all.

For a circular Hough transform you process each point of your input (possibly preprocessed for gradients, whatever) and draw a circle in an output histogram. Using Bresenham's circle algorithm you can compute the coordinates for the target pixels at a cost of two additions per pixel. No angles, multiplication, squares, roots, etc. involved.

Not only is Bresenham's algorithm faster than your proposal, it also avoids lots of issues related to rounding, error accumulation, etc. Bresenham's circles are perfect; sweeping phi as you suggest in your second formulation will look really ugly.

As you are drawing many, many circles of the same radius you could also precompute a list of coordinates into a BRAM with a CPU and then process that list for all input pixels in hardware. This would provide you with fast Hough transform hardware for arbitrary shapes up to a certain size.

Kolja Sulimma
cronologic ohg
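For reference, the add-only stepping Kolja describes is the standard midpoint/Bresenham circle recurrence; a sketch of a one-octant generator is below (radius fixed by a generic, one new (x, y) offset per clock; mirroring into the other seven octants and adding the circle centre are left out, and the signal ranges assume a radius of a few hundred at most):

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity circle_octant is
  generic ( R : positive := 40 );
  port (
    clk     : in  std_logic;
    restart : in  std_logic;                 -- pulse to start a new circle
    x       : out unsigned(8 downto 0);
    y       : out unsigned(8 downto 0);
    valid   : out std_logic
  );
end entity;

architecture rtl of circle_octant is
  signal xi, yi : integer range 0 to 511 := 0;
  signal err    : integer range -1024 to 1023 := 0;
  signal run    : std_logic := '0';
begin
  process (clk)
  begin
    if rising_edge(clk) then
      if restart = '1' then
        xi  <= R;
        yi  <= 0;
        err <= 1 - R;
        run <= '1';
      elsif run = '1' then
        if xi < yi then                      -- stepped past 45 degrees: done
          run <= '0';
        else
          yi <= yi + 1;                      -- always step y
          if err <= 0 then
            err <= err + 2*yi + 3;           -- 2*(yi+1) + 1, using the new y
          else
            xi  <= xi - 1;                   -- occasionally step x inwards
            err <= err + 2*(yi - xi) + 5;    -- 2*((yi+1) - (xi-1)) + 1
          end if;
        end if;
      end if;
    end if;
  end process;

  x     <= to_unsigned(xi, 9);
  y     <= to_unsigned(yi, 9);
  valid <= '1' when run = '1' and xi >= yi else '0';
end architecture;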