Article: 114376
glen herrmannsfeldt wrote: > Colin Hankins wrote: >> The FCS of the ethernet packet is the only portion that is sent most >> significant bit first. Thus there is no need to "switch" it. > That isn't quite true, as an FCS isn't a number, it is a bit string. > That is, it doesn't have place values. All bits are equally > significant in the result. (Being only tested for equality.) Yes. The point is that one bit order works and the opposite does not. -- Mike TreselerArticle: 114377
Andrew Rogers wrote: > Found it at: > > http://sourceforge.net/projects/xc3sprog/ > > Great to see so many changes added to it. Once I worked out how to drive > sourceforge.net, ie. to upload changes, etc. I have a few updates myself. That should be easy; just install subversion and you should be done with it. Let us ping Eric Jonas, the owner of the project; my understanding is he's kind of busy and it has been hard to get commits added to the sourceforge project. There are a few other people I saw looking to post updates for xc3sprog, let us ping them too. I hope I'll see some activity soon! -- mmihaiArticle: 114378
glen herrmannsfeldt wrote: > I am pretty sure that the design allows a simple LFSR with the > transmitted bits as input, and then shifted out at the appropriate > time. I believe, but am slightly less sure, that it also works > such that a properly received frame will generate a constant value > in the same LFSR when run through the data and FCS of the incoming > frame. Both statements are true. > Ethernet was designed in the days when logic was much more expensive > than today. Simplifying the required logic was important. Yes and this is also the source of much confusion about the spec. The recommended LFSR circuit in the appendix does not implement the mathematical description in the text directly, even though the result is the same. -- Mike TreselerArticle: 114379
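As a software cross-check of the bit-ordering discussion above, here is a minimal sketch of the 802.3 CRC-32 in its reflected, bit-serial form (a model of the arithmetic, not of the recommended LFSR circuit itself), assuming the usual conventions: the register is preset to all ones, message bytes are fed least-significant bit first as they appear on the wire, and the register is complemented at the end. The well-known check value for the string "123456789" is 0xCBF43926.

    /* Bit-serial model of the 802.3 CRC-32 (reflected form of poly 0x04C11DB7). */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static uint32_t crc32_feed_bit(uint32_t crc, unsigned bit)
    {
        uint32_t fb = (crc ^ bit) & 1u;   /* feedback = register lsb XOR data bit */
        crc >>= 1;
        if (fb)
            crc ^= 0xEDB88320u;           /* bit-reversed 0x04C11DB7 */
        return crc;
    }

    int main(void)
    {
        const char *msg = "123456789";
        uint32_t crc = 0xFFFFFFFFu;                  /* preset register to all ones */
        for (size_t i = 0; i < strlen(msg); i++)
            for (unsigned b = 0; b < 8; b++)         /* bytes go on the wire lsb first */
                crc = crc32_feed_bit(crc, (msg[i] >> b) & 1u);
        printf("FCS = 0x%08lX\n", (unsigned long)(crc ^ 0xFFFFFFFFu)); /* 0xCBF43926 */
        return 0;
    }

As the posts above note, the resulting 32 FCS bits are then transmitted most-significant coefficient first, which is exactly the ordering question being discussed.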
On Fri, 12 Jan 2007 12:05:02 GMT, Andrew Rogers <andrew@_NO_SPAM_rogerstech.co.uk> wrote: >Hi, > >It's been over two years since I released xc3sprog. Since then Xilinx >has released ISEWebPack for GNU/Linux. Many people have sent me >modifications for various additions to xc3sprog. > >A lot has happenned to me in two years. I moved house (the Xilinx kit is >still in the loft). Have completeted my Ph.D. except the thesis which >seems to take for ever. > >What I would like to do is to incorporate the many suggestions and >patches into the next release of xc3sprog if people are still using it. > >I would be particularly interested in modifications to xc3sprog for USB >programmers. My laptop does not have a parallel port:( > >Regards >Andrew Rogers Now *this* is cool! http://www.rogerstech.co.uk/DSP/Applet.html If you work on this any more, step/impulse response would be great, too. JohnArticle: 114380
On 13 Jan 2007 07:04:45 -0800, NickHolby@googlemail.com wrote: >Hi all, > >I'm an electronics hobbyist; my projects are generally quite small. >They generally consist of me using 7400 chips, and in some cases the >PIC microcontroller. However, I'm wanting to stretch out a little. What >I'm after is nothing more than a chip that contains thousands of 7400 >chips, no clock, no onboard memory etc... Just a microchip which is in >essence thousands of 7400 chips. I would like actual control of the >connections between the chips, so I can say what gates link with what >without me doing something so general as 1+1 and the programmer >figuring out what to put. > >I thought FPGAs might be suitable, but they all appear to try to offer >more than just something like that. Can anybody point me in some rough >direction? > >Thanks for your time, >Nick FPGA. Get one of the Xilinx development boards and download the WebPack free software. Expect a decently nasty learning curve, but then you'll be able to do amazing stuff. Need a dozen 32-bit up/down counters? Twenty 16x16 multipliers? No sweat. JohnArticle: 114381
On 2007-01-13, NickHolby@googlemail.com <NickHolby@googlemail.com> wrote: > Just a microchip which is in > essence thousands of 7400 chips. I would like actual control of the > connections between the chips, so I can say what gates link with what > without me doing something so general as 1+1 and the programmer > figuring out what to put. The Xilinx WebPack includes a schematic capture input that allows you to use 7400 chips to represent your logic. You can play with it without buying any hardware. As for hardware, a CPLD is tens to hundreds (in a few cases thousands) of flops which can be connected with quite a bit (equivalent of hundreds or thousands of 7400 series chips) of combinatorial logic. They're generally flash programmable and available in human-solderable packages. An FPGA is thousands to tens or hundreds of thousands of flops and connecting logic. They require some kind of external programming (like a flash eeprom) and the larger parts are only available in fine pitch or BGA type packages. -- Ben Jackson AD7GD <ben@ben.com> http://www.ben.com/Article: 114382
1) Is the V4 IDELAY resolution 75ps like the switching document says or the 78.125ps mentioned in the forum? 78.125 makes sense since it is (1/200MHz)/64. And where does the tap error come from? What is the correct number for THAT? I also saw that the recently updated data sheet says that the per tap resolution is actually an average across the whole 64 tap line. And it is suggested to use at least 5 taps. What is the reason for this???

2) I am trying to figure out the best way to characterize some data coming into the FPGA. I have 4 pairs of LVDS data coming in to input pins. Eventually these pairs (which are from a 1-to-10 LVDS distribution chip - so they are copies of the same data) will be delayed, using the IDELAY, 45 degrees relative to each other. I will have a 0 phase, 45 phase, 90 phase, and 135 phase data input. Using the DDR ISERDES at 1:8 deserialization, I will have 8 phases of data. The reason I am doing this is to recover the data. I know it is at 312 MHz, but I don't have a clock reference so I sample 8 phases of the same data and choose which one is the best (how is not important for this discussion).

I know that before I can delay the data pairs I must first '0 them out'. Due to board layout, clock skew to the ISERDES, etc, the data pairs (when ALL set to IDELAY = 0) will not be perfectly aligned. Currently, I am using the 312 MHz clock that I clock the ISERDES with to drive, via the ODDR method, an external data generator that sends one byte of data. I'm doing this so that I have a known data-to-clock relationship. I have the byte outputs of the ISERDES connected to ChipScope. I monitor the individual channels and one-at-a-time increment the IDELAY until the data corrupts. I know that the data is now being sampled right at the clock edge -- or at least close enough to break setup. I do this for each channel and I end up with 4 different IDELAY values. I then normalize these delays to the smallest one and I have the offset for each data pair. These offsets should align the data (within error and tolerance) to each other. I can then add the phase delays to the data pairs.

This worked fine when I was using the ISE and could use the ChipScope core inserter. Now I've moved to the EDK and have to use the ChipScope IP. I don't know if that has anything to do with anything though! My problem is that I can find the offsets to make the data align relative to each other, but the phases DO NOT work all the time. For example I can see that 4 phases 'see' the data, one doesn't and then the next one does. If everything is aligned correctly, I should not get any skips in detected phase. They should all be consecutive. Even worse, sometimes all 8 phases 'see' the data. This case is no good since there is no way to determine which one saw it first and therefore which one to use to forward data. There should be at LEAST one phase that does NOT detect the data!! Sometimes only one phase 'sees' the data. This should never happen either. The data is VERY clean coming from the test equipment and the distribution chip. I would say the eye opening is about 90% or more.

Can anyone see any major flaws with what I am doing? Is there a better way to guarantee these phase offsets? There is no way to get absolutely precise timing since the 45 degree increments of phase are not divisible by the IDELAY tap resolution. So I automatically have some error, but even with this error, this little scheme should work. Hell, I've seen it work consistently!! Hopefully I didn't just get lucky.
Sorry for the long post!!!Article: 114383
NickHo...@googlemail.com wrote: > Hi all, > > I'm an electronics hobbyist; my projects are generally quite small. > They generally consist of me using 7400 chips, and in some cases the > PIC microcontroller. However, I'm wanting to stretch out a little. What > I'm after is nothing more than a chip that contains thousands of 7400 > chips, no clock, no onboard memory etc... Just a microchip which is in > essence thousands of 7400 chips. I would like actual control of the > connections between the chips, so I can say what gates link with what > without me doing something so general as 1+1 and the programmer > figuring out what to put. > > I thought FPGAs might be suitable, but they all appear to try to offer > more than just something like that. Can anybody point me in some rough > direction? > > Thanks for your time, > Nick FPGAs will do this, as will CPLDs as others have suggested. There are also FLASH FPGAs, and CPLDs with FPGA fabric, like MaxII and MachXO, so there are plenty of HW choices. Your real challenge is deciding just how you are most comfortable doing this bit: "I would like actual control of the connections between the chips" - entry schemes range from Schematic, to Boolean equations (Abel/CUPL/WinPLACE), to HDLs like Verilog or VHDL. Some CPLD vendors are Actel, Altera, Atmel, ICT(anachip), Cypress, Lattice, Xilinx. What is your supply voltage range? That will significantly filter the PLD candidates. -jgArticle: 114384
Mike Treseler wrote: > Weng Tianxiang wrote: > > > When I started debugging the design, I found all single port > > distributed ram were not initialized. All their outputs are 'X'. > > That is normal for a RAM model. > To make a RAM output valid the testbench > must perform a write and read cycle to the > same address. > Maybe a ROM is what you want. > > -- Mike Treseler Hi Mike, I disagree with your opinion this time. The reason is that there is a *.mif file included in the *.vhd file to be used as an initialization file for simulation. Thank you. WengArticle: 114385
motty wrote: > Can anyone see any major flaws with what I am doing? How do you know that the phase difference is constant? -- Mike TreselerArticle: 114386
> How do you know that the phase difference is constant? Which phases are you asking about? The channel-to-channel difference? I would hope that, given a particular build, the phase difference between them would be constant (within a reasonable amount, that is, within an IDELAY tap resolution). The phase difference will be due to board layout, clock skew at the ISERDES blocks and other routing through the IO pads. At least that is what I figure. Again, once characterized, these phase differences should remain somewhat constant. Thanks!Article: 114387
Motty, let me answer your first question: Why are the individual taps not exactly equal? That's because they are 64 concatenated buffers, and no circuit instance is exactly the same as its neighbor, especially at sub-100 nm technology. The total delay over the 64 buffers is kept dead-accurate with the help of a 200 MHz servo circuit. If you change that 5 ns period (within reason, say 10%), all tap delays will follow proportionally. Peter Alfke, Xilinx On Jan 13, 2:37 pm, "motty" <mottobla...@yahoo.com> wrote: > 1) Is the V4 IDELAY resolution 75ps like the switching document says > or the 78.125ps mentioned in the forum? 78.125 makes sense since it is > (1/200MHz)/64. And where does the tap error come from? What is the > correct number for THAT? I also saw that the recently updated data > sheet says that the per tap resolution is actually an average across > the whole 64 tap line. And it is suggested to use at least 5 taps. > What is the reason for this??? > > 2) I am trying to figure out the best way to characterize some data > coming into the FPGA. I have 4 pairs of LVDS data coming in to input > pins. Eventually these pairs (which are from a 1-to-10 LVDS > distribution chip - so they are copies of the same data) will be > delayed, using the IDELAY, 45 degrees relative to eachother. I will > have a 0 phase, 45 phase, 90 phase, and 135 phase data input. Using > the DDR ISERDES at 1:8 deserialization, I will have 8 phases of data. > The reason I am doing this is to recover the data. I know it is at 312 > MHz, but I don't have a clock reference so I sample 8 phases of the > same data and choose which one is the best (how is not important for > this discussion). > > I know that before I can delay the data pairs I must first '0 them > out'. Due to board layout, clock skew to the ISERDES, etc, the data > pairs (when ALL set to IDELAY = 0) will not be perfectly aligned. > Currently, I am using the 312 MHz clock that I clock the ISERDES with > to drive, via the ODDR method, an external data generator that sends > one byte of data. I'm doing this so that I have a known data-to-clock > relationship. I have the byte outputs of the ISERDES connected to > ChipScope. I monitor the individual channels and one-at-a-time > increment the IDELAY until the data corrupts. I know that the data is > now being sampled right at the clock edge -- or at least close enough > to break setup. I do this for each channel and I end up with 4 > different IDELAY values. I then normalize these delays to the smallest > one and I have the offset for each data pair. These offsets should > align the data (within error and tolerance) to each other. I can then > add the phase delays to the data pairs. > > This worked fine when I was using the ISE and could use the ChipScope > core inserter. Now I've moved to the EDK and have to use the ChipScope > IP. I don't know if that has anything to do with anything though! My > problem is that I can find the offsets to make the data align relative > to each other, but the phases DO NOT work all the time. For example I > can see that 4 phases 'see' the data, one doesn't and then the next one > does. If everything is aligned correctly, I should not get any skips > in detected phase. They should all be consecutive. Even worse, > sometimes all 8 phases 'see' the data. This case is no good since > there is no way to determine which one saw it first and therfore which > one to use to forward data. There should be at LEAST one phase that > does NOT detect the data!! 
> Sometimes only one phase 'sees' the data. > This should never happen either. The data is VERY clean coming from > the test equipment and the distribution chip. I would say the eye > opening is about 90% or more. > > Can anyone see any major flaws with what I am doing? Is there a better > way to guarentee these phase offsets? There is no way to get > absolutely precise timing since the 45 degree increments of phase are > not divisible by the IDELAY tap resolution. So I automatically have > some error, but even with this error, this little scheme should work. > Hell, I've seen it work consistently!! Hopefully I didn't just get > lucky. > > Sorry for the long post!!!Article: 114388
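The tap arithmetic in the exchange above can be written out directly. A back-of-envelope sketch, assuming the 200 MHz reference clock (5 ns averaged over the 64-tap line) and, for the second figure, assuming the 312 MHz figure is the bit rate (a bit period of roughly 3.2 ns); the guaranteed min/max tap values come from the data sheet, not from this average.

    /* Rough IDELAY tap arithmetic under the assumptions stated above. */
    #include <stdio.h>

    int main(void)
    {
        const double ref_period_ps = 1e12 / 200e6;          /* 200 MHz reference -> 5000 ps */
        const double tap_ps        = ref_period_ps / 64.0;  /* average tap: 78.125 ps */
        const double bit_period_ps = 1e12 / 312e6;          /* ~3205 ps if 312 Mb/s */
        const double step_ps       = bit_period_ps / 8.0;   /* 45 degrees = 1/8 bit */

        printf("average tap delay : %.3f ps\n", tap_ps);
        printf("45-degree step    : %.1f ps (about %.1f taps)\n",
               step_ps, step_ps / tap_ps);
        return 0;
    }

Because a 45-degree step works out to roughly five taps rather than an exact integer, each phase offset can only be approximate, which is the residual error acknowledged in the original post.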
Phil Hays wrote about a one's complement adder, which uses end-around carry: > It looks to me like there is a chance of a pulse running around the carry > loop for a multiple times, perhaps even forever. There is no way that can happen if the inputs are stable. For an n-bit one's complement adder, there can't be a carry propagation wider than n bits. In other words, in a 16-bit adder, a carry propagation might start in position 5, but even if it wraps it cannot cause carry propagation past position 4. You can prove this by exhaustion for a short word width, and then by induction for longer word widths. The worst-case add time for an n-bit one's complement ripple carry adder is exactly the same as the worst-case for an n-bit two's complement ripple carry adder. This is why there wasn't a problem with using one's complement in various early computers, including the early IBM mainframes, DEC PDP-1, and CDC 6600. > The case of interest is a calculation that should result in an answer of > zero. Note that there are two representations of zero in one's complement > notation, all '1's and all '0's, often called negative zero and positive > zero. > > If there is a carry, the answer is all '0's, or positive zero. If there is > no carry, the answer will be all '1's, or negative zero. If you're suggesting that there is a case in which the end-around carry won't produce the numerically correct result (assuming that the numerical values of "positive zero" and "negative zero" are equal), you are mistaken. > Now suppose my adder is half in the state with a carry, and half in the > state without a carry. The carry, and not carry will both propagate up the > carry chain to the msb and around to the lsb again, chasing each other, > and it is a race. There is no combination of operands for which that can happen. For the sum to be zero (whether represented as "positive zero" or "negative zero"), the operands must be n and -n. But if that is true, then the bit patterns representing the operands are bitwise-complementary, so there will be no carries at all. For example, suppose you add 3 and -3 with a four-bit one's complement adder:

  0011
  1100
  ----
  1111

There is no way for any carry to occur in such a case; the result is "negative zero". Similarly for adding "positive zero" and "negative zero". If you add "positive zero" to itself, there are no carries, and the result is "positive zero". If you add "negative zero" to itself, you get:

  1111
  1111
  ----
 11110  <- before end-around carry
  ----
  1111  <- after end-around carry

Thus the final result is "negative zero". > In other words, this circuit has a metastable state when the answer is > zero. If you still think this is possible, please give an example of operands for which it could occur. EricArticle: 114389
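Eric's worked examples are easy to reproduce with a small value-level model of the end-around carry: add the operands, then fold any carry out of the top bit back into the bottom. This models only the arithmetic; the timing race debated in this thread is a property of the ripple hardware and cannot appear in a numeric model like this one.

    /* One's complement 16-bit addition with end-around carry (value-level model). */
    #include <stdint.h>
    #include <stdio.h>

    static uint16_t ones_add(uint16_t a, uint16_t b)
    {
        uint32_t s = (uint32_t)a + b;
        return (uint16_t)((s & 0xFFFFu) + (s >> 16));  /* fold the end-around carry */
    }

    int main(void)
    {
        printf("3 + (-3)    = 0x%04X\n", ones_add(0x0003, 0xFFFC)); /* 0xFFFF: negative zero, no carries */
        printf("+0 + (-0)   = 0x%04X\n", ones_add(0x0000, 0xFFFF)); /* 0xFFFF: no carries */
        printf("(-0) + (-0) = 0x%04X\n", ones_add(0xFFFF, 0xFFFF)); /* carry wraps, still 0xFFFF */
        return 0;
    }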
Thanks a lot for your responses, they've been quite helpful! It seems FPGAs or CPLDs are the way forward. Much appreciated! :-) NickArticle: 114390
Hi, > While true, the obvious question would be "Why rewrite something?" I think in many cases the most direct approach is to use a counter. When there are not many branches, code in counter style may be more concise because we do not need to explicitly describe the next state for each state. > Obviously you don't have a clue about what constitutes a 'state'. A counter > certainly does have a state...the value of the counter at a particular time. You are right. But the synthesis tool does not regard a counter as a state machine, so conventionally we separate these two styles. Thanks.Article: 114391
Ben Jackson <ben@ben.com> wrote: >On 2007-01-13, NickHolby@googlemail.com <NickHolby@googlemail.com> wrote: >> Just a microchip which is in >> essence thousands of 7400 chips. I would like actual control of the >> connections between the chips, so I can say what gates link with what >> without me doing something so general as 1+1 and the programmer >> figuring out what to put. > >The Xilinx WebPack includes a schematic capture input that allows you to >use 7400 chips to represent your logic. You can play with it without >buying any hardware. This is dangerous advice. 7400 logic is often used in an asynchronous way. Building a flip-flop out of NOR/NAND gates will work with 7400 series logic, but in an FPGA it most probably won't. -- Reply to nico@nctdevpuntnl (punt=.) Bedrijven en winkels vindt U op www.adresboekje.nlArticle: 114392
Nico Coesel wrote: > This is a dangerous advice. 7400 logic is often used in an > asynchronous way. Building a flip-flip out of NOR/NAND gates will work > with 7400 series logic, but in an FPGA it most probably won't. > > -- > Reply to nico@nctdevpuntnl (punt=.) > Bedrijven en winkels vindt U op www.adresboekje.nl Does that still stand with CPLDs? NickArticle: 114393
<NickHolby@googlemail.com> wrote in message news:1168783597.143713.98180@v45g2000cwv.googlegroups.com... > Nico Coesel wrote: >> This is a dangerous advice. 7400 logic is often used in an >> asynchronous way. Building a flip-flip out of NOR/NAND gates will work >> with 7400 series logic, but in an FPGA it most probably won't. >> >> -- >> Reply to nico@nctdevpuntnl (punt=.) >> Bedrijven en winkels vindt U op www.adresboekje.nl > > Does that still stand with CPLDs? > > Nick > Yes, it could, depending on construction and how the synthesis is forced - but then you would use a 7474 - wouldn't you? Cross-coupled NAND set-resets work fine for both Altera FPGAs and CPLDs. IckyArticle: 114394
On Sun, 14 Jan 2007 14:26:57 -0000, "Icky Thwacket" <it@it.it> wrote: >Cross coupled nand set-resets work fine for both Altera FPGA and CPLD's. <loud ringing of alarm bells> Indeed they do; but WHERE are you intending to use one of them? If the set and reset signals you're feeding into them are synchronous (i.e. they are flip-flop outputs) then it makes at least as much sense to implement the SR in a synchronous style. No need for that combinational feedback loop, which will cause (at the very least) warning messages from the timing analyzer. If the set and reset signals are asynchronous, then they may suffer decoding glitches that could give undesirable tripping of your S-R latch if you're unlucky (and, as we all know, your luck generally runs out about two days after you've shipped the product to the most important customer). So you probably need to resynchronise those signals anyway, which means that you could just as easily have used a fully synchronous S-R implementation. So... in almost any realistic situation, the cross-coupled SR latch is either unnecessary or undesirable. It is an example of an asynchronous state machine - admittedly a fairly simple example, and one that's relatively easy to get right - and, like all asynchronous state machines, it requires special techniques to use it properly and to avoid tools making unacceptable optimisations to it. Flame away, I'm sure there are some interesting contrary points of view! -- Jonathan Bromley, Consultant DOULOS - Developing Design Know-how VHDL * Verilog * SystemC * e * Perl * Tcl/Tk * Project Services Doulos Ltd., 22 Market Place, Ringwood, BH24 1AW, UK jonathan.bromley@MYCOMPANY.com http://www.MYCOMPANY.com The contents of this message may contain personal views which are not the views of Doulos Ltd., unless specifically stated.Article: 114395
On 11 Jan 2007 03:32:57 -0800, "John Adair" <g1@enterpoint.co.uk> wrote: >Not set in stone yet but one off prices are likely to be GBP£25-30, >US$ 40-50 on current exchange, for the base level versions with the >XC3S100E. If it starts to ship in reasonable numbers it is likely we >will drop that price back a bit. Certainly discounts will be there for >orders of 10+ even at the initial production run rate. I'm no longer in an academic role, but it seems to me that you have there something that will be very, very desirable as a component of many undergraduate projects. Some silly questions: (1) is there room on the (back of???) the board for a config ROM? (2) is there room on the back for a modest-sized SDRAM? Obviously, neither need be populated on low-cost versions. Nice product. -- Jonathan Bromley, Consultant DOULOS - Developing Design Know-how VHDL * Verilog * SystemC * e * Perl * Tcl/Tk * Project Services Doulos Ltd., 22 Market Place, Ringwood, BH24 1AW, UK jonathan.bromley@MYCOMPANY.com http://www.MYCOMPANY.com The contents of this message may contain personal views which are not the views of Doulos Ltd., unless specifically stated.Article: 114396
"Jonathan Bromley" <jonathan.bromley@MYCOMPANY.com> wrote in message news:fkgkq2tq92j1ge7rtrj877laehcru2vp9g@4ax.com... > On Sun, 14 Jan 2007 14:26:57 -0000, "Icky Thwacket" > <it@it.it> wrote: > >>Cross coupled nand set-resets work fine for both Altera FPGA and CPLD's. > > <loud ringing of alarm bells> > > Indeed they do; but WHERE are you intending to use one of them? > > If the set and reset signals you're feeding into them are synchronous > (i.e. they are flip-flop outputs) then it makes at least as much > sense to implement the SR in a synchronous style. No need for > that combinational feedback loop, which will cause (at the very least) > warning messages from the timing analyzer. > > If the set and reset signals are asynchronous, then they may > suffer decoding glitches that could give undesirable tripping of > your S-R latch if you're unlucky (and, as we all know, your luck > generally runs out about two days after you've shipped the > product to the most important customer). So you probably > need to resynchronise those signals anyway, which means that > you could just as easily have used a fully synchronous S-R > implementation. > > So... in almost any realistic situation, the cross-coupled SR latch > is either unnecessary or undesirable. It is an example of an > asynchronous state machine - admittedly a fairly simple example, > and one that's relatively easy to get right - and, like all > asynchronous state machines, it requires special techniques > to use it properly and to avoid tools making unacceptable > optimisations to it. > > Flame away, I'm sure there are some interesting contrary > points of view! Streuth! Actually the only place I WOULD use it is for an asynchronous front end interface to a switch, i.e. used as a PROPER switch debouncer with a SPCO switch. I was using it as an illustration! Icky

Eric Smith wrote: > Phil Hays wrote about a one's complement adder, which uses end-around > carry: >> It looks to me like there is a chance of a pulse running around the >> carry loop for a multiple times, perhaps even forever. > > There is no way that can happen if the inputs are stable. For an n-bit > one's complement adder, there can't be a carry propogation wider than n > bits. <Snip> please give an example of operands for which it could > occur. Let me give an example.
Note that the two inputs can be anything that are inverses of each other. The first two cases are stable.

  1111
 +0000
 +0      Carry in
 ======
  1111   carry out 0

Note that the carry bits are 0000, or:

  1111
 +0000
 +1      Carry in
 ======
  0000   carry out 1

Note that the carry bits are 1111. With me so far?

To describe an unstable case, I'm going to show only the carry bits. The inputs are the same as above. I'm assuming ripple carry, and that one time step is required for propagation of carry for one bit position.

 Time 0  Carry 0011
      1  Carry 0110
      2  Carry 1100
      3  Carry 1001
      4  Carry 0011
      5  Carry 0110
      6  Carry 1100
      7  Carry 1001
      8  Carry 0011
 ...

The sum would also not be stable. There are multiple ways to avoid the unstable cases. A few that come to mind with little effort:

1) Force the previous carry until the carry output is stable. This has the side effect of making the result (positive or negative zero) depend on the previous computation. (I think that the CDC6600 used this method)

2) If the carry look-ahead shows all propagate bits are '1', then generate a carry in = '1' regardless of the carry input or any generate bits. This has the side effect of producing only positive zeros for any add unless both operands were negative zeros.

3) Force a carry input of '0' until the carry output is stable. This has the side effect of never producing a positive zero output for any add unless both operands were positive zeros.

4) Force a carry input of '1' until the carry output is stable. This has the side effect of producing only positive zeros for any add unless both operands were negative zeros.

-- Phil Hays (Xilinx, but posting for myself)Article: 114397
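Phil's carry table can be reproduced step by step. A small sketch of the ripple model he describes, for A = 1111 and B = 0000: every propagate bit is 1 and every generate bit is 0, so at each time step each carry bit simply takes the value of its neighbour, and the carry vector rotates around the end-around loop without ever settling if it starts half-set.

    /* Time-step model of the end-around carry loop for A = 1111, B = 0000 (4 bits). */
    #include <stdio.h>

    int main(void)
    {
        unsigned c = 0x3;                       /* start half-set: carries 0011 */
        for (int t = 0; t <= 8; t++) {
            printf("Time %d  Carry %u%u%u%u\n", t,
                   (c >> 3) & 1u, (c >> 2) & 1u, (c >> 1) & 1u, c & 1u);
            c = ((c << 1) | (c >> 3)) & 0xFu;   /* one ripple step: rotate left by one */
        }
        return 0;
    }

Each of the four fixes listed above amounts to forcing a defined carry into the loop instead of letting it free-run.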
<topweaver@hotmail.com> wrote in message news:1168775744.930716.268830@a75g2000cwd.googlegroups.com... > Hi, >> While true, the obvious question would be "Why rewrite something?" > > I think in many cases the direct idea is to use a counter. When there > are not many branches, code in counter style may be more concise > because we need not to explicitly describe the next state for each > state. > Still you're not answering 'Why rewrite something?' If the current code
a. Does what it needs to do
b. Runs at the clock speed you need
c. Doesn't take up significantly more resources
then all you can hope to accomplish by rewriting is to expend effort to produce a functionally darn near similar hunk of code...maybe more concise, but at this point that's probably a moot point unless points a, b, and c are not the case. >> Obviously you don't have a clue about what constitutes a 'state'. A >> counter >> certainly does have a state...the value of the counter at a particular >> time. > > You are right. But the synthesis tool does not regard a counter as a > state machine, so conventionally we separate these two styles. > Synthesis tools consider counters and state machines (and many other source code input forms) to be flip flops fed by combinatorial logic. The only thing synthesis tools do 'differently' is that when they encounter a user-defined type definition and a signal of that type being used, they know that the particular binary encoding needed to physically synthesize the code is unimportant to the user, so the synthesis tool is free to choose whatever encoding style it wants to implement the design. While user-defined types are frequently used when creating state machines, they can be used in other places as well, in which case the synthesis tool will again be free to choose the binary encoding for signals of that type...even though they are not 'state machines'. Kevin JenningsArticle: 114398
NickHolby@googlemail.com wrote: >Nico Coesel wrote: >> This is a dangerous advice. 7400 logic is often used in an >> asynchronous way. Building a flip-flip out of NOR/NAND gates will work >> with 7400 series logic, but in an FPGA it most probably won't. >> > >Does that still stand with CPLDs? It depends on the architecture. CPLDs are usually based on an output/flip-flop element preceded by a 'sum of products' element (larger CPLDs use muxes/switch matrices to get some routing flexibility). Because the timing to the output is well defined, it can be used as an input to a 'sum of products' element to create an asynchronous latch. An FPGA usually consists of small look-up tables to contain the logic. Each look-up table adds delay. If you have a bunch of logic around a NAND/NOR based flip-flop, the synthesizer will spread the logic over several look-up tables. The order and timing depends on how the optimizer and router place the logic. It may be that the reset or set conditions don't last long enough to achieve a stable output, which may cause all kinds of weird effects (oscillating, not working, sometimes working), and each time you route an FPGA, the placement of the logic is different, so the outcome of asynchronous logic may be different. -- Reply to nico@nctdevpuntnl (punt=.) Bedrijven en winkels vindt U op www.adresboekje.nlArticle: 114399
Does anyone out there use SDK version 8.2? I recently migrated from 8.1, where I never had an issue compiling small pieces of test code. Now, however, things that used to work do not. I have scoured through the Google searches and the Xilinx website with no luck. The error message that I get is very vague, and I do not quite understand what I need to do to solve the issue. The message is as follows:

Building file: ../main.c
gcc -O0 -Wall -c -fmessage-length=0 -omain.o ../main.c
gcc: not found
make: *** [main.o] Error 127
make: Target `all' not remade because of errors.
Build complete for project temac_testapp

Can anyone shed some light on this for me? I don't think that I have to do much to get things working, but nothing I have tried seems to help. (Go easy on me as I am just starting to use C/C++ and their compilers...simple programs...simple applications, so I'm not all that familiar with some of the terminology.) I appreciate any assistance from anyone. Thanks, Jason