On 4/24/2017 8:46 AM, David Bridgham wrote:
> On Sun, 23 Apr 2017 15:51:33 -0400, rickman wrote:
>
>> I'm sorry but I can't picture the timing from your description. You
>> have two circuits with one input in common.
>
> One input in common? Well crap, I screwed up the verilog. Try this
> instead:
>
> always @(posedge interrupt_check) interrupt_detect <= interrupt_request;
>
> assign interrupt_ack = interrupt_detect & enable;

That's what I thought. This makes sense. The answer is that, in the case of
Xilinx parts, it is well known that the LUTs are glitchless for any one
input changing; that is the real question you seem to be asking. I believe
this is also true for other manufacturers, but I've never explicitly asked.
The Lattice devices are derived from the Xilinx designs through a license
bought from Lucent a long time ago, so their fundamental LUT design is the
same and should work the same. The Altera parts are different in some ways,
but I expect the aspect of the LUTs that makes them glitchless is the same.

This comes from using transmission gates as the logic elements in the
multiplexer that selects the output from the LUT. The logic controlling the
pass transistors is timed to break before make, and the capacitance on the
output line is enough to hold a value until the next transmission gate is
turned on. So if both driven levels are the same there is no glitch.

>> You then ask "Will the AND gate implementation in an FPGA do that?" I
>> don't understand what you are asking.
>
> If an actual AND gate has an input that's 0, the output will be 0
> regards of the other input. If the other input is 0, is 1, is a clock,
> or even if it's somewhere in-between because the driver has gone
> metastable, the output of the AND gate will be 0. My question was, can
> I depend on that from a synthesized AND gate in an FPGA?
>
>> Are you saying that enable is not asserted *until* at least 150 ns after
>> interrupt_check rising edge? Or that enable is *held* for 150 ns after
>> the interrupt_check rising edge?
>
> The former; enable is not asserted until at least 150ns after
> interrupt_check rising edge.
>
>> Is the idea that interrupt_detect is not considered by other logic until
>> interrupt_ack is asserted? Or is interrupt_detect supposed to be
>> conditioned by enable rather than interrupt_request?
>
> Both, I think. Well, interrupt_detect is not used by any other logic,
> only interrupt_ack. interrupt_detect is internal to just what you see
> there while interrupt_ack is the signal that goes on to the rest of the
> logic.
>
> To see more of the context of this question, I'm working on bus
> arbitration circuitry for the Unibus and QBUS. I'm writing up what I'm
> doing in a short paper that you can find at the URL below. The paper
> isn't done yet but it's starting to get closer.
>
> http://www.froghouse.org/~dab/papers/bus-arbitration/ba.html

I'll pass on reading the paper just now, but keep posting your progress.
I'd like to catch up on this effort at some point. I used to have an
LSI-11, but at some point someone convinced me to toss it out along with
the 8 inch floppy drives, etc.

--
Rick C

Article: 159901

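For readers skimming the thread, here is the fragment above wrapped into a
minimal, self-contained module (a sketch only; the module and port names are
illustrative, not taken from the actual QSIC sources):

    // Sketch of the circuit under discussion. The final AND is expected to
    // map into a single LUT; the question above is whether that LUT can
    // glitch while interrupt_detect is stable and only enable changes.
    module int_grant
      (input  wire interrupt_request,  // level request from the device logic
       input  wire interrupt_check,    // strobe that samples the request
       input  wire enable,             // asserted >= 150 ns after the strobe
       output wire interrupt_ack);

       reg interrupt_detect = 1'b0;

       // interrupt_check is used directly as a clock here; see the
       // follow-up posts for why a free-running sampling clock is usually
       // preferred in an FPGA.
       always @(posedge interrupt_check)
         interrupt_detect <= interrupt_request;

       assign interrupt_ack = interrupt_detect & enable;

    endmodule
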
On 4/24/2017 10:04 AM, David Bridgham wrote:
> On Mon, 24 Apr 2017 08:30:06 -0400, Gabor wrote:
>
>> On the other hand, FPGAs are not really designed to do asynchronous
>> sequential logic well. What you're trying to do is typically done
>> using a high-speed free-running clock to sample the input signals and
>> then make decisions synchronously.
>
> I've read this before, that FPGAs are not really suitable for
> asynchronous logic. And yet, if the gates are glitch-free then I'm not
> seeing the problem with doing what I'm suggesting here. Converting the
> input signals to synchronous seems like a bunch of extra work for
> something that ought to be fairly straightforward.
>
> In the larger picture, I do have a desire someday to play around with
> asynchronous designs. Not this project with the QBUS but a future
> project, possibly even implementing an entire processor asynchronously.
> Being able to use FPGAs would sure be easier than having to get out the
> wire-wrap gun.

It's not so much that LUTs can't be used for asynchronous designs; rather,
the tools don't lend themselves to asynchronous design analysis. In fact,
the tools can optimize away some of the logic unless you know how to
prevent that.

If you want to code at the level of LUTs, then you can do whatever you want
with FPGAs. You can even use HDL, it's just a lot harder than synchronous
design, a LOT harder.

--
Rick C

Article: 159902

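As a rough illustration of "knowing how to prevent that": asynchronous
structures are usually marked so the optimizer leaves them alone. The sketch
below uses Xilinx-style attributes on a cross-coupled NAND latch; the
attribute spellings are an assumption to check against each tool's manual,
since other vendors use their own names:

    // Sketch of protecting a hand-built asynchronous structure (here a
    // cross-coupled NAND latch) from the optimizer. keep/dont_touch are
    // the Xilinx-style spellings; Synplify uses syn_keep / syn_preserve
    // and Quartus has its own equivalents, so check the tool manual
    // rather than taking these names as definitive.
    module sr_nand_latch
      (input  wire set_n,   // active-low set
       input  wire rst_n,   // active-low reset
       output wire q,
       output wire q_n);

       (* keep = "true", dont_touch = "true" *) wire q_i, q_n_i;

       // Written structurally so each gate stays its own LUT and the
       // feedback loop survives synthesis (expect combinational-loop
       // warnings; they are inherent to the element).
       assign q_i   = ~(set_n & q_n_i);
       assign q_n_i = ~(rst_n & q_i);

       assign q   = q_i;
       assign q_n = q_n_i;

    endmodule
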
Gabor <nospam@nospam.com> wrote:
...
> AND gates in an FPGA work like real AND gates. The LUTs are designed
> not to glitch when inputs change, but the output should remain the same.

As the claim about glitch-free properties of FPGA LUTs has surfaced here
repeatedly, could you cite some vendor-specific documents where you draw
such conclusions from?

Marko

Article: 159903

On 4/24/2017 12:30 PM, Marko Zec wrote:
> Gabor <nospam@nospam.com> wrote:
> ...
>> AND gates in an FPGA work like real AND gates. The LUTs are designed
>> not to glitch when inputs change, but the output should remain the same.
>
> As the claim about glitch-free properties of FPGA LUTs has surfaced
> here repeatedly, could you cite some vendor-specific documents where
> you draw such conclusions from?

This doesn't seem to be documented in any data sheet or app note that I've
found, but you can get verbal confirmation. Here's a link where a Xilinx
representative confirms it in a forum conversation, twice.

https://forums.xilinx.com/t5/7-Series-FPGAs/data-strobe-decoder-combinatorial-loops-R-S-latches-and-LUT/td-p/567448

The reason they are glitch free is the use of pass transistors as the
muxing elements. A 4-input LUT has 16 memory elements and a number of
series pass transistors. Only one path through the pass transistors is
turned on at a time. The logic is timed so they are break before make.
This leaves the output of the switches in a high-impedance state briefly
during a change, and the parasitic capacitance is enough to hold the logic
state. It's that simple.

--
Rick C

Article: 159904

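To make the structure concrete, a LUT is functionally just a 16:1 mux over
its configuration bits. The behavioral sketch below shows only the logic
function; it cannot capture the pass-gate, break-before-make behavior that
actually provides the glitch-free guarantee described above:

    // Purely behavioral picture of a 4-input LUT as a 16:1 mux over its
    // stored configuration bits. This models the logic function only.
    module lut4_model
      #(parameter [15:0] INIT = 16'h8000)   // example: a 4-input AND
       (input  wire i3, i2, i1, i0,
        output wire o);

       wire [15:0] init_bits = INIT;   // copy so the bits can be indexed

       // The inputs select one of 16 stored bits. When only one input
       // changes, the selection moves between two cells; if both cells
       // hold the same value, the output has no reason to change.
       assign o = init_bits[{i3, i2, i1, i0}];

    endmodule
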
On Monday, 4/24/2017 8:46 AM, David Bridgham wrote: > On Sun, 23 Apr 2017 15:51:33 -0400, rickman wrote: > >> I'm sorry but I can't picture the timing from your description. You >> have two circuits with one input in common. > > One input in common? Well crap, I screwed up the verilog. Try this > instead: > > always @(posedge interrupt_check) interrupt_detect <= interrupt_request; > > assign interrupt_ack = interrupt_detect & enable; > >> You then ask "Will the AND gate implementation in an FPGA do that?" I >> don't understand what you are asking. > > If an actual AND gate has an input that's 0, the output will be 0 > regards of the other input. If the other input is 0, is 1, is a clock, > or even if it's somewhere in-between because the driver has gone > metastable, the output of the AND gate will be 0. My question was, can > I depend on that from a synthesized AND gate in an FPGA? > >> Are you saying that enable is not asserted *until* at least 150 ns after >> interrupt_check rising edge? Or that enable is *held* for 150 ns after >> the interrupt_check rising edge? > > The former; enable is not asserted until at least 150ns after > interrupt_check rising edge. > >> Is the idea that interrupt_detect is not considered by other logic until >> interrupt_ack is asserted? Or is interrupt_detect supposed to be >> conditioned by enable rather than interrupt_request? > > Both, I think. Well, interrupt_detect is not used by any other logic, > only interrupt_ack. interrupt_detect is internal to just what you see > there while interrupt_ack is the signal that goes on to the rest of the > logic. > > To see more of the context of this question, I'm working on bus > arbitration circuitry for the Unibus and QBUS. I'm writing up what I'm > doing in a short paper that you can find at the URL below. The paper > isn't done yet but it's starting to get closer. > > http://www.froghouse.org/~dab/papers/bus-arbitration/ba.html > I did a Unibus design back in the days of one-time programmable (fusible-link) PALs. I remember it was a bitch without using a clock, but I got away without one by using at least one delay-line. One thing I recall is that boards that actually plugged into the bus had six connector sections, where the A and B sections were not used. The pinout of the other sections was in an internal DEC document I got through some sort of third-hand source. If you look at the spec in DEC's external documentation, it only describes the A and B sections. However you quickly see that these can only be used to bring bus signals in and out of a chassis, not to go from board to board. That's because they only have a single pin for each daisy-chain signal. The Qbus was much better documented. -- GaborArticle: 159905
> always @(posedge interrupt_check) interrupt_detect <= interrupt_request;

This is not a direct answer to your question, but just don't follow
"@(posedge ..." with something that isn't a clock. This isn't olden times.

Article: 159906

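The synchronous alternative being suggested looks roughly like this (a
sketch only; the clock name and synchronizer depth are assumptions, and
enable and interrupt_request would also need synchronizing if they are
asynchronous to clk):

    // Sample the strobe with a free-running clock and act on its rising
    // edge, instead of using the strobe itself as a clock.
    module int_grant_sync
      (input  wire clk,                // free-running FPGA clock
       input  wire interrupt_request,
       input  wire interrupt_check,    // asynchronous bus strobe
       input  wire enable,
       output wire interrupt_ack);

       // Two flip-flops to synchronize, a third to remember the old level.
       reg [2:0] check_sync = 3'b000;
       always @(posedge clk)
         check_sync <= {check_sync[1:0], interrupt_check};

       wire check_rise = check_sync[1] & ~check_sync[2];

       reg interrupt_detect = 1'b0;
       always @(posedge clk)
         if (check_rise)
           interrupt_detect <= interrupt_request;

       assign interrupt_ack = interrupt_detect & enable;

    endmodule
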
On Mon, 24 Apr 2017 12:25:46 -0400, rickman wrote:

> It's not so much that LUTs can't be used for asynchronous designs,
> rather the tools don't lend themselves to asynchronous design analysis.
> In fact, the tools can optimize away some of the logic unless you know
> how to prevent that.

Oh, that's interesting. Part of my plan for doing async design is to
develop my own tools as well. I thought I couldn't use FPGAs to realize
my designs but if that's not the case that'd be really useful.

> If you want to code at the level of LUTs, then you can do whatever you
> want with FPGAs. You can even use HDL, it's just a lot harder than
> synchronous design, a LOT harder.

At some point, then, I'll have to learn about how one codes at the level
of LUTs. That project is for the future though.

Article: 159907

On Mon, 24 Apr 2017 16:48:47 +0100, Jan Coombs wrote:

> I'd also like to prototype a fully asynchronous processor in
> an FPGA. The Microsemi (ex Actel) Igloo/ProASIC3 parts have no
> LUTs. An element of the fine grained fabric can either be a
> latch or the equivalent of a LUT3. But, you may have to
> hand-wire the input delays if timing is really critical?

Part of my interest in async is the idea that the design can be (quasi)
delay-insensitive.

> It seems to me that the 2 wire 4 state logic should be fastest,
> because only one of the wires needs to make a single transition
> to indicate the next data phase.

Yeah, the dual-rail encoding seems the best match for normal digital
logic. I've seen references to one-of-four as well but I don't
understand why it's better.

> On-chip RAM would seem to be a problem though - any ideas?.

One of my thoughts is that there are and will always be synchronous
parts and systems that this would need to interface to. I want to
make sure that the async dev tools do a good job of handling the
transition between the two worlds.

Article: 159908

On Mon, 24 Apr 2017 12:22:37 -0400, rickman wrote:

> I'll pass on reading the paper just now, but keep posting your progress.
> I'd like to catch up on this effort at some point.

Happy to. The project itself lives on Github (look for dabridgham/QSIC)
and I occasionally post updates to the cctalk mailing list.

Article: 159909

On 4/24/2017 3:56 PM, David Bridgham wrote:
> On Mon, 24 Apr 2017 16:48:47 +0100, Jan Coombs wrote:
>
>> I'd also like to prototype a fully asynchronous processor in
>> an FPGA. The Microsemi (ex Actel) Igloo/ProASIC3 parts have no
>> LUTs. An element of the fine grained fabric can either be a
>> latch or the equivalent of a LUT3. But, you may have to
>> hand-wire the input delays if timing is really critical?
>
> Part of my interest in async is the idea that the design can be
> (quasi) delay-insensitive.

I am not current on async designs, but I believe at least some async
designs are self timed. This actually puts more rigorous requirements on
delays as the timing path has to be slower than the logic path.

But someone pointed me to info on a method of using the logic to provide
the timing signal. It makes the logic much larger though. Not sure what it
does to the real world, useful timing.

>> It seems to me that the 2 wire 4 state logic should be fastest,
>> because only one of the wires needs to make a single transition
>> to indicate the next data phase.
>
> Yeah, the dual-rail encoding seems the best match for normal digital
> logic. I've seen references to one-of-four as well but I don't
> understand why it's better.

I don't think you'll be able to do any of that in FPGAs, at least not soon.

>> On-chip RAM would seem to be a problem though - any ideas?.
>
> One of my thoughts is that there are and will always be synchronous
> parts and systems that this would need to interface to. I want to
> make sure that the async dev tools do a good job of handling the
> transition between the two worlds.

I believe the reason they started using sync RAMs in FPGAs was more about
the user interface than it was about the technology. Users were abusing
the timing of async RAMs, so they gave them a sync interface which is
harder to abuse. I think distributed RAM is still async on read, and that
should be good enough since even async RAM is really synchronous on writes,
just not edge driven.

--
Rick C

Article: 159910

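To show what "async on read" looks like in practice, here is a minimal
inference sketch (whether it actually maps to distributed/LUT RAM depends on
the vendor's inference rules, so treat it as a template to check against the
synthesis report; names and widths are illustrative):

    // Minimal RAM coded for LUT (distributed) RAM: synchronous write,
    // asynchronous (combinational) read.
    module dist_ram_async_read
      #(parameter WIDTH = 8,
        parameter ABITS = 4)
       (input  wire             clk,
        input  wire             we,
        input  wire [ABITS-1:0] waddr,
        input  wire [WIDTH-1:0] wdata,
        input  wire [ABITS-1:0] raddr,
        output wire [WIDTH-1:0] rdata);

       reg [WIDTH-1:0] mem [0:(1<<ABITS)-1];

       // Write is clocked, as it is in the underlying LUT-RAM cell.
       always @(posedge clk)
         if (we)
           mem[waddr] <= wdata;

       // Read is purely combinational on the read address.
       assign rdata = mem[raddr];

    endmodule
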
On 4/24/2017 3:51 PM, David Bridgham wrote:
> On Mon, 24 Apr 2017 12:25:46 -0400, rickman wrote:
>
>> It's not so much that LUTs can't be used for asynchronous designs,
>> rather the tools don't lend themselves to asynchronous design analysis.
>> In fact, the tools can optimize away some of the logic unless you know
>> how to prevent that.
>
> Oh, that's interesting. Part of my plan for doing async design is to
> develop my own tools as well. I thought I couldn't use FPGAs to realize
> my designs but if that's not the case that'd be really useful.
>
>> If you want to code at the level of LUTs, then you can do whatever you
>> want with FPGAs. You can even use HDL, it's just a lot harder than
>> synchronous design, a LOT harder.
>
> At some point, then, I'll have to learn about how one codes at the level
> of LUTs. That project is for the future though.

One way is to write behavioral HDL code and apply attributes to keep
signals as wires and to not combine with other logic. In some FPGA
families you can instantiate LUTs. I haven't done any of this in decades,
so I don't know anything about it anymore. Someone was posting about this
not too long ago, maybe earlier this year.

--
Rick C

Article: 159911

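For the "instantiate LUTs" route, the sketch below shows the Xilinx-style
primitive; the primitive name, ports, and INIT convention are assumptions to
check against the vendor's libraries guide, and other families use different
primitives:

    // Direct LUT instantiation, Xilinx style. INIT is the truth table:
    // 16'h8000 has a 1 only in the all-inputs-high row, so with I2/I3 tied
    // high this LUT computes I0 AND I1 (the interrupt_detect & enable term
    // from earlier in the thread).
    module lut_level_and
      (input  wire interrupt_detect,
       input  wire enable,
       output wire interrupt_ack);

       LUT4 #(
          .INIT(16'h8000)
       ) u_and (
          .O (interrupt_ack),
          .I0(interrupt_detect),
          .I1(enable),
          .I2(1'b1),
          .I3(1'b1)
       );

    endmodule
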
> Oh, that's interesting. Part of my plan for doing async design is to
> develop my own tools as well. I thought I couldn't use FPGAs to realize
> my designs but if that's not the case that'd be really useful.
>
> > If you want to code at the level of LUTs, then you can do whatever you
> > want with FPGAs. You can even use HDL, it's just a lot harder than
> > synchronous design, a LOT harder.
>
> At some point, then, I'll have to learn about how one codes at the level
> of LUTs. That project is for the future though.

I had to instantiate LUTs recently (as a last resort) and it was pretty
straightforward. But why would you want to do this? Just resync your async
signals to a clock. Writing your own tools is probably quixotic.

Article: 159912

I've been researching cyclotomic FFTs. There is a lot of literature
(Fedorenko, Trifonov, Costa, etc.) about using this technique for
calculating the syndrome and doing the Chien search in Reed-Solomon
decoders. I'd like to know if anybody has actually used this in hardware.
The literature makes bold claims about massive 100x efficiency gains, but
it seems like it's only useful in software. As far as I can tell, the
advantage in hardware for a highly-parallel decoder (e.g., 64 symbols per
cycle) over the conventional Horner method is nonexistent.

Article: 159913

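For anyone comparing, the conventional Horner-style syndrome cell referred
to above looks roughly like this: a sketch only, where the field polynomial,
symbol width, and port names are assumptions, and a real decoder would
instantiate one such cell per syndrome and unroll it for multiple symbols
per cycle:

    // Serial (one symbol per clock) Horner evaluation of the first RS
    // syndrome, S1 = r(alpha), over GF(2^8) with the common polynomial
    // x^8 + x^4 + x^3 + x^2 + 1. Horner step: S <= S*alpha + r_j, with
    // symbols fed highest-order coefficient first.
    module rs_syndrome1
      (input  wire       clk,
       input  wire       start,    // clears the accumulator for a new block
       input  wire       valid,    // a received symbol is present this cycle
       input  wire [7:0] r_sym,    // received symbols, highest-order first
       output reg  [7:0] syndrome);

       // Multiply by alpha (i.e., by x) in GF(2^8): shift and conditional
       // XOR with the low bits of the field polynomial (0x1D).
       function [7:0] gf_mul_alpha(input [7:0] d);
          gf_mul_alpha = d[7] ? ((d << 1) ^ 8'h1d) : (d << 1);
       endfunction

       always @(posedge clk)
         if (start)
           syndrome <= 8'h00;
         else if (valid)
           syndrome <= gf_mul_alpha(syndrome) ^ r_sym;

    endmodule
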
On Mon, 24 Apr 2017 13:03:17 -0400, rickman wrote:

> On 4/24/2017 12:30 PM, Marko Zec wrote:
>> Gabor <nospam@nospam.com> wrote:
>> ...
>>> AND gates in an FPGA work like real AND gates. The LUTs are designed
>>> not to glitch when inputs change, but the output should remain the
>>> same.
>>
>> As the claim about glitch-free properties of FPGA LUTs has surfaced
>> here repeatedly, could you cite some vendor-specific documents where
>> you draw such conclusions from?
>
> This doesn't seem to be documented in any data sheet or app note that
> I've found, but you can get verbal confirmation. Here's a link where a
> Xilinx representative confirms it in a forum conversation, twice.
>
> https://forums.xilinx.com/t5/7-Series-FPGAs/data-strobe-decoder-combinatorial-loops-R-S-latches-and-LUT/td-p/567448
>
> The reason they are glitch free is because of the use of pass
> transistors as the muxing elements. A 4 input LUT has 16 memory
> elements and a number of series pass transistors. Only one path through
> the pass transistors is turned on at a time. The logic is timed so they
> are break before make. This leaves the output of the switches in a high
> impedance state briefly during a change and the parasitic capacitance is
> enough to hold the logic state. It's that simple.

It's in XAPP024.pdf, which doesn't seem to be on Xilinx's web site. This is
for the XC3000 series. I understand that more recent series (about 10
generations now!) behave in a similar manner, but you won't find any
documentation saying that it's so.

Here's something written by Peter Alfke in this thread from 2001:
https://groups.google.com/d/msg/comp.arch.fpga/F6AYhYzDmi0/w14s0Cuw-x8J

"Here is what I wrote ten years ago ( you can find it, among other places,
in the 1994 data book, page 9-5:

"Function Generator Avoids Glitches
...
Note that there can never be a decoding glitch when only one select input
changes. Even a non-overlapping decoder cannot generate a glitch problem,
since the node capacitance would retain the previous logic level...
When more than one input changes "simultaneously", the user should analyze
the logic output for any intermediate code. If any such code produces a
different result, the user must assume that such a glitch might occur, and
must make the system design immune to it...
If none of the address codes contained in the "simultaneously" changing
inputs produces a different output, the user can be sure that there will be
no glitch...."

This still applies today.
Peter Alfke, Xilinx Applications
============================================="

Regards,
Allan

Article: 159914

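As a concrete reading of Alfke's rule (an illustrative example, not taken
from any vendor document):

    // Case 1: y_and = a & b with a held at 0 while b changes. The two LUT
    // locations selected ({a,b} = 00 and 01) both hold 0, so no glitch.
    //
    // Case 2: y_xor = a ^ b with a and b changing "simultaneously" from 01
    // to 10. The output is 1 before and after, but the intermediate codes
    // 00 and 11 both give 0, so a glitch must be assumed and designed
    // around.
    module glitch_examples
      (input  wire a, b,
       output wire y_and,   // safe while one input is held constant
       output wire y_xor);  // may glitch on simultaneous input changes

       assign y_and = a & b;
       assign y_xor = a ^ b;

    endmodule
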
On Mon, 24 Apr 2017 16:54:55 -0400, rickman wrote:

> I am not current on async designs, but I believe at least some async
> designs are self timed. This actually puts more rigorous requirements
> on delays as the timing path has to be slower than the logic path.

There are two general ways I know to provide async timing. One is to have
timing circuits run in parallel with the logic circuits and this implies
the timing requirements you mention here.

> But someone pointed me to info on a method of using the logic to
> provide the timing signal. It makes the logic much larger though.
> Not sure what it does to the real world, useful timing.

The other way to provide timing is to have the logic signals themselves
come with their own, inherent validity signal. Delay insensitive (or
quasi-delay insensitive) is the name for this idea and the dual-rail
encoding that I referred to is one way to implement that. It definitely
results in larger logic circuits but proponents argue that that's balanced
by no longer needing all the complication of carefully tuned H-trees for
clock distribution.

Obviously FPGAs have the clock distribution networks already, so using an
FPGA to implement async design doesn't see any benefit on that side of
things. I'm interested in FPGAs for this only because they're a really
convenient source of programmable logic. I can swing programming an FPGA;
I can't manage doing my own custom VLSI. FPGAs could let me experiment
with the idea, which is really my intermediate goal.

Article: 159915

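For readers unfamiliar with the building blocks: the usual join element in
dual-rail/QDI pipelines is the Muller C-element, which can be described as
below. This is a sketch only, and it is exactly the kind of feedback
structure the synthesis tools must be told to leave alone, as discussed
earlier in the thread:

    // Muller C-element: output goes high when both inputs are high, low
    // when both are low, and otherwise holds its previous value. In a real
    // FPGA flow the feedback loop must be protected from optimization and
    // its timing analyzed by hand.
    module c_element
      (input  wire a,
       input  wire b,
       output wire c);

       (* keep = "true" *) wire c_int;

       // Majority function of a, b and the fed-back output.
       assign c_int = (a & b) | (c_int & (a | b));

       assign c = c_int;

    endmodule
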
On Mon, 24 Apr 2017 16:36:15 -0700, Kevin Neilson wrote:

> Writing your own tools is probably quixotic.

No question that my own design tools (also a simulator) are quixotic.
Gotta dream big.

Article: 159916

> The other way to provide timing is to have the logic signals themselves
> come with their own, inherent validity signal. Delay insensitive (or
> quasi-delay insensitive) is the name for this idea and the dual-rail
> encoding that I referred to is one way to implement that.

I don't know what you are trying to do, but that description almost sounds
like what Achronix was doing when they started out. You'll observe that,
despite their name, they eventually switched to synchronous logic.

Article: 159917

On 4/26/2017 4:38 PM, Kevin Neilson wrote: >> The other way to provide timing is to have the logic signals >> themselves come with their own, inherent validity signal. Delay >> insensitive (or quasi-delay insensitive) is the name for this idea >> and the dual-rail encoding that I referred to is one way to >> implement that. >> > I don't know what you are trying to do but that description almost > sounds like what Achronix was doing when they started out. You'll > observe, that despite their name, they eventually switched to > synchronous logic. When push came to shove the async logic aspect was not as important as it was to just make fast FPGAs which Achronix did by getting a leg up on the major FPGA players by using Intel's 22 nm FINFET technology. The part I don't get is that Intel has bought Altera, so the fab service contract with Achronix is now competing with another Intel arm (pun intended). -- Rick CArticle: 159918
On Wed, 26 Apr 2017 13:38:11 -0700, Kevin Neilson wrote: > I don't know what you are trying to do but that description almost > sounds like what Achronix was doing when they started out. You'll > observe, that despite their name, they eventually switched to > synchronous logic. I'm not familiar with Achronix but I know of one other asynchronous FPGA project that ended up being flushed. Certainly this line of argument is similar to a friend's who keeps pointing out to me that, while async was more common back in the day, the entire industry has gone synchronous since sometime back in the 1960s and so there must be a reason. Yep, probably. What I'm thinking of is more along the lines of the Balsa synthesis system from University of Manchester. http://apt.cs.manchester.ac.uk/ftp/pub/apt/papers/FDL00.pdf http://apt.cs.manchester.ac.uk/ftp/pub/apt/balsa/Article: 159919
> more common back in the day, the entire industry has gone synchronous > since sometime back in the 1960s and so there must be a reason. Yep, > probably. > > What I'm thinking of is more along the lines of the Balsa synthesis > system from University of Manchester. OK, as long as you know what you're getting into. If you want to publish papers, that's great, but if you have something to deliver on a deadline, I'd avoid this whole line of thought...Article: 159920
I don't recall where, but there was a conversation recently about using the
RISC-V in FPGAs. Thought I'd pass on the link.

https://www.microsemi.com/products/fpga-soc/technology-solutions/embedded-processing/risc-v

--
Rick C

Article: 159921

On Fri, 28 Apr 2017 14:54:06 -0400, rickman wrote: > I don't recall where, but there was a conversation recently about using > the RISC-V in FPGAs. Thought I'd pass on the link. > > https://www.microsemi.com/products/fpga-soc/technology-solutions/ embedded-processing/risc-v Ooh, cool. I'm going to have to keep my eye on the RISC-V. -- Tim Wescott Control systems, embedded software and circuit design I'm looking for work! See my website if you're interested http://www.wescottdesign.comArticle: 159922
On 4/28/2017 9:56 PM, Tim Wescott wrote: > On Fri, 28 Apr 2017 14:54:06 -0400, rickman wrote: > >> I don't recall where, but there was a conversation recently about using >> the RISC-V in FPGAs. Thought I'd pass on the link. >> >> https://www.microsemi.com/products/fpga-soc/technology-solutions/ > embedded-processing/risc-v > > Ooh, cool. I'm going to have to keep my eye on the RISC-V. I don't know how small the RISC-V can be made. I know there is a version designed in an ASIC that can compete with the ARM CPUs and there are more than one version for FPGAs. I would hope they had a version similar to the ARM CM-1 which is specifically targeted to programmable logic and not overly large. I haven't seen any indication this exists, but it is hard to find this type of info. Or I'm just not looking in the right places. -- Rick CArticle: 159923
On Sat, 29 Apr 2017 23:04:28 -0400, rickman wrote: > On 4/28/2017 9:56 PM, Tim Wescott wrote: >> On Fri, 28 Apr 2017 14:54:06 -0400, rickman wrote: >> >>> I don't recall where, but there was a conversation recently about >>> using the RISC-V in FPGAs. Thought I'd pass on the link. >>> >>> https://www.microsemi.com/products/fpga-soc/technology-solutions/ >> embedded-processing/risc-v >> >> Ooh, cool. I'm going to have to keep my eye on the RISC-V. > > I don't know how small the RISC-V can be made. I know there is a > version designed in an ASIC that can compete with the ARM CPUs and there > are more than one version for FPGAs. I would hope they had a version > similar to the ARM CM-1 which is specifically targeted to programmable > logic and not overly large. I haven't seen any indication this exists, > but it is hard to find this type of info. Or I'm just not looking in > the right places. They claim to have a minimal variant of the instruction set, which would presumably be an FPGA-ish sort of thing. But I only read about 20 pages into the instruction set document. It's OPEN SOURCE! You could get cracking and make one!! (Well, so could I, theoretically, if I were insane). -- Tim Wescott Wescott Design Services http://www.wescottdesign.com I'm looking for work -- see my website!Article: 159924
On Sat, 29 Apr 2017 23:04:28 -0400, rickman wrote: > On 4/28/2017 9:56 PM, Tim Wescott wrote: >> On Fri, 28 Apr 2017 14:54:06 -0400, rickman wrote: >> >>> I don't recall where, but there was a conversation recently about >>> using the RISC-V in FPGAs. Thought I'd pass on the link. >>> >>> https://www.microsemi.com/products/fpga-soc/technology-solutions/ >> embedded-processing/risc-v >> >> Ooh, cool. I'm going to have to keep my eye on the RISC-V. > > I don't know how small the RISC-V can be made. I know there is a > version designed in an ASIC that can compete with the ARM CPUs and there > are more than one version for FPGAs. I would hope they had a version > similar to the ARM CM-1 which is specifically targeted to programmable > logic and not overly large. I haven't seen any indication this exists, > but it is hard to find this type of info. Or I'm just not looking in > the right places. Googling RISC-V FPGA implementation got lots of hits. -- Tim Wescott Wescott Design Services http://www.wescottdesign.com I'm looking for work -- see my website!