On Monday, May 30, 2016 at 4:49:05 PM UTC-4, rickman wrote:
> On 5/30/2016 1:02 PM, Cecil Bayona wrote:
> > On 5/30/2016 11:41 AM, Rick C. Hodgin wrote:
> >> On Monday, May 30, 2016 at 12:03:28 PM UTC-4, Rick C. Hodgin wrote:
> >>> On Monday, May 30, 2016 at 11:54:11 AM UTC-4, Rick C. Hodgin wrote:
> >>>>
> >>>> https://github.com/DRuffer/ep8080/tree/master/ep80
> >>>
> >>> I may be missing an obvious link, but if anybody knows where I can get
> >>> the PPT files used in these presentations, please post a link:
> >>>
> >>> ep8080 architecture morning sessions:
> >>> Feb.27.2016: https://www.youtube.com/watch?v=-DYKuBmSGaE
> >>> Mar.26.2016: https://www.youtube.com/watch?v=XO0VqKhsPQE
> >>> Apr.23.2016: https://www.youtube.com/watch?v=s9cnnPiQtn8
> >>
> >> Also, if anyone has a block diagram or logical component layout of some
> >> kind, one which shows the internal components and how they are all hooked
> >> up through this ep8080 design, please post that info as well.
> >>
> >> Best regards,
> >> Rick C. Hodgin
> >
> > I would also be interested in those items. There are several nice-looking
> > soft CPUs available for use with Forth; the common thread among them
> > is lack of documentation.
>
> The best way to learn about the structure of the ep8080 would be to draw
> a block diagram from the VHDL code.

That's not good advice for everyone. I have dyslexia, for example, and have a very difficult time comprehending written text. And since I don't know VHDL, I was hoping to go the other way from your suggestion: to be able to look at a block diagram and understand the connectivity in that design / drawing form, and then look at the VHDL code and teach myself its verbose form that way.

Best regards,
Rick C. Hodgin

Article: 158976
On Monday, May 30, 2016 at 5:19:36 PM UTC-5, Rick C. Hodgin wrote:
> [full quote of the earlier exchange trimmed; see the previous article]
>
> That's not good advice for everyone. I have dyslexia, for example, and
> have a very difficult time comprehending written text. And since I don't
> know VHDL, I was hoping to go the other way from your suggestion, to be
> able to look at a block diagram and understand the connectivity in that
> design / drawing form, and then look at the VHDL code and teach myself
> its verbose form that way.
Am not the world's fastest or best VHDL coder; this is what I do:

Write a description of what you want to do.
  In the case of a soft core, include the instruction set & formats, rationale, implementation decisions, ...

Do a spreadsheet with one or more rows for each instruction.
  Create columns for anything involved in the implementation:
  mnemonics, their binary encodings, registers accessed, registers modified, calculations, ...
  For instructions with multiple clocks, either a set of columns for each clock or multiple rows.

Choose some naming scheme for the signals and registers.

Write the VHDL.
  You can optimize it later (merging adders that have similar inputs optimizes well).
  Only write what you are ready to test (using a short program).
  I now prefer not to do a data flow diagram (tends to result in too many signal names).
  Track resource utilization as instructions are coded.

Test the VHDL both in simulation and on the FPGA evaluation board.

Jim Brakefield

Article: 158977
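To make the spreadsheet-to-VHDL step concrete, here is a minimal sketch of how one row per instruction might turn into code. The mnemonics, opcode values, and signal names are hypothetical illustrations, not taken from the ep8080 source:

```vhdl
-- Each spreadsheet row: mnemonic | opcode | registers read | registers
-- modified | effect.  The case statement below is one row per "when".
process (clk)
begin
  if rising_edge(clk) then
    case opcode is
      when "0001" =>            -- ADD_R (hypothetical): reads rs, writes acc
        acc <= acc + rs;
      when "0010" =>            -- LD_IMM (hypothetical): writes acc
        acc <= imm;
      when others =>            -- NOP and rows not yet implemented
        null;
    end case;
  end if;
end process;
```

Writing only the rows you are ready to test, as Jim suggests, keeps the `others` branch as a safe placeholder for everything not yet coded.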
One thing I haven't seen mentioned is Jan Gray's articles on the xr16 he designed, implemented, and wrote a C compiler for. It's not the most up-to-date design, in that it's designed for a (by modern standards) very old processor, but it's a worked example of designing a CPU, ultimately an SoC, and even the software development environment for it. It's written in Verilog.

Start at http://www.fpgacpu.org/xsoc/xr16.html and look for the Verilog version (IIRC, the original was a schematic design).

Cheers
Simon

Article: 158978
On 5/30/2016 9:34 PM, jim.brakefield@ieee.org wrote:
>
> Write the VHDL.
> You can optimize it later
> (merging adders that have similar inputs optimizes well)
> And only write what you are ready to test (using a short program)
> Now prefer not to do a data flow diagram
> (tends to result in too many signal names)
> Track resource utilization as instructions are coded
>
> Test the VHDL both in simulation and on the FPGA evaluation board.

You forgot timing analysis. I prepared a presentation on test benches for Dr. Ting's workshop, but I wish I had included a mention of static timing analysis. I believe Ting talked another time about the design not running at 50 MHz as he had hoped, but at 25. This is an issue that could be explored by a static timing analysis most efficiently. Trying to analyze timing paths by post-route simulation is very labor intensive. Trying to debug anything in a real chip is even harder, and should not be done until simulation and static timing analysis have wrung out the design as much as possible. That is the background that introduces the need for good test benches.

-- 
Rick C

Article: 158979
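A minimal skeleton of the kind of test bench being advocated here; the entity name `cpu` and its port list are hypothetical stand-ins for whatever design is under test:

```vhdl
-- Minimal self-checking VHDL test bench skeleton (DUT name hypothetical).
library ieee;
use ieee.std_logic_1164.all;

entity tb_cpu is end entity;

architecture sim of tb_cpu is
  signal clk : std_logic := '0';
  signal rst : std_logic := '1';
begin
  clk <= not clk after 10 ns;   -- 50 MHz stimulus clock
  rst <= '0' after 100 ns;      -- release reset after a few cycles

  dut : entity work.cpu port map (clk => clk, rst => rst);

  check : process
  begin
    wait for 1 us;
    -- Real benches assert on observable DUT state here; a failing
    -- assertion stops the simulator with a report message.
    assert false report "end of simulation" severity failure;
    wait;
  end process;
end architecture;
```

Note that a bench like this only verifies function; the static timing analysis mentioned above is a separate tool report, not something simulation replaces.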
On 5/30/2016 11:34 PM, rickman wrote:
> [Jim's design-flow list trimmed]
>
> You forgot timing analysis. I prepared a presentation on test benches
> for Dr. Ting's workshop, but I wish I had included a mention of static
> timing analysis. I believe Ting talked another time about the design
> not running at 50 MHz as he had hoped but ran at 25. [...] That is the
> background that introduces the need for good test benches.

When I set up the project and compiled it, it gave warnings about the clock possibly being delayed in some sections of the circuit, so it's likely that he has timing issues.

-- 
Cecil - k5nwa

Article: 158980
On 5/31/2016 12:41 AM, Cecil Bayona wrote:
> [earlier quoting trimmed]
>
> When I set up the project and compiled it, it gave warnings about the
> clock possibly being delayed in some sections of the circuit, so it's
> likely that he has timing issues.

I looked and don't see any real problems with the clock circuit. In VHDL, a buffered clock results in a delta delay, which can mess up a simulation. It won't hurt a real circuit, since the buffer won't be implemented in logic. It can *really* mess up a simulation though. There is also an inversion of the clock which I don't understand, but again, it likely also won't be implemented in an FPGA, since they typically have hardware to select the clock edge.

-- 
Rick C

Article: 158981
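The delta-delay hazard described above can be shown with a tiny fragment (signal names invented for illustration):

```vhdl
-- clk2 is clk delayed by one simulation delta.  In hardware it is the
-- same wire, but in simulation a register clocked by clk2 samples the
-- *already updated* value of 'a' driven by the clk-clocked register:
-- a race that exists only in the simulator, not in the silicon.
clk2 <= clk;

src : process (clk)
begin
  if rising_edge(clk) then a <= d; end if;
end process;

dst : process (clk2)
begin
  if rising_edge(clk2) then b <= a; end if;  -- sees new 'a' (delta race)
end process;
```

This is why buffering a clock through a plain signal assignment in VHDL is best avoided, even though synthesis optimizes the buffer away.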
Ilya Kalistru wrote:
> They advise it for a reason. In big and complex designs, a big reset
> network with high fanout dramatically decreases the maximum achievable
> frequency.

That's only part of the reason. The other part is that every FF, every BRAM, every component of the FPGA is guaranteed by design to come up as '0' at power-up (after configuration is complete). So their claim is that a reset (at least a global power-up reset) is simply unnecessary, and only maybe needed for things you do not wish to start up at '0' (like, maybe an FSM state variable that dictates the initial state of an FSM). And even in these cases it's not really needed, since the Xilinx tools honor signal initialization values (in VHDL), and BRAMs can be pre-loaded also. So you can be absolutely sure how every component in the FPGA comes up after power-up, without having to use a reset signal.

You can forget about the resources the global reset signal needs, pipelining, or how to code it properly, because it plain and simple is useless and unnecessary in most cases.*

If you need to set FFs or so to specific values after power-up, then that's a set, not a reset. Different port on the FF, different scenario, and certainly needed on far fewer occasions/signals, hence a signal with much smaller fanout.

* = That's their claim, not necessarily my personal view...

Article: 158982
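The signal initialization values mentioned above look like this in VHDL; a small sketch (whether such initializers are honored is tool- and vendor-dependent, as the rest of the thread debates):

```vhdl
-- Power-up values expressed as initializers rather than a reset branch.
-- Xilinx synthesis maps these onto the configuration-time register state.
signal state : std_logic_vector(2 downto 0) := "001";          -- FSM start state
signal count : std_logic_vector(7 downto 0) := (others => '0'); -- counter at zero
```

No reset logic or routing is consumed; the values are loaded as part of the configuration bitstream.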
On 5/31/2016 8:09 AM, Sean Durkin wrote:
> [previous article quoted in full; trimmed]

I don't believe Xilinx or any other FPGA vendor makes that claim. First, the reset from configuration is done via the global set/reset signal (GSR), which covers the entire chip like the clock signals, but without the drive tree. The problem with this is the relatively weak drive, which results in a slow propagation time. So there is no guarantee that it will meet setup/hold times on any given FF when coming *out* of reset. The result is that you must treat this signal as asynchronous to the clocks in the chip, and each section of clocked logic should be designed accordingly. The good part is that it does not use any of the conventional routing resources and so is otherwise "free".

It doesn't matter if a device is being set or reset by the GSR. The programmable inverter is in the FF logic and so is also "free".

-- 
Rick C

Article: 158983
On Tue, 31 May 2016 08:53:27 -0400, rickman wrote:
> [earlier quoting trimmed]
>
> I don't believe Xilinx or any other FPGA vendor makes that claim.

It seems they do (at least Ken Chapman does) make that claim.

Xilinx WP272:
"applying a global reset to your FPGA designs is not a very good idea and should be avoided"

Allan

Article: 158984
On 5/31/2016 9:31 AM, Allan Herriman wrote:
> [earlier quoting trimmed]
>
> It seems they do (at least Ken Chapman does) make that claim.
>
> Xilinx WP272:
> "applying a global reset to your FPGA designs is not a very good
> idea and should be avoided"

We are miscommunicating. I thought Sean was saying Xilinx was claiming a proper reset was not needed. If so, I'd love to read the details on how they justify that claim. Sean was saying the configuration reset is adequate, which is not correct for most designs (which use the GSR). Yes, every FF is guaranteed to be set to a known state, but since the max delay is typically greater than the clock cycle used, this signal must be considered to be async with the clock, which means you have to code with this in mind.

Since every Xilinx FPGA uses the GSR to put the chip in a defined state, I'm not sure what Ken Chapman is really saying. If you use a global set/reset signal in your design it will be replaced by the GSR signal, so it is used by default whether or not you infer it.

-- 
Rick C

Article: 158985
On Monday, May 30, 2016 at 9:34:18 PM UTC-4, jim.bra...@ieee.org wrote:
> [earlier quoting and Jim's design-flow advice quoted in full; trimmed]

Thank you for your input, Jim. It's appreciated.

It's hard for me to operate in a theater which uses a lot of words. I can do it, but it takes a lot of effort and is very mentally taxing. I also make a lot of mistakes in reading (and subsequent comprehension) like that. It's really quite amazing sometimes what I read compared to what's really there. Sometimes they are completely separate meanings.

On my designs, I often go into a spreadsheet or GIMP and begin the design with separate components (using shapes, color blocks, outlines, etc.), which provide visual cues (rather than words) for that reason. My brain can isolate and identify the separate components much better that way. It winds up being much easier for me to understand things in images and their related diagrams than in those which just have words.

Some diagrams are also confusing, though. Typically it's those that are mostly words, or words in certain fonts (depending on how they were created), but not all of them. And even then they're usually better in some ways at least.

-----

I'll track it down. If it doesn't exist, I may skip the ep8080 and go to another CPU ... possibly the 6502, as it's had a lot of reconstruction to create its entire gate layout:

http://www.visual6502.org/
http://www.visual6502.org/JSSim/index.html

And they have a high-speed C gate simulator which is a soft 6502 that runs at about 1/4 speed on an 8-core machine:

https://github.com/mist64/perfect6502

I may also just start with my Oppie-1 design:

https://github.com/RickCHodgin/libsf/blob/master/li386/oppie/oppie-1.png
https://github.com/RickCHodgin/libsf/blob/master/li386/oppie/oppie1/debo-1-actual.png
https://github.com/RickCHodgin/libsf/blob/master/li386/oppie/oppie1/cpp_simulation/oppie1_lasm/test1.asm
https://github.com/RickCHodgin/libsf/blob/master/li386/oppie/oppie1/oppie1.v

It's a very simple core with discrete stages which all operate in a single clock cycle, so it's very straightforward.

-----

I do like the Lattice Diamond software. It's awesome, actually. Very fast. Nice simulator. I prefer it to Altera's Quartus II so far, but that may only be because I don't yet know how to use Quartus II well enough.

Best regards,
Rick C. Hodgin

Article: 158986
rickman wrote:
> On 5/27/2016 12:57 PM, Rob Gaddi wrote:
>> http://www.xess.com/blog/extinction-level-event/
>>
>> The synopsis of the guy's argument is: Given Intel bought Altera, and
>> rumors that *comm is eyeing Xilinx, that's likely to shift the focus of
>> both tier 1 FPGA companies to datacenters and away from traditional
>> programmable logic.
>>
>> We need a good Friday thread, it's been a while. What do y'all think?
>
> I think the article overstates the dichotomy of supporting the
> traditional FPGA market and the datacenter market. Will FPGAs need to
> vary so much to support one market vs. the other? They are hugely
> programmable. Neither X nor A have pushed much on alternate
> architectures that might be a significant advantage in a particular
> market. I think it will mostly be steady as she goes, with most of the
> changes in marketing rather than engineering.

Not sure if it's the case in functionality, but I'm definitely seeing a shift in market focus. It used to be that I could go to any of my distributors and have FPGAs that were effectively guaranteed to be in stock and ship two days later. Now no one stocks, I have to keep a fairly large buffer of parts on-site locally, and all of the interest seems to be in making bulk sales to large customers rather than fanning out to small ones.

To what degree that's a function of the shifting vision of the FPGA vendors or not, that's opaque to me. But it's definitely real.

-- 
Rob Gaddi, Highland Technology -- www.highlandtechnology.com
Email address domain is currently out of order. See above to fix.

Article: 158987
In article <280c1ce8-b623-4e61-ab4e-969be974d29e@googlegroups.com>, Ilya Kalistru <stebanoid@gmail.com> wrote:
>
>> Can't this be solved by a pipelined reset?
>> With each step you could increase the fan-out significantly.
>
> We could come up with a bunch of ideas how to solve this problem in different
> situations, but it's much wiser just not to create the problem if you can.

Which is *precisely* why I reset almost everything. The argument that "resets" are expensive may be true. Reset trees are expensive, and may be overkill. But first-pass success, and not having latent (and hard-to-find) bugs, trumps this for me. Reset and initialization problems can be the devil to find and debug. I'd rather have a correct design first rather than an optimal one. (Maybe my industry can tolerate this more.)

I just think Xilinx emphasizes the "don't reset everything" way too much, and actually doesn't spend much effort on the other side trying to create better/more efficient reset mechanisms in their technology and software. They think it's a training problem, not a technology one.

Regards,
Mark

Article: 158988
On Tuesday, May 31, 2016 at 15:45:47 UTC+2, rickman wrote:
> [earlier quoting trimmed]
>
> We are miscommunicating. I thought Sean was saying Xilinx was claiming
> a proper reset was not needed. If so, I'd love to read the details on
> how they justify that claim. Sean was saying the configuration reset is
> adequate, which is not correct for most designs (which use the GSR).
> Yes, every FF is guaranteed to be set to a known state, but since the
> max delay is typically greater than the clock cycle used, this signal
> must be considered to be async with the clock, which means you have to
> code with this in mind.

Should be easy to handle by using a BUFGCE and an SRL16 to keep the clock stopped until 16 cycles after reset.

-Lasse

Article: 158989
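Lasse's suggestion might be sketched roughly as follows, using the Xilinx unisim primitives he names; the signal names and the surrounding architecture are assumptions, not code from the thread:

```vhdl
-- Hold the global clock off with BUFGCE until 16 raw-clock cycles after
-- reset deasserts, using an SRL16 as a 16-deep shift register.
library ieee;
use ieee.std_logic_1164.all;
library unisim;
use unisim.vcomponents.all;

-- ... inside an architecture, with these signals declared:
--   signal rst_n, clk_en, clk_gated : std_logic;

rst_n <= not rst;

-- SRL16 shifts in '1' once reset releases; with A3..A0 = "1111" the
-- output is tap 15, so clk_en rises 16 clk_raw cycles after reset ends.
dly : SRL16
  generic map (INIT => X"0000")
  port map (Q => clk_en,
            A0 => '1', A1 => '1', A2 => '1', A3 => '1',
            CLK => clk_raw, D => rst_n);

-- BUFGCE keeps the distributed clock stopped until clk_en is high,
-- so no FF downstream ever sees a clock edge near the reset edge.
gate : BUFGCE
  port map (O => clk_gated, CE => clk_en, I => clk_raw);
```

The SRL16 itself runs on the ungated `clk_raw`; only the logic fed by `clk_gated` is held off.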
In article <nik4i8$fhm$1@dont-email.me>, rickman <gnuarm@gmail.com> wrote: >On 5/31/2016 9:31 AM, Allan Herriman wrote: >> On Tue, 31 May 2016 08:53:27 -0400, rickman wrote: >> >>> On 5/31/2016 8:09 AM, Sean Durkin wrote: >>>> Ilya Kalistru wrote: >>>>> They advise it for a reason. In big and complex designs big reset >>>>> network with high fanout dramatically decrease maximum achievable >>>>> frequency. >>>> >>>> That's only part of the reason. The other part is that every FF, every >>>> BRAM, every component of the FPGA is guaranteed by design to come up as >>>> '0' at power up (after configuration is complete). So their claim is >>>> that a reset (at least a global power-up reset) is simply unneccessary >>>> and only maybe needed for things you do not wish to start up at '0' >>>> (like, maybe a FSM state variable that dictates the initial state of an >>>> FSM). And even in these cases it's not really needed, since the Xilinx >>>> tools honor signal initialization values (in VHDL), and BRAMs can be >>>> pre-loaded also. So you can be absolutely sure how every component in >>>> the FPGA comes up after power-up, without having to use a reset signal. >>>> >>>> You can forget about the resources the global reset signal needs, >>>> pipelining or how to code it properly because it plain and simple is >>>> useless and unnecessary in most cases.* >>>> If you need to set FFs or so to specific values after power-up, then >>>> that's a set, not a reset. Different port on the FF, different >>>> scenario, >>>> and certainly needed in a lot less occasions/signals, hence a signal >>>> with much smaller fanout. >>>> >>>> * = That's their claim, not necessarily my personal view... >>> >>> I don't believe Xilinx or any other FPGA vendor makes that claim. >> >> >> It seems they do (at least Ken Chapman does) make that claim. >> >> Xilinx WP272: >> "applying a global reset to your FPGA designs is not a very good >> idea and should be avoided" > >We are miscommunicating. 
I thought Sean was saying Xilinx was claiming >a proper reset was not needed. If so, I'd love to read the details on >how they justify that claim. Sean was saying the configuration reset is >adequate, which is not correct for most designs (which uses the GSR). >Yes, every FF is guaranteed to be set to a known state, but since the >max delay is typically greater than the clock cycle used, this signal >much be considered to be async with the clock which means you have to >code with this in mind. > >Since every Xilinx FPGA uses the GSR to put the chip in a defined state, >I'm not sure what Ken Chapman is really saying. If you use a global >set/reset signal in your design it will be replaced by the GSR signal, >so it is used by default whether or not you infer it. These conversations pop up about once a year or so both in this newgroup and in some of the vendor forums. And usually, like now, RAM, and Configuration "initialization values", and the GSR is brought up. But these don't help in (many) cases. First the simple case - as Rick points out and Xilinx admits, GSR is useless to depend on for reset. It's inactive edge is slow, and asynchronous. Might as well ignore it's existance right out - it can't work reliably on the inactive edge (the edge you care about). At the inactive edge of that reset, the conservative designer will assume all your FF's will enter an unknown state. With *CAREFUL* inspection, one may be able to make use of the GSR (i.e. making sure D=Q for some cycles around GSR), but this is tricky, and definitely the exception, not the rule in my designs. But this is often trumped more with the often errant assumption that "FPGA Configuration" == Reset. That's usually not true. Again, with *CAREFUL* inspection, perhaps some may make use of this. But again, for me it's the exception no the rule. Again, this also may be just a reflection on my design use; much of the work I do is reused on many FPGAs. 
So it's really hard for me, when coding up a FF deep within some logic, to say "This FF will *always* be ok to not reset", in *ANY* place where the logic may be used, with certainty. It's just safer to reset the thing. I'd really like better tech and tools in the FPGA world to allow me to just reset everything. Logic is becoming cheap with these newest, dense FPGAs. Something like a clock tree, but for resets, with looser skew requirements than actual clock trees. Or better tool support to actually just create pipelined resets as others have suggested. (BRAINSTORMING) Or how about another "RESET_BUFG"-type cell that tags along with every BUFG - but creates a pipelined (on the same clock as the BUFG) version of that reset. Fix the depth of the pipeline at some (reasonable) value. Costs some silicon, but systemically solves the problem for everyone. Or some other (creative) systemic solutions. Fumbling about case-by-case (or customer-by-customer) isn't optimal. Regards, MarkArticle: 158990
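The pipelined-reset idea brainstormed above can be approximated today in plain VHDL. This is a hypothetical sketch (the entity name, depth, and active-high polarity are assumptions, not anything from the thread): the reset asserts asynchronously and then releases one pipeline stage per clock of the destination domain.

```vhdl
-- Hypothetical per-clock-domain pipelined reset, along the lines of the
-- "RESET_BUFG" idea: one async reset in, a fixed-depth register chain
-- on the destination clock, one clean synchronous reset out.
library ieee;
use ieee.std_logic_1164.all;

entity reset_pipe is
  generic (DEPTH : positive := 4);           -- arbitrary pipeline depth
  port (
    clk       : in  std_logic;               -- same clock as the BUFG it serves
    rst_async : in  std_logic;               -- e.g. GSR or board reset, active high
    rst_sync  : out std_logic);              -- pipelined, synchronous to clk
end entity;

architecture rtl of reset_pipe is
  signal pipe : std_logic_vector(DEPTH-1 downto 0) := (others => '1');
begin
  process (clk, rst_async)
  begin
    if rst_async = '1' then
      pipe <= (others => '1');               -- assert asynchronously, everywhere
    elsif rising_edge(clk) then
      pipe <= pipe(DEPTH-2 downto 0) & '0';  -- release one stage per clock
    end if;
  end process;
  rst_sync <= pipe(DEPTH-1);                 -- de-asserts DEPTH clocks after release
end architecture;
```

One instance per clock domain gives every block a reset whose release is synchronous to its own clock, which is the property the GSR by itself does not provide.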
On 5/31/2016 1:58 PM, Mark Curry wrote: > In article <nik4i8$fhm$1@dont-email.me>, rickman <gnuarm@gmail.com> wrote: >> On 5/31/2016 9:31 AM, Allan Herriman wrote: >>> On Tue, 31 May 2016 08:53:27 -0400, rickman wrote: >>> >>>> On 5/31/2016 8:09 AM, Sean Durkin wrote: >>>>> Ilya Kalistru wrote: >>>>>> They advise it for a reason. In big and complex designs big reset >>>>>> network with high fanout dramatically decrease maximum achievable >>>>>> frequency. >>>>> >>>>> That's only part of the reason. The other part is that every FF, every >>>>> BRAM, every component of the FPGA is guaranteed by design to come up as >>>>> '0' at power up (after configuration is complete). So their claim is >>>>> that a reset (at least a global power-up reset) is simply unneccessary >>>>> and only maybe needed for things you do not wish to start up at '0' >>>>> (like, maybe a FSM state variable that dictates the initial state of an >>>>> FSM). And even in these cases it's not really needed, since the Xilinx >>>>> tools honor signal initialization values (in VHDL), and BRAMs can be >>>>> pre-loaded also. So you can be absolutely sure how every component in >>>>> the FPGA comes up after power-up, without having to use a reset signal. >>>>> >>>>> You can forget about the resources the global reset signal needs, >>>>> pipelining or how to code it properly because it plain and simple is >>>>> useless and unnecessary in most cases.* >>>>> If you need to set FFs or so to specific values after power-up, then >>>>> that's a set, not a reset. Different port on the FF, different >>>>> scenario, >>>>> and certainly needed in a lot less occasions/signals, hence a signal >>>>> with much smaller fanout. >>>>> >>>>> * = That's their claim, not necessarily my personal view... >>>> >>>> I don't believe Xilinx or any other FPGA vendor makes that claim. >>> >>> >>> It seems they do (at least Ken Chapman does) make that claim. 
>>> >>> Xilinx WP272: >>> "applying a global reset to your FPGA designs is not a very good >>> idea and should be avoided" >> >> We are miscommunicating. I thought Sean was saying Xilinx was claiming >> a proper reset was not needed. If so, I'd love to read the details on >> how they justify that claim. Sean was saying the configuration reset is >> adequate, which is not correct for most designs (which uses the GSR). >> Yes, every FF is guaranteed to be set to a known state, but since the >> max delay is typically greater than the clock cycle used, this signal >> much be considered to be async with the clock which means you have to >> code with this in mind. >> >> Since every Xilinx FPGA uses the GSR to put the chip in a defined state, >> I'm not sure what Ken Chapman is really saying. If you use a global >> set/reset signal in your design it will be replaced by the GSR signal, >> so it is used by default whether or not you infer it. > > These conversations pop up about once a year or so both in this > newgroup and in some of the vendor forums. And usually, like now, > RAM, and Configuration "initialization values", and the GSR is > brought up. > > But these don't help in (many) cases. > First the simple case - as Rick points out and Xilinx admits, > GSR is useless to depend on for reset. It's inactive edge is slow, > and asynchronous. Might as well ignore it's existance right out > - it can't work reliably on the inactive edge (the edge you care about). > At the inactive edge of that reset, the conservative designer will assume > all your FF's will enter an unknown state. That is the exact opposite of what I am saying. Rather than assuming all FFs go "crazy", I assume the reset is released asynchronously with the clock and so I need to control the release from reset of the few critical FFs that you can expect to change state. > With *CAREFUL* inspection, one may be able to make use of the GSR (i.e. 
making > sure D=Q for some cycles around GSR), but this is tricky, and definitely > the exception, not the rule in my designs. > > But this is often trumped more with the often errant assumption that > "FPGA Configuration" == Reset. That's usually not true. > Again, with *CAREFUL* inspection, perhaps some may make use of this. > But again, for me it's the exception no the rule. > > Again, this also may be just a reflection on my design use; much > of the work I do is reused on many FPGAs. So it's really hard for me, > when coding up a FF deep within some logic to say "This FF will *always* > be ok to not reset", in *ANY* place where the logic may be used, with > certainty. It's just safer to reset the thing. > > I'd really like better tech and tools in the FPGA world to allow me > to just reset everything. Logic is becoming cheap with these newest > dense, FPGAs. Something like a clock tree, but for resets that has looser > skew requirements than actual clock trees. Or better tools support to > actually just create a pipelined resets as others have suggested. > (BRAINSTRORMING) Or how about another "RESET_BUFG" type cells that > tags along with every BUFG - but creates a pipelined (on the same > clock as the BUFG) version of that reset. Fix the depth of the > pipeline at some (reasonable) value. Costs some silcon, but systemically > solves the problem for everyone.. > > Or some other (creative) systemic solutions. > > Fumbling about case-by-case (or customer-by-customer) isn't optimal. > > Regards, > > Mark > -- Rick CArticle: 158991
On Tue, 31 May 2016 09:45:42 -0400, rickman wrote: > On 5/31/2016 9:31 AM, Allan Herriman wrote: >> On Tue, 31 May 2016 08:53:27 -0400, rickman wrote: >> >>> On 5/31/2016 8:09 AM, Sean Durkin wrote: >>>> Ilya Kalistru wrote: >>>>> They advise it for a reason. In big and complex designs big reset >>>>> network with high fanout dramatically decrease maximum achievable >>>>> frequency. >>>> >>>> That's only part of the reason. The other part is that every FF, >>>> every BRAM, every component of the FPGA is guaranteed by design to >>>> come up as '0' at power up (after configuration is complete). So >>>> their claim is that a reset (at least a global power-up reset) is >>>> simply unneccessary and only maybe needed for things you do not wish >>>> to start up at '0' (like, maybe a FSM state variable that dictates >>>> the initial state of an FSM). And even in these cases it's not really >>>> needed, since the Xilinx tools honor signal initialization values (in >>>> VHDL), and BRAMs can be pre-loaded also. So you can be absolutely >>>> sure how every component in the FPGA comes up after power-up, without >>>> having to use a reset signal. >>>> >>>> You can forget about the resources the global reset signal needs, >>>> pipelining or how to code it properly because it plain and simple is >>>> useless and unnecessary in most cases.* >>>> If you need to set FFs or so to specific values after power-up, then >>>> that's a set, not a reset. Different port on the FF, different >>>> scenario, >>>> and certainly needed in a lot less occasions/signals, hence a signal >>>> with much smaller fanout. >>>> >>>> * = That's their claim, not necessarily my personal view... >>> >>> I don't believe Xilinx or any other FPGA vendor makes that claim. >> >> >> It seems they do (at least Ken Chapman does) make that claim. >> >> Xilinx WP272: >> "applying a global reset to your FPGA designs is not a very good idea >> and should be avoided" > > We are miscommunicating. 
I thought Sean was saying Xilinx was claiming > a proper reset was not needed. If so, I'd love to read the details on > how they justify that claim. Sean was saying the configuration reset is > adequate, which is not correct for most designs (which uses the GSR). > Yes, every FF is guaranteed to be set to a known state, but since the > max delay is typically greater than the clock cycle used, this signal > much be considered to be async with the clock which means you have to > code with this in mind. > > Since every Xilinx FPGA uses the GSR to put the chip in a defined state, > I'm not sure what Ken Chapman is really saying. He was pretty clear about what he was saying in that white paper. (Did you read and understand it?) Not saying I agree with it all, particularly the parts that are wrong. I notice that some Ken Chapman designs (e.g. picoblaze) have interesting failure modes associated with reset (e.g. don't include code to work around Xilinx AR# 42571 "block RAM contents can be corrupted even if the write enables are low"), so I wouldn't blindly trust everything he says. > If you use a global > set/reset signal in your design it will be replaced by the GSR signal, > so it is used by default whether or not you infer it. How long has it been since you've used a Xilinx part? I don't believe your statement about async resets being replaced by GSR has been true for the last six generations of device (Virtex 4 onward). If you code an async reset in your HDL, an async reset ends up in the chip, and this may lock out other CLB features (e.g. sync reset) which might otherwise save a LUT. The last time I checked, a connection to the GSR input on the startup block did *not* cause the CLB async reset connection to be removed by the tools. (At various times in the past this has worked on older Xilinx FPGA families though e.g. Virtex 2.) Regards, AllanArticle: 158992
rickman wrote: > Since every Xilinx FPGA uses the GSR to put the chip in a defined state, > I'm not sure what Ken Chapman is really saying. If you use a global > set/reset signal in your design it will be replaced by the GSR signal, > so it is used by default whether or not you infer it. According to Xilinx, the GSR in (at least modern) Xilinx FPGAs is NOT a physical, global signal in hardware that resets everything after configuration to a defined state.* It is just a signal that enables the configuration that was just loaded, i.e. it makes the FPGA start ticking with everything already being set. So, e.g. FFs are set to a specific value BEFORE this "pseudo global reset" even occurs, the logic is just not active yet; so when the reset is released and the logic starts running, everything is already in a known state. The case that e.g. an FSM gets confused because one of its inputs isn't initialized yet when it starts running cannot occur. Still, as you mentioned, you can of course run into problems if you need to make sure stuff starts running at the exact same clock cycle (which for me personally, having to deal with a multitude of clock domains most of the time, is something my designs can never rely on anyway). What the Xilinx FAEs say (and again, this does not necessarily reflect my personal opinion, I'm just quoting here): - no need for a global reset that resets EVERYTHING to a known state, configuration will take care of that - for your design, regional, synchronous resets (or at least de-asserted synchronously) might be useful that only reset small areas of the design; so if you need to make sure that in a specific portion of the design everything starts up at the exact same clock cycle, you should generate a local synchronous reset and selectively connect that to where it's needed, but not more.
The example they always bring up is algorithmic pipelines, which make up a good portion of a lot of designs; no need to reset those, there'll be garbage coming out at the end anyway until the pipeline is full. - One of them told me (and I'm not kidding, just quoting here): Relying on a global reset signal to get your design into a known state sometime after power-up is a sign of bad design. You should make sure your designs can recover. So, to sum up: they say you should use local, synchronous resets, not a global one, and you should consider not using a reset signal at all for most parts of the design, since in their opinion it's a completely unnecessary waste of resources in many cases. Be that as it may, I find I often resort to using resets to make sure the tools don't optimize e.g. pipeline stages or synchronizer flip-flops into SRL components (which are useless for timing). That's easier than figuring out which attribute/constraint you have to apply to which signal/component at what design stage to disable SRL optimization for this specific set of registers... * Just googled and found this: https://forums.xilinx.com/t5/Archived-ISE-issues-Archived/FPGA-Power-On-Reset/m-p/7027?query.id=134602#M2035 (see Gabor's answer)Article: 158993
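The two mechanisms Sean mentions — FF initial values honored by the Xilinx tools, and an attribute to keep synchronizer flip-flops out of SRLs — might look like this in VHDL. Note the attribute name is an assumption on my part: `shreg_extract` is the XST-era spelling; Vivado uses `ASYNC_REG`/`SRL_STYLE` instead, so check your toolchain's documentation.

```vhdl
-- Sketch of a two-FF synchronizer with no reset at all: the power-up
-- values come from the bitstream via the := initializers, and a
-- tool-specific attribute keeps the FF pair from becoming an SRL16.
library ieee;
use ieee.std_logic_1164.all;

entity sync2 is
  port (clk, d : in std_logic; q : out std_logic);
end entity;

architecture rtl of sync2 is
  -- Initial values instead of a reset term (honored by Xilinx synthesis).
  signal meta, stable : std_logic := '0';
  -- XST-style hint (assumption; Vivado: ASYNC_REG / SRL_STYLE) to
  -- prevent these two FFs from being folded into a shift register.
  attribute shreg_extract : string;
  attribute shreg_extract of meta, stable : signal is "no";
begin
  process (clk)
  begin
    if rising_edge(clk) then
      meta   <= d;     -- first FF may go metastable
      stable <= meta;  -- second FF resolves it
    end if;
  end process;
  q <= stable;
end architecture;
```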
On 6/1/2016 2:08 AM, Sean Durkin wrote: > rickman wrote: >> Since every Xilinx FPGA uses the GSR to put the chip in a defined state, >> I'm not sure what Ken Chapman is really saying. If you use a global >> set/reset signal in your design it will be replaced by the GSR signal, >> so it is used by default whether or not you infer it. > > According to Xilinx, the GSR in (at least modern) Xilinx FPGAs is NOT a > physical, global signal in hardware that resets everything after > configuration to a defined state.* > It is just a signal that enables the configuration that was just loaded, > i.e. it makes the FPGA start ticking with everything already being set. > So, e.g. FFs are set to a specific value BEFORE this "pseudo global > reset" even occurs, the logic is just not active yet; so when the reset > is released and the logic starts running, everything is already in a > known state. The case that e.g. an FSM gets confused because one of its > inputs isn't initialized yet when it starts running cannot occur. Think about what you wrote. If there is no physical signal, how does the release of reset get communicated across the entire chip? It is a signal, it just isn't a *fast* signal with global buffers, etc., like a clock line. That's why they include a GSR component in the design tools. > Still, as you mentioned, you can of course run into problems if you need > to make sure stuff starts running at the exact same clock cycle (which > for me personally, having to deal with a multitude of clock domains most > of the time, is something my designs can never rely on anyway). > > What the Xilinx FAEs say (and again, this does not necessarily reflect > my personal opinion, I'm just quoting here): > - no need for a global reset that resets EVERYTHING to a known state, > configuration will take care of that In Lattice parts it is a lot easier to use configuration as a reset because they configure so much more quickly from Flash.
In a Xilinx part re-configuration can be too slow to be practical when resetting the design. The GSR can be used as an external reset. > - for your design, regional, synchronous resets (or at least de-asserted > synchronously) might be useful that only reset small areas of the > design; so if you need to make sure that in a specific portion of the > design everything starts up at the exact same clock cycle, you should > generate a local synchronous reset and selectively connect that to where > it's needed, but not more. The example they always bring up is > algorithmic pipelines, which make up a good portion of a lot of designs; > no need to reset those, there'll be garbage coming out at the end anyway > until the pipeline is full. > - One of them told me (and I'm not kidding, just quoting here): Relying > on a global reset signal to get your design into a known state sometime > after power-up is a sign of bad design. You should make sure your > designs can recover. I wouldn't argue about that. It greatly depends on the design. > So, to sum up: they say you should use local, synchronous resets, not a > global one, and you should consider not using a reset signal at all for > most parts of the design, since in their opinion it's a completely > unnecessary waste of resources in many cases. That is essentially what I am saying, but how do you kick off the local synchronous resets? That requires the asynchronous GSR. I don't look at the local reset as a local reset. I just design the circuit to work with the async GSR. It is not uncommon for a circuit to work properly without any local reset (for simple ones mostly) or just a single FF to allow the circuit to remain in reset for a clock after GSR is released. > Be that as it may, I find I often resort to using resets to make sure > the tools don't optimize e.g. pipeline stages or synchronizer flipflops > into SRL components (which are useless for timing). 
That's easier than > figuring out which attribute/constraint you have to apply to which > signal/component at what design stage to disable SRL optimization for > this specific set of registers... > > * Just googled and found this: > https://forums.xilinx.com/t5/Archived-ISE-issues-Archived/FPGA-Power-On-Reset/m-p/7027?query.id=134602#M2035 > (see Gabors answer) > -- Rick CArticle: 158994
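The "single FF" idiom rickman mentions — a circuit that stays in a local reset for one clock after the asynchronous GSR release — might be sketched like this in VHDL, relying on bitstream initial values to define both FFs at startup (the entity name and coding style are illustrative, not from the thread):

```vhdl
-- Sketch: generate a local reset that drops one clock after the
-- (asynchronous) GSR release, so downstream critical FFs all leave
-- reset on the same, known clock edge.
library ieee;
use ieee.std_logic_1164.all;

entity gsr_release is
  port (clk : in std_logic; local_rst : out std_logic);
end entity;

architecture rtl of gsr_release is
  -- Both FFs come out of configuration/GSR in these initial states.
  signal armed : std_logic := '0';
  signal rst_i : std_logic := '1';
begin
  process (clk)
  begin
    if rising_edge(clk) then
      armed <= '1';        -- goes high on the first clock after GSR release
      rst_i <= not armed;  -- so the local reset drops one clock later
    end if;
  end process;
  local_rst <= rst_i;
end architecture;
```

Because `rst_i` is itself a registered signal on `clk`, its de-assertion is synchronous even though the GSR release was not.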
In article <dr7cdkFhkh0U1@mid.individual.net>, Sean Durkin <news_MONTH@tuxroot.de> wrote: >rickman wrote: >> Since every Xilinx FPGA uses the GSR to put the chip in a defined state, >> I'm not sure what Ken Chapman is really saying. If you use a global >> set/reset signal in your design it will be replaced by the GSR signal, >> so it is used by default whether or not you infer it. > >So, to sum up: they say you should use local, synchronous resets, not a >global one, and you should consider not using a reset signal at all for >most parts of the design, since in their opinion it's a completely >unnecessary waste of resources in many cases. The "global" vs. "local" reset thing is also a silly guideline from Xilinx. The signals have the same definition to the synthesizer => the synthesizer will merge all the "local" resets back into a "global" reset. We code all of our resets globally in a "clocks and resets block". In that block we synchronize the inactive edge of reset, and send it out globally. There's every reason to do this once, globally, instead of forcing each designer to repeat the same thing locally on every block. Exactly contrary to the guideline. The original idea had some sense, pre-synthesizers. That's long past. Regards, MarkArticle: 158995
On 1.6.2016 16:04, rickman wrote: > On 6/1/2016 2:08 AM, Sean Durkin wrote: >> According to Xilinx, the GSR in (at least modern) Xilinx FPGAs is NOT a >> physical, global signal in hardware that resets everything after >> configuration to a defined state.* >> It is just a signal that enables the configuration that was just loaded, >> i.e. it makes the FPGA start ticking with everything already being set. >> So, e.g. FFs are set to a specific value BEFORE this "pseudo global >> reset" even occurs, the logic is just not active yet; so when the reset >> is released and the logic starts running, everything is already in a >> known state. The case that e.g. an FSM gets confused because one of its >> inputs isn't initialized yet when it starts running cannot occur. > > Think about what you wrote. If there is no physical signal, how does > the release of reset get communicated across the entire chip? It is a > signal, it just isn't a *fast* signal with global buffers, etc like a > clock line. That's why they include a GSR component in the design tools. How the reset is implemented in fabric is not a very straightforward thing. There is the "after configuration state", then reset lines can be lifted into clock networks, which in Ultrascale for example look like the good old ASIC clock trees, etc. You can get the reset to work with pretty high frequencies, but that can lead to routing congestion. Also some of the FPGA features cannot be used if FFs have resets (internal DSP, BRAM pipeline registers etc.). Also, having traditional ASIC-style resets sometimes creates interesting logic to emulate them. And not all structures can directly connect into the clock (read: reset) trees, and there are limitations on how many trees can connect to where. Removing reset from the data pipeline and leaving it only in the control part improves timing and congestion dramatically in some designs.
I have seen many nanoseconds come off just by removing unnecessary resets, which enabled much better packing as control sets got smaller. There is one good chapter on this in the UltraScale training materials; it is good reading. They also suggest interesting ways to use clock-gating features to change the reset into a multicycle path, etc. --KimArticle: 158996
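Kim's point about smaller control sets can be illustrated with a sketch that resets only the control flags and lets the data pipeline free-run (names, widths, and the +1 operation are invented purely for illustration):

```vhdl
-- Sketch: reset only the valid/control bits; leave the datapath FFs
-- without a reset term so they form smaller control sets and can be
-- packed into DSP/BRAM pipeline registers.
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity pipe2 is
  port (
    clk, rst : in  std_logic;
    din      : in  unsigned(7 downto 0);
    din_v    : in  std_logic;
    dout     : out unsigned(7 downto 0);
    dout_v   : out std_logic);
end entity;

architecture rtl of pipe2 is
  signal s1, s2 : unsigned(7 downto 0);   -- datapath: deliberately no reset
  signal v1, v2 : std_logic;              -- control: reset these only
begin
  process (clk)
  begin
    if rising_edge(clk) then
      -- Datapath: garbage flows through until the pipeline fills,
      -- but nothing downstream looks at it until the valid arrives.
      s1 <= din + 1;
      s2 <= s1;
      -- Control path: only the valid flags see the reset.
      if rst = '1' then
        v1 <= '0'; v2 <= '0';
      else
        v1 <= din_v; v2 <= v1;
      end if;
    end if;
  end process;
  dout   <= s2;
  dout_v <= v2;
end architecture;
```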
On Tuesday, May 31, 2016 at 12:51:13 PM UTC-4, Rick C. Hodgin wrote: > I do like the Lattice Diamond software. It's awesome actually. Very fast. I received the Lattice board today. I'll begin working on something this weekend. Looking forward to getting a basic circuit to cycle through the on-board LEDs when on-board buttons are clicked. Best regards, Rick C. HodginArticle: 158997
On Friday, June 3, 2016 at 12:00:52 PM UTC-4, Rick C. Hodgin wrote: > On Tuesday, May 31, 2016 at 12:51:13 PM UTC-4, Rick C. Hodgin wrote: > > I do like the Lattice Diamond software. It's awesome actually. Very fast. > I received the Lattice board today. I'll begin working on something this > weekend. Looking forward to getting a basic circuit to cycle through the > on-board LEDs when on-board buttons are clicked. Received: http://www.latticesemi.com/en/Products/DevelopmentBoardsAndKits/LatticeXP2Brevia2DevelopmentKit.aspx I am wanting to apply logic to this LED process, but I'm thinking there may be some analog issues that I need to consider. For example, when a button on the board is clicked, I assume there is some jitter time, such that if it were sampled at a MHz frequency it would record jittery on/off signals for a ms or two until the contact was made solid, and the same for releasing. As such, any logic which samples the buttons, for example, must include things like identifying the first high signal, and then either sampling the high/low ratio over periods of time to determine if it's still high or low, and then using that value after the sampling period has expired, or waiting until the high signal persists solidly for something like 10ms, then considering that to be a single press event, and then waiting for it to go low again for something like 10ms before concluding it is actually a release event. Sound about right? Or, do boards like this automatically handle that for you so you have a direct digital input that has already sampled out those peculiarities in some way, so you have solid press and release events when they switch from high to low, and low to high? ----- Also, I am thinking about hooking up a speaker to make sound.
I am thinking there is no way to control volume using this method, but only the frequency, short of signaling out through multiple sets of pins which are wired to a single speaker, such that pins 0 are full voltage, pins 1 are slightly resisted, pins 2 are slightly more resisted, pins 3 are heavily resisted, and pins 4 are almost completely resisted, such that whenever a volume setting is made, it drives pins 0,1,2,3,4 or none, as needed. Sound about right? Best regards, Rick C. HodginArticle: 158998
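The scheme described above — several output pins through different series resistors into one speaker, driving more pins for more volume — could be sketched like this. The clock frequency, pin count, and weighting are all assumptions here, not board specifics:

```vhdl
-- Sketch: square-wave tone generator feeding 5 resistor-weighted pins.
-- Driving more pins in parallel lowers the effective series resistance,
-- so the speaker gets louder, per the multi-pin idea in the post.
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity tone is
  generic (
    CLK_HZ  : positive := 50_000_000;  -- assumed board clock
    TONE_HZ : positive := 440);        -- output pitch
  port (
    clk    : in  std_logic;
    volume : in  unsigned(2 downto 0);           -- 0 = mute .. 5 = loudest
    spk    : out std_logic_vector(4 downto 0));  -- pins 0..4, resistor-weighted
end entity;

architecture rtl of tone is
  constant HALF : positive := CLK_HZ / (2 * TONE_HZ);  -- half-period in clocks
  signal div : natural range 0 to HALF-1 := 0;
  signal sq  : std_logic := '0';
begin
  process (clk)
  begin
    if rising_edge(clk) then
      if div = HALF-1 then
        div <= 0;
        sq  <= not sq;     -- toggle every half period -> TONE_HZ square wave
      else
        div <= div + 1;
      end if;
    end if;
  end process;
  -- Drive pin i only when volume > i: more active pins, louder output.
  gen : for i in 0 to 4 generate
    spk(i) <= sq when to_integer(volume) > i else '0';
  end generate;
end architecture;
```

This only steps volume coarsely; PWM on a single pin through an RC filter is the other common approach, but the sketch stays with the resistor-ladder idea from the post.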
On 6/3/2016 1:09 PM, Rick C. Hodgin wrote: > On Friday, June 3, 2016 at 12:00:52 PM UTC-4, Rick C. Hodgin wrote: > > Received: > > http://www.latticesemi.com/en/Products/DevelopmentBoardsAndKits/LatticeXP2Brevia2DevelopmentKit.aspx > > I am wanting to apply logic to this LED process, but I'm thinking there may > be some analog issues that I need to consider. For example, when a button > on the board is clicked, I assume there is some jitter time, such that if it > were sampled at a MHz frequency it would record jittery on/off signals for > a ms or two until the contact was made solid, and the same for releasing. > > As such, any logic which samples the buttons, for example, must include > things like identifying the first high signal, and then either sampling > the high/low ratio over periods of time to determine if it's still high > or low, and then using that value after the sampling period has expired, > or wait until the high signal persists solidly for something like 10ms, > and then consider that to be a single press event, and then wait for it > to go low again for something like 10ms before concluding it is actually > a release event. > > > Best regards, > Rick C. Hodgin > I like that board; you can implement simple CPUs in it for a low cost. This is a link to a video series on FPGAs; this one is lesson 1. In lesson 2 they take a button and debounce it; the purpose of the project is to count a series of pulses on the LED based on pushing a button. -- Cecil - k5nwaArticle: 158999
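The debounce approach discussed in this thread — accept a new button level only after it has been stable for roughly 10 ms in either direction — is the classic counter debouncer. A sketch, assuming a 50 MHz clock and an active-high button (the Brevia2's actual clock rate and polarity should be checked against the user's guide):

```vhdl
-- Sketch of a counter debouncer: the output only changes after the
-- (synchronized) input has disagreed with it continuously for
-- STABLE_CYCLES clocks, covering both press and release bounce.
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity debounce is
  generic (STABLE_CYCLES : positive := 500_000);  -- ~10 ms at 50 MHz (assumed)
  port (
    clk     : in  std_logic;
    raw     : in  std_logic;    -- noisy button input
    pressed : out std_logic);   -- clean, debounced level
end entity;

architecture rtl of debounce is
  signal sync0, sync1 : std_logic := '0';   -- metastability guard
  signal state        : std_logic := '0';   -- current debounced level
  signal count        : natural range 0 to STABLE_CYCLES := 0;
begin
  process (clk)
  begin
    if rising_edge(clk) then
      sync0 <= raw;
      sync1 <= sync0;
      if sync1 = state then
        count <= 0;                 -- input agrees with output: nothing to do
      elsif count = STABLE_CYCLES then
        state <= sync1;             -- stable long enough: accept new level
        count <= 0;
      else
        count <= count + 1;         -- still timing the stable period
      end if;
    end if;
  end process;
  pressed <= state;
end architecture;
```

Since the Brevia2 buttons are already debounced in hardware per the user's guide quoted below, this would only be needed for raw switches brought in on the expansion headers.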
On Friday, June 3, 2016 at 3:02:24 PM UTC-4, Cecil Bayona wrote: > On 6/3/2016 1:09 PM, Rick C. Hodgin wrote: > > On Friday, June 3, 2016 at 12:00:52 PM UTC-4, Rick C. Hodgin wrote: > > Received: > > http://www.latticesemi.com/en/Products/DevelopmentBoardsAndKits/LatticeXP2Brevia2DevelopmentKit.aspx > > > > I am wanting to apply logic to this LED process, but I'm thinking there may > > be some analog issues that I need to consider. For example, when a button > > on the board is clicked, I assume there is some jitter time, such that if it > > were sampled at a MHz frequency it would record jittery on/off signals for > > a ms or two until the contact was made solid, and the same for releasing. > > > > As such, any logic which samples the buttons, for example, must include > > things like identifying the first high signal, and then either sampling > > the high/low ratio over periods of time to determine if it's still high > > or low, and then using that value after the sampling period has expired, > > or wait until the high signal persists solidly for something like 10ms, > > and then consider that to be a single press event, and then wait for it > > to go low again for something like 10ms before concluding it is actually > > a release event. > > I like that board, you can implement simple CPUs in it for a low cost. > > This is a link to a video of FPGAs, this one is lesson 1, on lesson 2 > they take a button and debounce it, the purpose of the project is to > count a series of pulses on the LED based on pushing a button. > > -- > Cecil - k5nwa Specs: http://www.latticesemi.com/~/media/LatticeSemi/Documents/UserManuals/JL/LatticeXP2Brevia2DevelopmentKitUsersGuide.PDF?document_id=43735 In looking at the specs, it shows that all five momentary push buttons are already debounced, and 3 of the 4 DIP switches are (4 to 5 is not, pin 55). Awesome. Should make it easier. What are pins 142, 143, 144 (SRAM_CSb, SRAM_OEb, SRAM_WEb)? I assume output enable and write enable?
But what is CS? Some kind of strobe? Do the address and data pins go high, and write goes high, and then it's strobed before it actually writes? Or address pins go high, and then output goes high, and then it's strobed before data pins are ready? Or does CS signal when the operation is complete after OE or WE go high? Or are they something else entirely? :-) Best regards, Rick C. Hodgin