On Thursday, December 11, 2014 5:11:44 PM UTC-5, Tim Wescott wrote:
> On Thu, 11 Dec 2014 11:55:14 -0800, Rick C. Hodgin wrote:
> > Would it be possible to connect an FPGA up to an 80386 (or other) CPU,
> > to respond to memory and port requests, and leverage it as a resource?
> >
> > I'm thinking software runs on the 80386, given it by the FPGA,
> > instructing it as a type of co-processor, which does things on command.
> >
> > I see voltage differences as an issue.
> >
> > Best regards,
> > Rick C. Hodgin
>
> Are you doing this to have fun playing with obsolete processors, or do you
> have a job to do?
>
> If it's the former -- have fun.
>
> If it's the latter -- you are aware that there are all sorts of far more
> modern solutions to this general problem than the one you're proposing,
> yes?

It's a mental exercise right now. I have never considered anything about
hardware in my past. I'm a software guy, and always thought of hardware as
being outside of my reach. However, a short time ago I was introduced to
Verilog and the FPGA. Since then I've had this flood of ideas on how things
might work ... so, I ask questions. :-)

I do have an old 80386-16 MHz ceramic CPU (several actually), along with
80486 through Pentium various models (some of which are 3.3V), along with
AMD K5, etc. I was basically just thinking through the process.

Best regards,
Rick C. Hodgin
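[A purely illustrative sketch, not from the thread, of the kind of logic the FPGA would need in order to answer a simple CPU bus cycle: a chip-select decoded register block. The signal names and the bus protocol are assumptions; a real 80386 interface would also need ADS#/READY# handshaking, bus sizing and level translation, none of which is shown here.]

// Hypothetical memory/port-mapped register block the CPU could command.
module cpu_mapped_regs (
    input  wire        clk,        // FPGA-side clock
    input  wire        cs_n,       // decoded chip select from the CPU address
    input  wire        rd_n,       // read strobe
    input  wire        wr_n,       // write strobe
    input  wire [1:0]  addr,       // low address bits select a register
    inout  wire [7:0]  data,       // shared data bus
    output reg  [7:0]  command     // value the "co-processor" acts on
);
    reg [7:0] status = 8'h00;
    reg [7:0] result = 8'h00;

    // Drive the data bus only during a read cycle addressed to us.
    wire       reading  = ~cs_n & ~rd_n;
    wire [7:0] read_mux = (addr == 2'd0) ? status : result;
    assign data = reading ? read_mux : 8'bz;

    // Capture writes on the FPGA clock (the strobes should be synchronized
    // in a real design -- see the metastability discussion in this thread).
    always @(posedge clk)
        if (~cs_n & ~wr_n && addr == 2'd1)
            command <= data;
endmodule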
On 2014-12-10 hvo wrote in comp.arch.fpga:
> Hello,
>
> I know this topic is beaten to death but I am a bit unclear on some things.
>
> I've recently encountered metastability issues that caused my FPGA to do
> unpredictable things. Someone suggested that I synchronize my inputs to
> the clock domain and that seemed to solve the issue. Googling this topic
> showed that a two stage Flip Flop is sufficient to increase MTBF for
> metastability. My question is do I need to do this for all input signals?
> How would one do this with a design containing 30 to 40 input signals?
> Which types of inputs can I get away with not using two stage FF?

But why would you want to save a few FFs? Are you running short? Or is
the added delay too much for you? Adding the FFs can also help the tools
place your design more easily.

Okay, my current design is in a luxury position. I needed the Xilinx
Zynq for its dual-core processor and a little bit of FPGA. I'm left
with a huge amount of unused FFs even after using triple synchronizers
on all inputs.

--
Stef    (remove caps, dashes and .invalid from e-mail address to reply by mail)

Children are like cats, they can tell when you don't like them. That's
when they come over and violate your body space.
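[A minimal sketch of the two-stage synchronizer being discussed, parameterized so one module covers the 30 to 40 inputs hvo mentions. The module and signal names are invented for this example. Note it is only valid for independent single-bit signals; a multi-bit value that must change coherently needs a different scheme (gray code, handshake or FIFO).]

module sync2 #(
    parameter WIDTH = 40
) (
    input  wire             clk,
    input  wire [WIDTH-1:0] async_in,   // asynchronous inputs
    output reg  [WIDTH-1:0] sync_out    // safe to use in the clk domain
);
    reg [WIDTH-1:0] meta;               // first stage: may go metastable

    always @(posedge clk) begin
        meta     <= async_in;           // metastability can occur here
        sync_out <= meta;               // a full clock period to resolve
    end
endmodule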
On Thursday, December 4, 2014 6:36:37 PM UTC-5, Theo Markettos wrote:
> Rick C. Hodgin <rick.c.hodgin@gmail.com> wrote:
> > If anyone has advice on how to get communication running most easily
> > on this board, I would appreciate it. Thank you in advance.
>
> There are two Altera components that might be useful:
>
> The JTAG UART is something that looks like a UART device, but runs via JTAG,
> which is connected to your PC using USB. That means it's very simple to get
> a text terminal up from your FPGA. It isn't a 16550-style UART, it has a
> somewhat simpler interface (and if you're using a NIOS-II processor Altera's
> tools generate libraries so that printf() etc works). That means you don't
> need any extra hardware to get a serial port - you just run 'nios2-terminal'
> on your PC and you get a console of whatever comes out of the JTAG UART.
>
> System Console is an Altera (Java) app that allows you to get debug access
> to your FPGA, assuming your FPGA uses AXI or Altera's Avalon interconnect,
> which can be built with Altera's Qsys GUI tool for building systems-on-chip.
> Your components have AXI or Avalon interfaces, and you join them together in
> the Qsys GUI (wiring up buses, interrupts, setting addresses, etc). Qsys
> synthesises a network on chip for you that implements the interconnect you
> wanted (so the 'buses' are actually packet-switched networks). Once you've
> done that you can just drop in a debug module that allows access to those
> buses from System Console via JTAG via USB. In System Console on your PC
> you can write TCL scripts to access memory, change peripheral registers,
> etc. Since it's plugged into your existing interconnect it can access
> whatever is connected to it.
>
> In our case we have both System Console and a CPU debug unit. The debug
> unit is inside the CPU and allows insertion of instructions into the
> pipeline, which means we can force it to execute code to set registers etc.
> We use both that mechanism and System Console to access memory. The debug
> unit is implemented using a JTAG UART, but using it as a pipe to convey
> debug instructions rather than as a text terminal (we have another JTAG UART
> as the console).
>
> I'd suggest first taking the example projects supplied with the board, which
> use the NIOS-II CPU, and making yourself familiar with the toolchain:
> Quartus compilation
> Megawizard [1]
> Qsys (system-on-chip generator)
> NIOS-II bare-metal software world (Eclipse editor, C compiler, Board Support
> Package (BSP) of drivers for the FPGA hardware)
> Programming
>
> and only then go 'off piste'. For an example, this is the FPGA practical
> course that I teach that goes through the basics (from no HDL experience to
> building a heterogeneous multicore system-on-chip in 24 hours of lab time):
> http://www.cl.cam.ac.uk/teaching/1415/ECAD+Arch/labs/
> You don't have the same board but many of the concepts should still apply.
>
> Theo
>
> [1] Megawizard (a tool for configuring standalone IP blocks such as PLLs) is
> being phased out and rolled into Qsys, but it's still relevant at the moment.

This is where I'm at right now.

Am I able to do everything I need from Linux? Or do I need to set up a
Windows partition?

I tried to run the Control Panel software which came with the board in
WINE. It launched, but said I needed to have Quartus installed for it
to work properly. I don't particularly want to install Quartus for
Windows in WINE as it's pretty hefty. :-)

Quartus for Linux is working just fine, as is Qsys in Linux.

The "NIOS-II bare-metal software world" you mention ... what is it?
And why Eclipse? Can I use Netbeans? :-) And how does the C compiler
enter into it? And what is the BSP for the FPGA hardware? I was able
to install my board in Quartus in Linux, and Qsys was able to prepare
for it.

I've found a few Altera videos on getting started, but the host goes
through several settings too quickly and does not go into detail as to
why he's chosen what he's chosen.

Would anyone be available to get on chat with me some evening or weekend
and help get me kickstarted? Thank you in advance.

Best regards,
Rick C. Hodgin
Basically here are my goals:

(1) Altera examples (get them working correctly, learn the toolset).
(2) Create or find a simple CPU and get it working correctly.
(3) Using that simple CPU, add hardware support for my Ethernet board, and
(4) (on the host) write software to read data from the Ethernet packets.

And finally:

(5) Begin working on my LibSF 386-x40 CPU in Oppie-1 through Oppie-6 stages.

Best regards,
Rick C. Hodgin
Tim Wescott <seemywebsite@myfooter.really> wrote:
> On Wed, 10 Dec 2014 14:21:01 -0500, rickman wrote:
>> On 12/10/2014 1:05 PM, Tim Wescott wrote:
>>> On Wed, 10 Dec 2014 02:17:59 -0500, rickman wrote:

(snip on metastability and slack time)

>>> If you clock a FF at time t, and you don't use the result until time t
>>> + Ts (Ts being your clock period), then the FF has that whole Ts-long
>>> period to resolve the metastability one way or another. That part will
>>> be exponential with Ts.
>
>> You are forgetting about the time required for the signal to propagate
>> to the output, through the logic, and the setup time for the next FF.
>> These times all need to be subtracted from the clock cycle time, yielding
>> the slack time. This is the only number important to resolving
>> metastability.
>
> I suspect that the paper (which doesn't sound very thorough) is
> presupposing that you take a design with a given propagation delay, and
> just start turning the frequency down on the clock.

Much easier than increasing the propagation delay. If your clock is
1 MHz, it takes a lot of logic and routing to make a significant
fraction of that in propagation delay.

-- glen
Rick C. Hodgin <rick.c.hodgin@gmail.com> wrote:
> Would it be possible to connect an FPGA up to
> an 80386 (or other) CPU, to respond to memory
> and port requests, and leverage it as a resource?

The 80386 was designed to work with the 80287 or 80387 math coprocessor.
That interface is a little less general than the 8086's, but it could be
possible that way. Otherwise, yes, you could do it as an ordinary I/O
device in memory or I/O space.

> I'm thinking software runs on the 80386, given
> it by the FPGA, instructing it as a type of co-
> processor, which does things on command.
>
> I see voltage differences as an issue.

I think the 386 has TTL-level I/O pins. There are a few different ways
to deal with that. One is a resistor big enough not to hurt the input
when the protection diode conducts.

-- glen
On 12/11/2014 9:08 AM, GaborSzakacs wrote:
> hvo wrote:
>> I think by popular opinion, my issue is not metastability but rather
>> clock domain crossing, as many have pointed out. This explains why adding
>> a single synchronizing FF fixed my issue, as Kevin pointed out. Also, an
>> interesting point on the conclusion of Xilinx's XAPP094 stating that
>> "Modern CMOS circuits are so fast that this metastable delay can safely
>> be ignored for clock rates below 200 MHz." This also supports why I also
>> don't think it's a metastable problem, since my CLK rate is 20 MHz.
>>
>> I guess what I am taking out of all this is that not all signals need to
>> be synchronized with a FF. Only those which fan out to multiple processes
>> that are time aligned. This ensures two identical processes have the same
>> output given the same input.
>> Now the synchronized FF output can be metastable, in which case a second
>> FF will reduce its probability significantly.
>>
>> Am I on the right path? Or completely out in left field?
>>
>> PS: is there a way to attach a picture in this forum?
>> Thanks
>> HV.
>> ---------------------------------------
>> Posted through http://www.FPGARelated.com
>
> Your original problem was most definitely *not* metastability.
> However mitigating the probability of metastability is still
> worth while. It's important to understand the mechanisms involved.
> From a simple perspective, you can consider that any flip-flop
> has a "window" near the sampling clock edge where metastability
> can happen. For modern CMOS, that window is very small, probably
> less than 1 ps. In any case it's *much* smaller than the window
> you normally try to stay out of between setup and hold when using
> synchronous logic.
>
> The chances of getting a metastable event at the first flip-flop
> when introducing an asynchronous signal is simply the probability
> that an edge of the incoming signal falls within this metastability
> window. Note that the expected failure rate is related to both
> the clock rate, which determines how often in time a window is
> "open", and the edge rate of the incoming signal.

I agree with everything you've said up to this point.

> Now we come to why you want a second flip-flop. A metastable
> event has the effect of increasing the clock-to-output timing
> of the first flip-flop. There is theoretically no upper bound
> on the amount of time that the event can last, but the chances
> of the event lasting any particular length of time go down
> *very* quickly as the length of time goes up. In real world
> applications, there are secondary processes (mostly system
> noise) that "help" an event to end, in a way similar to a
> coin standing on edge on a bar where there are a lot of
> patrons picking up and setting down mugs. In any case you
> can see that you want "slack" time in the path from this
> first flip-flop to all other synchronous elements.

This all sounds right except the part about the noise. Is there a way
to prove that? I have never heard an analysis that proves that noise
has any real impact on the rate of resolution of metastability. I'm not
saying this is wrong, I just haven't seen anything like a proof.

> The second flop is an easy way to ensure the ease of adding a
> lot of slack in the path. However it has a secondary impact
> on the chance of failure. When the first flop has an event
> that increases its time such that all subsequent flops no
> longer meet setup requirements, your circuit will fail. With
> the second flip-flop in place, instead of having an upper
> bound after which the circuit will fail, what you need for
> failure is an event that causes the second flip-flop to go
> metastable. This means that instead of the probability of
> an event being greater than "x" you now are looking at the
> probability of an event being exactly "x" +/- something
> very small. So even if the first path doesn't have the
> slack to prevent a metastable event from violating the
> setup/hold of the second flop, the system won't actually
> fail unless the event is within a very small range. This
> dramatically improves the MTBF.

I don't follow this at all. The same reasoning applies to one case as
the other, and any metastability resolution time of the first FF greater
than "x" will make the second FF go metastable. However, note the second
FF also has a chance of resolving before time "y", before impacting
functional circuitry. The problem is mitigated both by the fact that
both FFs have to persist in the metastable state and by the fact that
the slack time for the "empty" path is typically *much* longer than
needed to resolve the metastable event and *not* propagate it to the
second FF, other than as an exceedingly rare event (billions of
operating years).

> Now deciding whether you really need a second flop depends
> on requirements for MTBF and the amount of slack you can
> give between the first flop and all of its loads. At a low
> clock frequency it's likely that you can ensure enough slack
> that you don't need the second flop to meet the MTBF requirements.
> A slower clock also means that you add more delay by inserting
> another flop. If latency is an issue, you probably don't want
> to do that. It's a bit counterintuitive, but in this case you
> could actually improve MTBF without adding delay by using a
> second flop on the opposite clock edge, assuming you can meet
> timing to subsequent flops in 1/2 clock period.

I don't agree. You are better off with one looooong period than two
short ones. Part of the reason is that the added FF has delays that
subtract from the slack time, but more importantly the exponential term
is, er... exponential, not just multiplicative, while having two delays
is just additive: e^2n >> 2*e^n.

-- Rick
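[For readers following this exchange, the synchronizer MTBF model both posters are implicitly using is commonly written as follows. The formula is the standard one; the symbol definitions here are supplied for this example and the constants T0 and tau are device-dependent numbers taken from vendor characterization data, not from anything in the thread:

    MTBF = e^(t_slack / tau) / (T0 * f_clk * f_data)

where t_slack is the settling time left over after clock-to-out, routing and setup are subtracted from the clock period, tau is the flip-flop's resolution time constant, and T0, f_clk and f_data set how often the metastability window gets hit. Because t_slack sits in the exponent, whether an extra synchronizer stage helps comes down to how much usable slack it adds versus the clock-to-out and setup time it consumes out of each period, which is exactly what is being argued above.]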
On 12/11/2014 6:59 PM, Stef wrote:
> On 2014-12-10 hvo wrote in comp.arch.fpga:
>> Hello,
>>
>> I know this topic is beaten to death but I am a bit unclear on some things.
>>
>> I've recently encountered metastability issues that caused my FPGA to do
>> unpredictable things. Someone suggested that I synchronize my inputs to
>> the clock domain and that seemed to solve the issue. Googling this topic
>> showed that a two stage Flip Flop is sufficient to increase MTBF for
>> metastability. My question is do I need to do this for all input signals?
>> How would one do this with a design containing 30 to 40 input signals?
>> Which types of inputs can I get away with not using two stage FF?
>
> But why would you want to save a few FFs? Are you running short? Or is
> the added delay too much for you? Adding the FFs can also help the tools
> place your design more easily.
>
> Okay, my current design is in a luxury position. I needed the Xilinx
> Zynq for its dual-core processor and a little bit of FPGA. I'm left
> with a huge amount of unused FFs even after using triple synchronizers
> on all inputs.

Triple!!! You are aware that using synchronizers in odd numbers puts the
metastability back into the circuit, right?

April Fools! Er, December Fools.

-- Rick
On 12/11/2014 5:21 PM, Rick C. Hodgin wrote:
> Tim Wescott wrote:
>> "Support" as in it will let you power a GPIO
>> block from 5V? Or does it just mean that a
>> 3.3V GPIO pin is 5V tolerant?
>
> It says you can assign 5V to each pin, which I
> assume means true I/O. I would use externally
> regulated 5V power supply, as from an old case,
> for all CPU Vcc inputs. The GPIO would only
> power data and a handful of switches.
>
>> If the latter, then you need to make sure that
>> the '386 has TTL-level inputs (V_HI < 3V or so).
>
> Definitely a 5V part circa 1986. Ceramic 16 MHz
> package.

I suggest you dig harder on this issue. It has been a long time since
FPGAs were 5 volt tolerant. The only way I know to make the FPGA 5 volt
tolerant is to either use a current-limiting resistor on the input pin
(a very poor solution because the pin capacitance and the series
resistance create a low-pass filter that prevents any fast changes) or
level shifters, which come in many forms.

I use Quick Switch shifters on my current design. They are really just
analog switches with Vcc driven by 5 volts less an internal diode drop.
This puts the gate drive at a voltage such that nothing over about 3.3
volts will pass through the switch. So it clamps the 5 volt side to 3.3
volts when connecting to your FPGA I/Os. Since 5 volt TTL-type logic
levels are compatible with 3.3 volt TTL logic levels, all is good. The
delay through the switches is a lot less than the prop delay through a
logic gate, so they are faster too.

Check to see if they are using some type of voltage conversion devices
between the FPGA and the I/O pins. It would be very unusual if they do.
Otherwise the FPGA I/Os are designed for multiple voltage standards up
to 3.3 volts.

-- Rick
On Thu, 11 Dec 2014 14:21:20 -0800, Rick C. Hodgin wrote:
> Tim Wescott wrote:
>> "Support" as in it will let you power a GPIO block from 5V? Or does it
>> just mean that a 3.3V GPIO pin is 5V tolerant?
>
> It says you can assign 5V to each pin, which I assume means true I/O. I
> would use externally regulated 5V power supply, as from an old case,
> for all CPU Vcc inputs. The GPIO would only power data and a handful of
> switches.

You're not answering my question. "Supports 5V" can mean one of two
things, both of which meanings I've stated. "Assign 5V to each pin"
essentially carries the same two meanings.

Can you power the FPGA GPIO from 5V, so that you get 5V out? Or can it
just stand to get 5V on an input without the FPGA operating incorrectly?

>> If the latter, then you need to make sure that the '386 has TTL-level
>> inputs (V_HI < 3V or so).
>
> Definitely a 5V part circa 1986. Ceramic 16 MHz package.

Great. Wonderful. 5V logic comes in a lot of different flavors. For the
purposes of our discussion, you can break it into the set of all parts
that need a full 5V on a logic input to correctly interpret the answer as
a '1', and parts that can get by with 3.3V. So which is it?

--
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
On 12/12/2014 2:20 AM, Tim Wescott wrote:
> On Thu, 11 Dec 2014 14:21:20 -0800, Rick C. Hodgin wrote:
>> Tim Wescott wrote:
>>> "Support" as in it will let you power a GPIO block from 5V? Or does it
>>> just mean that a 3.3V GPIO pin is 5V tolerant?
>>
>> It says you can assign 5V to each pin, which I assume means true I/O. I
>> would use externally regulated 5V power supply, as from an old case,
>> for all CPU Vcc inputs. The GPIO would only power data and a handful of
>> switches.
>
> You're not answering my question. "Supports 5V" can mean one of two
> things, both of which meanings I've stated. "Assign 5V to each pin"
> essentially carries the same two meanings.
>
> Can you power the FPGA GPIO from 5V, so that you get 5V out? Or can it
> just stand to get 5V on an input without the FPGA operating incorrectly?

Not directly from the FPGA. He said he is using the "Cyclone V GX
Starter Kit". This board puts a 47 ohm resistor in series with the I/Os
to the header, along with Schottky diodes to 3.3 volts and ground. If
this is driven from 5 volt logic it may pop the diodes, and the 47 ohm
resistor won't protect the FPGA inputs. This board is *not* 5 volt
tolerant.

>>> If the latter, then you need to make sure that the '386 has TTL-level
>>> inputs (V_HI < 3V or so).
>>
>> Definitely a 5V part circa 1986. Ceramic 16 MHz package.
>
> Great. Wonderful. 5V logic comes in a lot of different flavors. For the
> purposes of our discussion, you can break it into the set of all parts
> that need a full 5V on a logic input to correctly interpret the answer as
> a '1', and parts that can get by with 3.3V. So which is it?

If he is going to use the 386 he has to build a board anyway. So that
board can contain 5 volt to 3.3 volt converters to make it all work
properly.

Interesting that the starter kit has Arduino-compatible headers on it.
I wonder if anyone has built an Arduino-compatible CPU board with a
386, lol.

-- Rick
Rick C. Hodgin <rick.c.hodgin@gmail.com> wrote:
> This is where I'm at right now.
>
> Am I able to do everything I need from Linux? Or do I need to set up a
> Windows partition?

Linux is fine. In fact, Linux is better than Windows in some ways.

> I tried to run the Control Panel software which came with the board in
> WINE. It launched, but said I needed to have Quartus installed for it
> to work properly. I don't particularly want to install Quartus for
> Windows in WINE as it's pretty hefty. :-)

The System Builder software just generates some template Verilog files
that can go into Quartus. I usually run it in WINE when it's necessary.
Generally you just use that software to make a template once, and then
everything happens in Quartus, so you can forget about the System Builder.

The Control Panel is just a 'click a Windows button, and oh look, the LED
lights' kind of thing - handy for checking the board you bought works, but
not that useful beyond your first 5 minutes.

> Quartus for Linux is working just fine, as is Qsys in Linux.
>
> The "NIOS-II bare-metal software world" you mention ... what is it?
> And why Eclipse? Can I use Netbeans? :-)

NIOS-II Eclipse is a (very old) version of Eclipse with some of the NIOS
tools integrated. That means there are menu options for making sample
projects, regenerating drivers, etc. It's quite handy as a means of
learning what the tools do. Now I understand how it works I generally
drive all of that from Makefiles, because there's less IDE magic in
between me and the build process. So I'd suggest using Eclipse just to
get familiar with the tools; then you can do what you want after.

> And how does the C compiler enter into it? And what is the BSP for
> the FPGA hardware? I was able to install my board in Quartus in Linux,
> and Qsys was able to prepare for it.

A lot of the software-land tools assume you're going to put a NIOS-II
processor on your FPGA:

  Qsys (the network-on-chip builder) ->
  BSP builder (generates a pile of C code drivers for the Altera IP on
  your FPGA) ->
  NIOS II C compiler ->
  download ELF to FPGA ->
  terminal to monitor your software

If you're just doing FPGA-hard-stuff you don't have any software and you
can ignore this. If you don't have a NIOS, the BSP, C and ELF steps are
irrelevant; however it's worth understanding the tool flow... the best
revolutionary understands the system before they undermine it ;)

The NIOS environment is also handy as a reasonably lightweight processor
to have on hand when you want to do things that are easy in software and
a pain in hardware (like 'printf'). This comes up surprisingly often -
it's just another tool in the box.

> I've found a few Altera videos on getting started, but the host goes
> through several settings too quickly and does not go into details as
> to why he's chosen what he's chosen.

I'd start with the demos that come with your board: test them, then see
if you can rebuild them and they still work, then extend them. In the
FPGA world there are so many finicky details that, if you get them wrong,
will stop your project working - so it's worth starting with an existing
working project and building on that.

Theo
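[To make the "bare-metal software world" above concrete, here is a minimal sketch of the kind of program that flow produces. It assumes a BSP generated from a Qsys system whose stdout is a JTAG UART, so the printf() output shows up in nios2-terminal as Theo describes; the message text is obviously just an example.]

/* hello_nios.c - built with the NIOS-II GCC toolchain against the
 * BSP's newlib, downloaded as an ELF, and watched with nios2-terminal. */
#include <stdio.h>

int main(void)
{
    /* stdout is routed to the JTAG UART by the BSP, so this appears
     * on the PC over the same USB-Blaster cable used for programming. */
    printf("Hello from the NIOS-II on the FPGA\n");

    /* A real bare-metal application would typically go on to poke at
     * peripherals through the BSP-generated headers (system.h etc.)
     * and then loop forever rather than return. */
    return 0;
}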
Rick C. Hodgin <rick.c.hodgin@gmail.com> wrote:
> And finally:
> (5) Begin working on my LibSF 386-x40 CPU in Oppie-1 through Oppie-6 stages.

On that one, I'd strongly suggest a preliminary: write or otherwise
obtain a test suite. This has been absolutely critical to getting our
CPU off the ground and keeping it there. When you have a test suite,
run it against each commit of your CPU code. Particularly when you get
to more complex situations (MMU, interrupts, exceptions, concurrency,
booting a real OS), the test suite becomes invaluable for stamping out
bugs in your design.

Theo
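[A minimal, self-contained sketch of the self-checking style of test being advocated here. The "CPU" is a deliberately trivial stand-in invented for this example (a hard-wired two-instruction accumulator machine), not anything from the LibSF/Oppie design; the point is the automatic pass/fail check that can be run against every commit.]

`timescale 1ns/1ps

module tiny_cpu (
    input  wire       clk,
    input  wire       rst,
    output reg  [7:0] acc
);
    reg [1:0] pc;
    always @(posedge clk) begin
        if (rst) begin
            acc <= 8'd0;
            pc  <= 2'd0;
        end else begin
            case (pc)                     // hard-wired "program": acc = 2 + 3
                2'd0:    acc <= 8'd2;           // LOAD 2
                2'd1:    acc <= acc + 8'd3;     // ADD 3
                default: ;                      // HALT: hold state
            endcase
            if (pc != 2'd2)
                pc <= pc + 2'd1;
        end
    end
endmodule

module tb_tiny_cpu;
    reg clk = 1'b0, rst = 1'b1;
    wire [7:0] acc;

    tiny_cpu dut (.clk(clk), .rst(rst), .acc(acc));

    always #10 clk = ~clk;                // 50 MHz clock

    initial begin
        repeat (2) @(negedge clk);        // hold reset for two cycles,
        rst = 1'b0;                       // release away from the clock edge
        repeat (10) @(posedge clk);       // plenty of time to run the program

        if (acc !== 8'd5) begin
            $display("FAIL: acc = %0d, expected 5", acc);
            $stop;
        end
        $display("PASS");
        $finish;
    end
endmodule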
On Fri, 12 Dec 2014 03:13:50 -0500, rickman wrote:
> On 12/12/2014 2:20 AM, Tim Wescott wrote:
>> On Thu, 11 Dec 2014 14:21:20 -0800, Rick C. Hodgin wrote:
>>> Tim Wescott wrote:
>>>> "Support" as in it will let you power a GPIO block from 5V? Or does
>>>> it just mean that a 3.3V GPIO pin is 5V tolerant?
>>>
>>> It says you can assign 5V to each pin, which I assume means true I/O.
>>> I would use externally regulated 5V power supply, as from an old case,
>>> for all CPU Vcc inputs. The GPIO would only power data and a handful
>>> of switches.
>>
>> You're not answering my question. "Supports 5V" can mean one of two
>> things, both of which meanings I've stated. "Assign 5V to each pin"
>> essentially carries the same two meanings.
>>
>> Can you power the FPGA GPIO from 5V, so that you get 5V out? Or can it
>> just stand to get 5V on an input without the FPGA operating
>> incorrectly?
>
> Not directly from the FPGA. He said he is using the "Cyclone V GX
> Starter Kit". This board puts a 47 ohm resistor in series with the I/Os
> to the header, along with Schottky diodes to 3.3 volts and ground. If
> this is driven from 5 volt logic it may pop the diodes, and the 47 ohm
> resistor won't protect the FPGA inputs. This board is *not* 5 volt
> tolerant.
>
>>>> If the latter, then you need to make sure that the '386 has TTL-level
>>>> inputs (V_HI < 3V or so).
>>>
>>> Definitely a 5V part circa 1986. Ceramic 16 MHz package.
>>
>> Great. Wonderful. 5V logic comes in a lot of different flavors. For
>> the purposes of our discussion, you can break it into the set of all
>> parts that need a full 5V on a logic input to correctly interpret the
>> answer as a '1', and parts that can get by with 3.3V. So which is it?
>
> If he is going to use the 386 he has to build a board anyway. So that
> board can contain 5 volt to 3.3 volt converters to make it all work
> properly.

I think the OP needs to learn a bit more about the wonderful world of
logic levels (and Rick H. -- feel free to ask).

My inclination is that if the guy needs to make a board anyway, he may as
well put the FPGA on it, and not mess around trying to make a board to
stick onto the starter kit. I don't know enough about the parts involved
to know if, at that point, he can get away with putting them together
without level translators or not. I suspect not -- I'll bet that 386 has
5V CMOS-level inputs, and that the Altera part can't be run with 5V on the
I/O. But I am NOT going to take the time to dig through data sheets --
the OP can do that as part of his education in digital hardware design.

--
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
Rick C. Hodgin <rick.c.hodgin@gmail.com> wrote:
> Is there a way to monitor signals in existing
> wires? For example, with an oscilloscope and
> probe I can watch voltage changes. Is there a
> standard way to connect to an existing, working
> device, and monitor and record its switching
> over time? Such would seem to be desirable for
> peeking at proprietary "wake up" chirps, and to
> monitor device communications to establish its
> protocol interface.

Xilinx has ChipScope and Altera has SignalTap. These are logic analysers
that you can put inside the FPGA and attach to signals to monitor the
state of your design. There are some caveats:

  Typically changing the probe state requires a recompile of your design
  (which can take hours).

  The amount of state you can record is limited by the internal memory
  in your device.

But it's still far better than trying to route signals outside and using
a real logic analyser. If you want better visibility and quicker
turnaround you'll have to run things in simulation rather than on the
FPGA.

Theo
On Friday, December 12, 2014 2:20:35 AM UTC-5, Tim Wescott wrote:
> On Thu, 11 Dec 2014 14:21:20 -0800, Rick C. Hodgin wrote:
> > It says you can assign 5V to each pin, which I assume means true I/O. I
> > would use externally regulated 5V power supply, as from an old case,
> > for all CPU Vcc inputs. The GPIO would only power data and a handful of
> > switches.
>
> You're not answering my question. "Supports 5V" can mean one of two
> things, both of which meanings I've stated. "Assign 5V to each pin"
> essentially carries the same two meanings.

The Starter Kit says 5V, 1 amp max. I assume that means drive current,
if that's the correct phrase. :-)

> Can you power the FPGA GPIO from 5V, so that you get 5V out? Or can it
> just stand to get 5V on an input without the FPGA operating incorrectly?

I don't know. :-) So far as my thinking goes: I assign a meaning to some
of the GPIO pins, and whatever magic needs to happen under the hood to
get that particular pin to work at 3.3V or 5V happens without me having
to worry about it at this level.

> > Definitely a 5V part circa 1986. Ceramic 16 MHz package.
>
> Great. Wonderful. 5V logic comes in a lot of different flavors. For the
> purposes of our discussion, you can break it into the set of all parts
> that need a full 5V on a logic input to correctly interpret the answer as
> a '1', and parts that can get by with 3.3V. So which is it?

I don't know. I looked at the data sheet on the 1986 80386DX, and it
showed a number of pins being used which indicated 5V. I assumed the
entire chip was 5V because I later remember receiving a Pentium-233 chip
that was labeled 3.3V explicitly on the case, which made me think that
this later chip was something new at that voltage. But I could be wrong
on all counts.

Best regards,
Rick C. Hodgin
Tim Wescott <seemywebsite@myfooter.really> wrote:
> I think the OP needs to learn a bit more about the wonderful world of
> logic levels (and Rick H. -- feel free to ask).

My feeling would be to try and avoid using 5V parts. For instance, the
Intel Quark X1000 is a 400 MHz Pentium-class CPU being aimed at
microcontroller applications. It probably isn't quite as easy to manage
(it's a BGA package, not a PLCC) but is rather more modern. The Quark
itself is a bit of a camel, but it's the modern equivalent of a 386. The
alternative would be an Atom, VIA or some other 'embedded' x86 processor.
All of these might be harder to handle than the slower 386, though.

The Quark comes on the Intel Galileo board, which can be connected to the
Cyclone V GX via PCIe, which is one way to join them.

> My inclination is that if the guy needs to make a board anyway, he may as
> well put the FPGA on it, and not mess around trying to make a board to
> stick onto the starter kit. I don't know enough about the parts involved
> to know if, at that point, he can get away with putting them together
> without level translators or not. I suspect not -- I'll bet that 386 has
> 5V CMOS-level inputs, and that the Altera part can't be run with 5V on the
> I/O. But I am NOT going to take the time to dig through data sheets --
> the OP can do that as part of his education in digital hardware design.

I'd suggest the OP avoid making an FPGA board: a 6/8-layer board with a
ton of low-voltage, high-current FPGA power supplies, BGA escapes, signal
integrity etc. is a somewhat more complex endeavour than a basic-ish
2/4-layer board to hook into an expansion header on the existing board.
Not least because you can design the latter in a free/cheap tool, you
don't have to pay for fancy PCB tools, and you can get it fabbed anywhere.
I very much doubt it'll be possible without level translators, which
doubles the complexity of the add-on board.

Theo
On Friday, December 12, 2014 2:13:15 PM UTC-5, Tim Wescott wrote:
> I think the OP needs to learn a bit more about the wonderful world of
> logic levels (and Rick H. -- feel free to ask).

I have no plans to make a board at the present time. Just some
interfacing cabling between the Altera Cyclone V GX Starter Kit I
purchased and the Ethernet board I bought. The rest I plan to do in the
native kit.

Eventually (whenever I get the CPU designed and working), I would like
to see about getting an actual device manufactured with its own real
hardware. And ultimately, I would like to even create my own CPUs from
raw silicon (epitaxial processes up through packaging). Big dream
though. :-)

> My inclination is that if the guy needs to make a board anyway, he may as
> well put the FPGA on it, and not mess around trying to make a board to
> stick onto the starter kit. I don't know enough about the parts involved
> to know if, at that point, he can get away with putting them together
> without level translators or not. I suspect not -- I'll bet that 386 has
> 5V CMOS-level inputs, and that the Altera part can't be run with 5V on the
> I/O. But I am NOT going to take the time to dig through data sheets --
> the OP can do that as part of his education in digital hardware design.

The whole idea of an FPGA feeding an 80386 was me thinking about whether
or not such a thing were possible. I do not plan to do it. Once I began
thinking about it I saw that the 80386 used 5V, and that I had only
remembered the Altera documentation saying 1.1V(?) to 3.3V. I did not
remember where it said 5V anywhere.

Best regards,
Rick C. Hodgin
rickman wrote:
> On 12/11/2014 9:08 AM, GaborSzakacs wrote:

[snip]

> I agree with everything you've said up to this point.
>
>> Now we come to why you want a second flip-flop. A metastable
>> event has the effect of increasing the clock-to-output timing
>> of the first flip-flop. There is theoretically no upper bound
>> on the amount of time that the event can last, but the chances
>> of the event lasting any particular length of time go down
>> *very* quickly as the length of time goes up. In real world
>> applications, there are secondary processes (mostly system
>> noise) that "help" an event to end, in a way similar to a
>> coin standing on edge on a bar where there are a lot of
>> patrons picking up and setting down mugs. In any case you
>> can see that you want "slack" time in the path from this
>> first flip-flop to all other synchronous elements.
>
> This all sounds right except the part about the noise. Is there a way
> to prove that? I have never heard an analysis that proves that noise
> has any real impact on the rate of resolution of metastability. I'm not
> saying this is wrong, I just haven't seen anything like a proof.

No, I don't have proof, but it stands to reason that noise would help
to give a finite upper bound to a metastable event, even if it is still
only cutting off a statistically small tail.

>> The second flop is an easy way to ensure the ease of adding a
>> lot of slack in the path. However it has a secondary impact
>> on the chance of failure. When the first flop has an event
>> that increases its time such that all subsequent flops no
>> longer meet setup requirements, your circuit will fail. With
>> the second flip-flop in place, instead of having an upper
>> bound after which the circuit will fail, what you need for
>> failure is an event that causes the second flip-flop to go
>> metastable. This means that instead of the probability of
>> an event being greater than "x" you now are looking at the
>> probability of an event being exactly "x" +/- something
>> very small. So even if the first path doesn't have the
>> slack to prevent a metastable event from violating the
>> setup/hold of the second flop, the system won't actually
>> fail unless the event is within a very small range. This
>> dramatically improves the MTBF.
>
> I don't follow this at all. The same reasoning applies to one case as
> the other, and any metastability resolution time of the first FF greater
> than "x" will make the second FF go metastable. However, note the second
> FF also has a chance of resolving before time "y", before impacting
> functional circuitry. The problem is mitigated both by the fact that
> both FFs have to persist in the metastable state and by the fact that
> the slack time for the "empty" path is typically *much* longer than
> needed to resolve the metastable event and *not* propagate it to the
> second FF, other than as an exceedingly rare event (billions of
> operating years).

No. The second flip-flop has the same sort of metastable window as the
first. If the first flop misses that window because the metastability
was longer, then the second flop will resolve on the following clock
cycle. I think you may be under the misapprehension that the metastable
state means that the first flop is outputting a "1/2" rather than a "0"
or "1" logic level, and any sampling during that time would cause
metastability in the second flop. In fact that's not the case.

>> Now deciding whether you really need a second flop depends
>> on requirements for MTBF and the amount of slack you can
>> give between the first flop and all of its loads. At a low
>> clock frequency it's likely that you can ensure enough slack
>> that you don't need the second flop to meet the MTBF requirements.
>> A slower clock also means that you add more delay by inserting
>> another flop. If latency is an issue, you probably don't want
>> to do that. It's a bit counterintuitive, but in this case you
>> could actually improve MTBF without adding delay by using a
>> second flop on the opposite clock edge, assuming you can meet
>> timing to subsequent flops in 1/2 clock period.
>
> I don't agree. You are better off with one looooong period than two
> short ones. Part of the reason is that the added FF has delays that
> subtract from the slack time, but more importantly the exponential term
> is, er... exponential, not just multiplicative, while having two delays
> is just additive: e^2n >> 2*e^n.

Your reasoning only follows if you still believe that the second flop
goes metastable when the first event lasts any amount greater than "x".

-- Gabor
Rick C. Hodgin wrote:
> On Friday, December 12, 2014 2:20:35 AM UTC-5, Tim Wescott wrote:
>> On Thu, 11 Dec 2014 14:21:20 -0800, Rick C. Hodgin wrote:
>>> It says you can assign 5V to each pin, which I assume means true I/O. I
>>> would use externally regulated 5V power supply, as from an old case,
>>> for all CPU Vcc inputs. The GPIO would only power data and a handful of
>>> switches.
>>
>> You're not answering my question. "Supports 5V" can mean one of two
>> things, both of which meanings I've stated. "Assign 5V to each pin"
>> essentially carries the same two meanings.
>
> The Starter Kit says 5V, 1 amp max. I assume that means drive current,
> if that's the correct phrase. :-)

That's pretty clearly the input power to the board. You can be very
sure that there are a number of DC-DC point-of-load converters to
make lower voltages from this, and the 5V itself is unlikely to power
any other board-level component directly.

-- Gabor
On Friday, December 12, 2014 2:50:16 PM UTC-5, Gabor wrote:
> Rick C. Hodgin wrote:
>> On Friday, December 12, 2014 2:20:35 AM UTC-5, Tim Wescott wrote:
>>> On Thu, 11 Dec 2014 14:21:20 -0800, Rick C. Hodgin wrote:
>>>> It says you can assign 5V to each pin, which I assume means true I/O. I
>>>> would use externally regulated 5V power supply, as from an old case,
>>>> for all CPU Vcc inputs. The GPIO would only power data and a handful of
>>>> switches.
>>> You're not answering my question. "Supports 5V" can mean one of two
>>> things, both of which meanings I've stated. "Assign 5V to each pin"
>>> essentially carries the same two meanings.
>> The Starter Kit says 5V, 1 amp max. I assume that means drive current,
>> if that's the correct phrase. :-)
>
> That's pretty clearly the input power to the board. You can be very
> sure that there are a number of DC-DC point-of-load converters to
> make lower voltages from this, and the 5V itself is unlikely to power
> any other board-level component directly.

If you go here:
http://www.terasic.com.tw/cgi-bin/page/archive.pl?Language=English&CategoryNo=167&No=830&PartNo=4

And download the C5G User Manual, and look on page 52, you'll find this
quote:

-----[ Start ]-----
The 40-pin header connects directly to 36 pins of the Cyclone V GX FPGA,
and also provides DC +5V (VCC5), DC +3.3V (VCC3) and two GND pins.
Figure 3-25 shows the I/O distribution of the GPIO connector. The
maximum power consumption of the daughter card that connects to the GPIO
port is shown in Table 3-18:

Table 3-18: Power Supply of the Expansion Header
Supplied Voltage    Max Current Limit
5V                  1A
3.3V                1.5A
-----[ End ]-----

Best regards,
Rick C. Hodgin
Rick C. Hodgin wrote:
> On Friday, December 12, 2014 2:50:16 PM UTC-5, Gabor wrote:
>> Rick C. Hodgin wrote:

[snip]

>> That's pretty clearly the input power to the board. You can be very
>> sure that there are a number of DC-DC point-of-load converters to
>> make lower voltages from this, and the 5V itself is unlikely to power
>> any other board-level component directly.
>
> If you go here:
> http://www.terasic.com.tw/cgi-bin/page/archive.pl?Language=English&CategoryNo=167&No=830&PartNo=4
>
> And download the C5G User Manual, and look on page 52, you'll find this
> quote:
>
> -----[ Start ]-----
> The 40-pin header connects directly to 36 pins of the Cyclone V GX FPGA,
> and also provides DC +5V (VCC5), DC +3.3V (VCC3) and two GND pins.
> Figure 3-25 shows the I/O distribution of the GPIO connector. The
> maximum power consumption of the daughter card that connects to the GPIO
> port is shown in Table 3-18:
>
> Table 3-18: Power Supply of the Expansion Header
> Supplied Voltage    Max Current Limit
> 5V                  1A
> 3.3V                1.5A
> -----[ End ]-----
>
> Best regards,
> Rick C. Hodgin

Fine, then it's a power supply output. It makes no implication that the
I/O pins can drive to 5V, only that 5V is supplied to power your add-on
module. While you could use that to power a 5V component like your
antique CPU, it was probably intended as a source for circuitry or
additional supplies that don't directly connect to the I/O pins. For
example you might want to add a USB interface which requires 5V for its
connector, even though any modern PHY chip will run at 3.3V for all
logic I/O.

By the way, the I/O pins won't drive at 1A either.

-- Gabor
On 12/12/2014 2:45 PM, GaborSzakacs wrote:
> rickman wrote:
>> On 12/11/2014 9:08 AM, GaborSzakacs wrote:

[snip]

>>> Now we come to why you want a second flip-flop. A metastable
>>> event has the effect of increasing the clock-to-output timing
>>> of the first flip-flop. [...] In real world
>>> applications, there are secondary processes (mostly system
>>> noise) that "help" an event to end [...]
>>
>> This all sounds right except the part about the noise. Is there a way
>> to prove that? I have never heard an analysis that proves that noise
>> has any real impact on the rate of resolution of metastability. I'm
>> not saying this is wrong, I just haven't seen anything like a proof.
>
> No, I don't have proof, but it stands to reason that noise would help
> to give a finite upper bound to a metastable event, even if it is still
> only cutting off a statistically small tail.

That is why I asked. I don't agree that "it stands to reason". I would
at least like to hear a mechanism proposed, if not proof, which could be
in the form of measurement if nothing else.

>>> The second flop is an easy way to ensure the ease of adding a
>>> lot of slack in the path. [...] So even if the first path doesn't
>>> have the slack to prevent a metastable event from violating the
>>> setup/hold of the second flop, the system won't actually
>>> fail unless the event is within a very small range. This
>>> dramatically improves the MTBF.
>>
>> I don't follow this at all. The same reasoning applies to one case as
>> the other, and any metastability resolution time of the first FF
>> greater than "x" will make the second FF go metastable. [...]
>
> No. The second flip-flop has the same sort of metastable window as the
> first. If the first flop misses that window because the metastability
> was longer, then the second flop will resolve on the following clock
> cycle. I think you may be under the misapprehension that the metastable
> state means that the first flop is outputting a "1/2" rather than a "0"
> or "1" logic level, and any sampling during that time would cause
> metastability in the second flop. In fact that's not the case.

I don't think that is correct. A metastable event can create all sorts
of problems on the output, including oscillations and indeterminate
levels. These can produce metastability in the second stage without
having to hit a bullet with a bullet.

>>> Now deciding whether you really need a second flop depends
>>> on requirements for MTBF and the amount of slack you can
>>> give between the first flop and all of its loads. [...] It's a bit
>>> counterintuitive, but in this case you could actually improve MTBF
>>> without adding delay by using a second flop on the opposite clock
>>> edge, assuming you can meet timing to subsequent flops in 1/2 clock
>>> period.
>>
>> I don't agree. You are better off with one looooong period than two
>> short ones. [...] e^2n >> 2*e^n.
>
> Your reasoning only follows if you still believe that the second flop
> goes metastable when the first event lasts any amount greater than "x".

Correct. But regardless, you are ignoring the impact of the longer time
period on the exponential. It is *ginormous*.

-- Rick
rickman wrote:
> On 12/12/2014 2:45 PM, GaborSzakacs wrote:
>> rickman wrote:
>>> On 12/11/2014 9:08 AM, GaborSzakacs wrote:
>>
>> [snip]
>>
>> No. The second flip-flop has the same sort of metastable window as the
>> first. If the first flop misses that window because the metastability
>> was longer, then the second flop will resolve on the following clock
>> cycle. I think you may be under the misapprehension that the metastable
>> state means that the first flop is outputting a "1/2" rather than a "0"
>> or "1" logic level, and any sampling during that time would cause
>> metastability in the second flop. In fact that's not the case.
>
> I don't think that is correct. A metastable event can create all sorts
> of problems on the output, including oscillations and indeterminate
> levels. These can produce metastability in the second stage without
> having to hit a bullet with a bullet.

Think again. A flip-flop has positive feedback gain. Oscillation is
definitely not a possibility. Somewhere inside the flop you could sit
at a threshold voltage for a while, but once you start to resolve, the
swing will be monotonic. The next flop doesn't get fed directly by the
node sitting at its threshold, but from a buffered copy. You'd need a
buffer that oscillates when its input sits near a threshold for a
nanosecond or two. You won't find anything like that in an FPGA.

-- Gabor
On 12/12/2014 5:05 PM, GaborSzakacs wrote:
> rickman wrote:
>> On 12/12/2014 2:45 PM, GaborSzakacs wrote:
>>> rickman wrote:
>>>> On 12/11/2014 9:08 AM, GaborSzakacs wrote:
>>>
>>> [snip]
>>>
>>> No. The second flip-flop has the same sort of metastable window as the
>>> first. If the first flop misses that window because the metastability
>>> was longer, then the second flop will resolve on the following clock
>>> cycle. I think you may be under the misapprehension that the metastable
>>> state means that the first flop is outputting a "1/2" rather than a "0"
>>> or "1" logic level, and any sampling during that time would cause
>>> metastability in the second flop. In fact that's not the case.
>>
>> I don't think that is correct. A metastable event can create all sorts
>> of problems on the output, including oscillations and indeterminate
>> levels. These can produce metastability in the second stage without
>> having to hit a bullet with a bullet.
>
> Think again. A flip-flop has positive feedback gain. Oscillation is
> definitely not a possibility. Somewhere inside the flop you could sit
> at a threshold voltage for a while, but once you start to resolve, the
> swing will be monotonic. The next flop doesn't get fed directly by the
> node sitting at its threshold, but from a buffered copy. You'd need a
> buffer that oscillates when its input sits near a threshold for a
> nanosecond or two. You won't find anything like that in an FPGA.

There is some 40 years of experience and documentation showing the
effects of metastability. Please do a little research on the topic.

-- Rick