On Apr 12, 3:31 am, glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:
> rickman <gnu...@gmail.com> wrote:
> > On Apr 11, 9:16 pm, glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:
> >> rickman <gnu...@gmail.com> wrote:
> >> > I guess the part I'm unclear on is whether this is truly significant in a typical design. If the branching is not a large part of a design or if the branching is only in the control logic and not the data paths I would expect the difference to be small or negligible.
> >> Well, there isn't that much you can do about it.
> > I'm not sure what you mean. The point is that if you need to reduce power in your design and it has certain features, this may be a useful technique for reducing the power.
>
> Say you have an XOR gate where the two inputs come from different FF's clocked on the same clock, but through different paths. If you can make those two paths the same length, then there won't be extra transitions.
>
> Now, with ASIC logic you get a lot of control over the wiring, and could work to get path lengths equal. With FPGAs, you don't have so much control. Even more, the routing paths are often buffered, even though you don't see them.

I'm not saying this is not a good idea, but it is not what I was suggesting. Controlling the delay of paths so that edges line up would be pretty hard to do, in my opinion. I expect most devices focus on just getting the design routed.

> But often the circuits are designed for high speed, and very rarely low power. I do remember the 74L series TTL, slower and lower power. If you make the metal traces narrower (reduce capacitance) you increase the resistance, slowing down the signal. You can trade speed for power in many ways.

I'm not suggesting a change to chips. In fact, from what I see, most designs on most chips would get little or no benefit because of the high static power.
But on low power devices there may be some advantage to moving the registers to logic block inputs rather than the outputs. This may require more registers, but that shouldn't be an issue since FPGAs are so register rich.

BTW, when I say "move the registers", I don't mean changing the chip design. This is just a way of looking at a design and how you implement the clock enables, although it may require some register duplication.

> > I would like to work with the Silicon Blue devices to measure some power figures for a variety of designs I've done. When I get to that I may try this technique and see if it gives useful results.
>
> So they give you more control than the usual FPGA tools?

I'm not sure what you are thinking here. What sort of control are you looking for?

Rick

Article: 151476
On Apr 12, 2:44 am, backhus <goous...@googlemail.com> wrote:
> On 12 Apr., 00:24, rickman <gnu...@gmail.com> wrote:
> > In considering the nature of power consumption in FPGA devices it occurred to me to ask what components in the FPGA are responsible for most of the power. The candidates are clock trees, routing, LUT and misc logic and finally, FFs.
> >
> > In CMOS devices the power consumed comes from charging and discharging capacitance. So I would expect the clock trees with their constant toggling to be a likely candidate for the most power consumption. Second on my list is the routing since I would expect the capacitance to be significant. I expect the LUTs to be next but may be fairly close to the power consumed in the FFs.
> >
> > With that in mind, I think the typical way of reducing power by the use of clock enables on registers which are on the output of a logic block may not be optimal. This is the part that I have not fully analyzed, but I think it could be significant.
> >
> > When a register on the output of a logic block is enabled, routing and logic feeding the register inputs will have dissipated power, but after the clock, routing and logic fed by the output will also dissipate power regardless of whether the next register will be enabled on the next clock or not! In other words, the routing and logic can dissipate power just because the inputs to the logic are changing, even when that logic is not needed.
> >
> > If the registers are placed at the input to a function block, the routing and logic will only dissipate power when the registers are enabled, allowing the register outputs and the logic inputs to change. Why is this different from output registers? If your design is a linear pipeline then it is not different. But that is the exception. With branching and looping of logic flow an output can feed multiple other logic blocks. If multiple inputs to logic change at different times this will also increase dissipation. When those other logic blocks do not need this new data the power used in the routing and logic is wasted.
> >
> > I guess the part I'm unclear on is whether this is truly significant in a typical design. If the branching is not a large part of a design or if the branching is only in the control logic and not the data paths I would expect the difference to be small or negligible.
> >
> > I don't think I am the first person to think of this. Since this is not a part of vendors' recommendations I think it is a pretty good indicator that it is not a large enough factor to be useful. Has anyone seen an analysis on this?
> >
> > Rick
>
> Hi Rick,
> have you taken a look at Xilinx XPower Analyzer? For a given design it calculates the power consumption with regard to many variables. It's interesting to see the impact of these variables on the power consumption. Regardless of the brand, the tendency should be similar for other FPGAs too. And maybe other companies have similar tools too.
>
> Have a nice synthesis
> Eilert

One big difference between Xilinx and SiBlue is that the Xilinx parts have such enormous static power. They do have their CoolRunner CPLD series, but they are very limited in size and the prices go up very quickly with size. So I don't consider them to be practical for the sort of work where power is a major issue. Small designs will use small amounts of power anyway. The SiBlue parts are shown going as high as 16 kLUTs, which may not be large in a Xilinx sense, but it is big enough to use in a lot of apps.

Rick

Article: 151477
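The CMOS premise in the quoted post — that dynamic power goes into charging and discharging capacitance — is the usual P = alpha * C * V^2 * f relation. A quick illustrative calculation (all component values here are assumptions for the sake of the example, not figures from any datasheet):

```python
def dynamic_power(alpha, c_farads, v_volts, f_hz):
    """Average dynamic power of one capacitive node: P = alpha * C * V^2 * f,
    where alpha is the activity factor (transitions per clock cycle)."""
    return alpha * c_farads * v_volts ** 2 * f_hz

# Illustrative numbers: a 100 fF routing node at 1.2 V with a 32 MHz clock.
# A clock-tree node toggles every cycle (alpha = 1); a data route far less.
p_clock = dynamic_power(1.0, 100e-15, 1.2, 32e6)  # ~4.61 uW per node
p_data = dynamic_power(0.1, 100e-15, 1.2, 32e6)   # ~0.46 uW per node
print(f"clock node: {p_clock * 1e6:.2f} uW, data node: {p_data * 1e6:.2f} uW")
```

This is why the thread singles out the clock tree: at the same capacitance and voltage, its activity factor dominates everything else.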
On Apr 12, 3:08 am, hal-use...@ip-64-139-1-69.sjc.megapath.net (Hal Murray) wrote:
> In article <48a5df08-d39a-4c4e-bb4c-99b37bc62...@l18g2000yqm.googlegroups.com>, rickman <gnu...@gmail.com> writes:
> > In CMOS devices the power consumed comes from charging and discharging capacitance.
>
> That was true in the old days.
>
> With modern (really) thin oxide, you have to consider leakage currents.

We are not on the same page. I am tossing out devices like the monster Xilinx and Altera parts where your main concern is just getting enough power into the part to allow it to boot without glitching itself. The SiBlue devices have a static current in the low double-digit uA range. The dynamic current is single-digit mA for a chip filled with 16-bit counters running at 32 MHz. So clearly the dynamic power consumption is much more significant than the static for these devices.

Rick

Article: 151478
On Apr 12, 7:15 am, Marc Jet <jetm...@hotmail.com> wrote:
> Your thoughts about where the power goes seem mostly correct.
>
> My experience is the following. You can't do much about static power, except by playing with the external voltages. Then there is I/O consumption, like driving your outputs to the PCB, and hopefully no floating input pins. Sometimes you can tune those things. But once this is done, you have to live with the hardware.
>
> In software, the ONE major cause for current consumption is toggling interconnect.
>
> Each route in reality consists of buffers, tracks, receivers, etc. But the most important thing you can do to reduce the consumption of the route is:
>
> a) make the route "shorter", and/or
> b) make it toggle less often.
>
> Little else can be done, and therefore it's usually not necessary to go into the details about what elements physically make up the route.
>
> Clocks have a high toggle rate, and high fanout converts it into lots of routes used. Therefore the clock tree obviously is one of the major consumers (if not the top one). Unfortunately the clock tree is also one of the most difficult things to tune or get rid of. Slow clocks (instead of clock enables) for the slow portions of your design are probably the most fruitful thing to do with regard to clocks.
>
> Data routes, however, can often be influenced at the HDL level (by restructuring the design). Look at your design in the floorplanner and FPGA editor, and at the CLB description in the datasheet. Figure out a good way of mapping the functionality onto the available hardware. Express that in your HDL, and the tools will (to some degree) follow you through without explicit floorplanning or vendor specific primitives.
>
> The only other thing worth mentioning is LUT flicker. The different input signals each have distinct arrival times (stemming from their individual route delays). Each time an input arrives, the LUT output may toggle (depending on terms). If the LUT drives a "long" route, this causes high consumption. Several approaches can be used to reduce it. A simple one is to use few logic levels. Registering after the LUT is usually done in the same slice, thus a very short route with little consumption during flicker. It has a good net result for dense data with a high toggle rate, despite the bigger clock tree. Obviously, the opposite can be true for slow data.

Thanks for your comments, Marc. The issue I am addressing is what you call "LUT flicker". When an output register is not enabled there is no need for the inputs to be changing, which is what causes LUT flicker power consumption. By moving the registers to the inputs of a logic block, the inputs will only change when needed, reducing LUT flicker and routing power consumption.

Rick

Article: 151479
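Rick's input-register idea can be illustrated with a toy toggle-counting model. This is a behavioral sketch only — the 3-input function standing in for a LUT and the 5% enable rate are made-up assumptions — but it shows why inputs that hold still when the result won't be captured save switching activity:

```python
import random

random.seed(0)  # deterministic run

def lut(a, b, c):
    # An arbitrary 3-input function standing in for a LUT (assumption,
    # not any particular design's logic).
    return (a ^ b) & c

def count_toggles(outputs):
    """Number of 0->1 / 1->0 transitions in a signal trace."""
    return sum(1 for prev, cur in zip(outputs, outputs[1:]) if prev != cur)

cycles = 1000
enable = [random.random() < 0.05 for _ in range(cycles)]  # enabled ~5% of cycles
src = [(random.getrandbits(1), random.getrandbits(1), random.getrandbits(1))
       for _ in range(cycles)]

# Style 1: register on the LUT *output*. The LUT inputs follow the upstream
# signals every cycle, so the LUT and its routing flicker whether or not
# the output register will actually capture the result.
out_reg_lut = [lut(*s) for s in src]

# Style 2: registers on the LUT *inputs*, gated by the enable. The LUT
# inputs only change on enabled cycles, so the LUT is quiet otherwise.
held = src[0]
in_reg_lut = []
for en, s in zip(enable, src):
    if en:
        held = s
    in_reg_lut.append(lut(*held))

print("output-registered LUT toggles:", count_toggles(out_reg_lut))
print("input-registered LUT toggles: ", count_toggles(in_reg_lut))
```

With a rarely-asserted enable, the input-registered arrangement toggles the combinational node only a small fraction as often — which is the whole argument of the thread in miniature.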
rickman <gnuarm@gmail.com> wrote:
(snip, I wrote)
>> Now, with ASIC logic you get a lot of control over the wiring, and could work to get path lengths equal. With FPGAs, you don't have so much control. Even more, the routing paths are often buffered, even though you don't see them.

> I'm not saying this is not a good idea, but it is not what I was suggesting. Controlling the delay of paths so that edges line up would be pretty hard to do in my opinion. I expect most devices focus on just getting the design routed.

(snip)
> I'm not suggesting a change to chips. In fact, from what I see, most designs on most chips would get little or no benefit because of the high static power. But on low power devices there may be some advantage to moving the registers to logical block inputs rather than the outputs. This may require more registers, but that shouldn't be an issue since FPGAs are so register rich.

I believe the high leakage is still only on the high-end chips, which are not likely the ones you want for low power designs.

> BTW, when I say "move the registers", I don't mean changing the chip design. This is just a way of looking at a design and how you implement the clock enables although it may require some register duplication.

That is an interesting idea. I know I have seen the tools do strange things with registers. I believe I have seen combining registers when two had the same inputs and same clocks. That would make it harder to do what you say in the design, and have it stick all the way through. For this to make a large difference, you have to have a very large fraction of the signals registered at each LUT output, but systolic arrays do tend to do that.

-- glen

Article: 151480
On 04/11/2011 05:24 PM, rickman wrote:
> In considering the nature of power consumption in FPGA devices it occurred to me to ask what components in the FPGA are responsible for most of the power. The candidates are clock trees, routing, LUT and misc logic and finally, FFs.

I had a problem recently where I had extreme EMI from transitions of some CMOS chips, single-gate parts. When I measured the shoot-through transient, I was astounded that it was in the neighborhood of 2 A per chip! I switched to a different family, and it was reduced to the point that I couldn't make a valid measurement.

What I take away from that is that some digital designs are made with no consideration of dynamic current, and others take pains to reduce it. This, of course, is built into the chips, and you can't affect it much with the FPGA configuration. The dynamic loading of the FFs and LUTs is kind of at the mercy of the routing tools, unless you are going to hand-route (shudder!)

Presumably, clock enable won't affect this part of the dynamic power, as the FFs weren't going to change state either way, so you'd only save a little power in the FF itself.

Jon

Article: 151481
"rickman" <gnuarm@gmail.com> wrote in message news:48a5df08-d39a-4c4e-bb4c-99b37bc624b1@l18g2000yqm.googlegroups.com... > In considering the nature of power consumption in FPGA devices it > occurred to me to ask what components in the FPGA are responsible for > most of the power. The candidates are clock trees, routing, LUT and > misc logic and finally, FFs. > > In CMOS devices the power consumed comes from charging and discharging > capacitance. So I would expect the clock trees with their constant > toggling to be a likely candidate for the most power consumption. > Second on my list is the routing since I would expect the capacitance > to be significant. I expect the LUTs to be next but may be fairly > close to the power consumed in the FFs. > > With that in mind, I think the typical way of reducing power by the > use of clock enables on registers which are on the output of logic > block may not be optimal. This is the part that I have not fully > analyzed, but I think it could be significant. > > When a register on the output of a logic block is enabled, routing and > logic feeding the register inputs will have dissipated power but after > the clock, routing and logic fed by the output will also dissipate > power regardless whether the next register will be enabled on the next > clock or not! In other words, the routing and logic can dissipate > power just because the inputs to the logic are changing even when that > logic is not needed. > > If the registers are placed at the input to a function block the > routing and logic will only dissipate power when the registers are > enabled allowing the register outputs and the logic inputs to change. > Why is this different from output registers? If your design is a > linear pipeline then it is not different. But that is the exception. > With branching and looping of logic flow an output can feed multiple > other logic blocks. If multiple inputs to logic change at different > times this will also increase dissipation. 
When those other logic > blocks do not need this new data the power used in the routing and > logic is wasted. > > I guess the part I'm unclear on is whether this is truly significant > in a typical design. If the branching is not a large part of a design > or if the branching is only in the control logic and not the data > paths I would expect the difference to be small or negligible. > > I don't think I am the first person to think of this. Since this is > not a part of vendors recommendations I think it is a pretty good > indicator that it is not a large enough factor to be useful. Has > anyone seen an analysis on this? > > Rick As far as I know the clocks on FFs are not actually enabled, that is, the clocks are not gated on and off. The enable works by routing the Q of the FF back through a mux whose other input is the FFs D input. The 'enable' controls the mux. This gets round clock skew problems that would exist with a gated clock system. Therefore all FFs on a clock tree will permanently clock and no power saving can be made. PhilArticle: 151482
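Phil's description of the enable — a feedback mux rather than a gated clock — can be modeled in a few lines. This is a behavioral sketch, not vendor netlist detail: the clock always "ticks", and when the enable is low the FF simply recaptures its own Q:

```python
def ff_with_enable(q, d, en):
    """One clocked update of a flip-flop whose 'enable' is a feedback mux:
    the clock edge always occurs; when en is low the FF holds its own Q."""
    return d if en else q

q = 0
trace = []
for d, en in [(1, 0), (1, 1), (0, 0), (0, 1)]:
    q = ff_with_enable(q, d, en)
    trace.append(q)
print(trace)  # [0, 1, 1, 0] -- Q only follows D on enabled edges
```

The clock tree still toggles every cycle regardless of `en`, which is exactly why Phil argues the enable alone cannot save clock-tree power — only the FF's own state-change power is avoided.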
Some FPGAs have configurable clock trees that are only enabled in areas where they need to serve FFs. There are also some FPGAs with enableable clock buffers that can be used to safely gate off the clock to a (usually large) group of FFs instead of disabling their clock enables.

Andy

Article: 151483
Jon Elson <jmelson@wustl.edu> wrote:
> On 04/11/2011 05:24 PM, rickman wrote:
>> In considering the nature of power consumption in FPGA devices it occurred to me to ask what components in the FPGA are responsible for most of the power. The candidates are clock trees, routing, LUT and misc logic and finally, FFs.

> I had a problem recently where I had extreme EMI from transitions of some CMOS chips, single-gate parts. When I measured the shoot-through transient, I was astounded that it was in the neighborhood of 2 A per chip! I switched to a different family, and it was reduced to the point that I couldn't make a valid measurement.

It depends on where you set the thresholds on the FETs. If I understand one page that comes up with a Google search for "shoot through" cmos, it is easier on lower voltage processes, such that VthP + VthN can be more than Vdd.

> What I take away from that is that some digital designs are made with no consideration of dynamic current, and others take pains to reduce it.

I am not sure about it, but I believe that as you make the gate faster, which means that the driving transistor comes on sooner, it gets worse. Again, optimize for speed or power.

-- glen

Article: 151484
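Glen's threshold point can be stated directly: an inverter's NMOS and PMOS can only conduct simultaneously (shoot-through) if some input voltage turns both on at once, i.e. roughly when VthN + |VthP| is less than Vdd. A small idealized check (ignores subthreshold conduction; the voltages are illustrative):

```python
def shoot_through_possible(vdd, vthn, vthp_mag):
    """Both the NMOS and PMOS of a CMOS inverter can conduct at the same
    time during an input transition iff the input range
    (Vthn, Vdd - |Vthp|) is non-empty, i.e. Vthn + |Vthp| < Vdd."""
    return vthn + vthp_mag < vdd

print(shoot_through_possible(5.0, 0.7, 0.7))  # True  -- wide overlap at 5 V
print(shoot_through_possible(1.2, 0.7, 0.7))  # False -- thresholds sum past Vdd
```

This matches glen's observation that low-voltage processes can be built so the two thresholds sum past the rail, eliminating the overlap region that caused Jon's 2 A transient.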
Hey all --

I've got a client breathing down my neck, asking if there's anything that can be done to accelerate a design we're doing for them. The one piece of the design that's really going to take me some serious time is figuring out the PCI Express nonsense.

The target FPGA is an Arria II GX. We plan to implement an 8-lane, Gen 1 PCIe link over a cable back to the host processor. I was thinking of using the PLDA EZDMA core to try to speed the design up; they're hoping to have this entire thing up and running in weeks.

Does anyone have any experience with PCIe on the Arria II GX, either with the help of someone's expensive IP core or just going straight into Altera's hard IP block?

Thanks,
Rob

--
Rob Gaddi, Highland Technology
Email address is currently out of order

Article: 151485
On Apr 12, 9:21 pm, Rob Gaddi <rga...@technologyhighland.com> wrote:
> Hey all --
>
> I've got a client breathing down my neck, asking if there's anything that can be done to accelerate a design we're doing for them. The one piece of the design that's really going to take me some serious time is figuring out the PCI Express nonsense.
>
> The target FPGA is an Arria II GX. We plan to implement an 8 lane, Gen 1 PCIe link over a cable back to the host processor. I was thinking of using the PLDA EZDMA core to try to speed the design up; they're hoping to have this entire thing up and running in weeks.
>
> Does anyone have any experience with PCIe on the Arria II GX, either with the help of someone's expensive IP core or just going straight into Altera's hard IP block?
>
> Thanks,
> Rob

I've used the PLDA core on a Virtex 5 LXT, but as I understand it the user interface should be similar. My only comment is that the core itself is quite complex, and the documentation seemed to be written more for the core designers than for the core users. Their technical support is quite helpful (and quite necessary given the documentation), and still I found I needed to start by hacking on their demo program to get my system up and running in any reasonable amount of time. Once you get a handle on the user interface, it's much easier than writing all of the transport layer stuff yourself, assuming what you really want is DMA. Multichannel works fine. We bought it mainly for the scatter/gather feature, and then the software guys decided to use kernel contiguous memory anyway... I'm not sure what Altera provides for device drivers with their own demo apps. The PLDA stuff is much better than the Xilinx offerings. Also we may be doing a port to Altera on this same project, so it's good to have a core available on both platforms.

Regards,
Gabor

Article: 151486
OK, I'll answer myself: rxrecclk has a frequency of 12.5 MHz.

Article: 151487
On Apr 10, 2:10 pm, Rikard Astrof <rikard.ast...@gmail.com> wrote:
> On Apr 10, 8:00 pm, "PovTruffe" <PovTa...@gaga.invalid> wrote:
>
> > I have found this: http://www.electronicsnews.com.au/news/altium-relocates-from-sydney-t...
>
> > Nothing was mentioned about Altium Designer being discontinued
>
> If you read the comments at the end of the article, especially those made by Alan Smith, he's saying last week they retrenched most of the AU staff (http://www.google.com/search?q=site%3Alinkedin.com+altium+limited) with the hope of setting up a team in China. Well, since then inside sources have confirmed that the vast majority of the people that were supposedly meant to head off to China to set up the new offices/team have decided to take voluntary redundancy, which means as of today Altium Limited and their product Altium Designer are in limbo.
>
> Furthermore, lay-offs will be made this week in the San Diego offices; this will include transferring all support calls to the China offices.

What rubbish. I *am* one of the management team for Altium, managing North America, and there is no plan to push support calls into China. This is completely baseless. We have development in the US, China, Kiev, Australia and the Netherlands. Altium Designer is not going away, and some development will continue to be done in our two development centers in Australia (Sydney and Hobart). Do some research on any major software vendor's global development centers and you'll find development going on all over the world. No major EDA/CAD/CAM company that I know of has a single development center in any single country. True, there has been some retrenchment, but the sky is well and truly *not* falling.

Matt Berggren
Altium

Article: 151488
On Apr 12, 4:27 pm, Andy <jonesa...@comcast.net> wrote:
> Some FPGAs have configurable clock trees that are only enabled in areas where they need to serve FF's. There are also some FPGAs with enablable clock buffers that can be used to safely gate off the clock to a (usually large) group of FFs instead of disabling their clock enables.

I remember parts from Concurrent Logic, which was acquired by Atmel, that had this feature. The clock line of an entire column could be turned off. I want to say this was a configuration feature, not under control of the active design, but it was a long time ago.

Still, the fact that a FF is receiving a clock does not mean it is dissipating power because of it. The power consumption comes from the gates changing state. If the enable to the FF is not set and the output won't change, then there is no power consumption in the FF. I would expect that any power consumption caused by the capacitance of the clock input charging and discharging would be lumped into the clock tree dissipation.

Rick

Article: 151489
Great discussion, and interesting to hear others' viewpoints.

My take on what has been mentioned so far is that there are two major sources of "controllable" power consumption in a typical large-scale FPGA design: combinatorial "flicker" (good word, Marc) and clock enables (although the word "controllable" is used loosely here).

It seems that the most control a designer has in reducing flicker is to reduce logic levels. The first obvious step in this approach is to add pipeline regs, though I suppose one could even influence the actual combinatorial implementation in some creative way. But I caution that the tools can undermine any attempts at reducing flicker to a certain degree through optimization techniques such as resource sharing and reg re-timing, just to name two. Also, it would appear to me extremely hard to get a grasp on how much power can be saved by such measures, especially as a design is re-routed over and over again, causing the tool to use a different "path" of optimization steps for each iteration.

Clock enables are useful as an additional functional input. Of course, they are preferred to actually gating the clock. But clock enables could also be used for "don't care" data paths to reduce power. Imagine a 64-bit data bus transitioning through a pipeline of regs. Propagating a data valid signal that arrives in parallel with the data bus into the pipeline is a convenient way to generate clock enables at each subsequent pipe stage.

John

Article: 151490
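John's valid-propagation scheme can be sketched behaviorally: each pipeline stage's clock enable is simply the valid flag held by the stage before it, so registers only update when real data is flowing. This is a toy Python model with made-up signal names, not any particular HDL implementation:

```python
def pipeline_step(stages, valids, data_in, valid_in):
    """Advance a register pipeline one clock. Each stage's register is
    clock-enabled by the incoming valid, so a stage holds (doesn't toggle)
    whenever the data arriving at it is a don't-care. The valid flag
    propagates down the pipe alongside the data."""
    new_stages, new_valids = [], []
    prev_d, prev_v = data_in, valid_in
    for d, v in zip(stages, valids):
        new_stages.append(prev_d if prev_v else d)  # hold on invalid input
        new_valids.append(prev_v)
        prev_d, prev_v = d, v  # pre-update values feed the next stage
    return new_stages, new_valids

stages, valids = [0, 0, 0], [False, False, False]
for d, v in [(7, True), (0, False), (9, True)]:  # a stream with a gap
    stages, valids = pipeline_step(stages, valids, d, v)
print(stages, valids)  # [9, 7, 7] [True, False, True]
```

Note the middle stage: during the invalid "bubble" it simply holds its previous contents rather than clocking in don't-care data, which is exactly the 64-bit-bus power saving John describes.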
Blue Banshee <bluebansh3e@gmail.com> wrote:
> On Apr 10, 2:10 pm, Rikard Astrof <rikard.ast...@gmail.com> wrote:
>> On Apr 10, 8:00 pm, "PovTruffe" <PovTa...@gaga.invalid> wrote:
>>
>> > I have found this: http://www.electronicsnews.com.au/news/altium-relocates-from-sydney-t...
>>
>> > Nothing was mentioned about Altium Designer being discontinued
>>
>> If you read the comments at the end of the article, specially those made by Alan Smith, he's saying last week they retrenched most of the AU staff (http://www.google.com/search?q=site%3Alinkedin.com+altium+limited) with the hope of setting up a team in China, well since then inside sources have confirmed that the vast majority of the people that were supposedly meant to head off to china to setup the new offices/team have decided to take voluntary redundancy, which means as of today Altium Limited and their product Altium designer is in limbo.
>>
>> To furthermore, lay-offs will be made this week in the San Diego offices, this will include transfering all support calls to the China offices.
>
> What rubbish. I *am* one of the management team for Altium, managing North America and there is no plan to push support calls into China. This is completely baseless. We have development in the US, China, Kiev, Australia and the Netherlands. Altium Designer is not going away and some development will continue to be done in our two development centers in Australia (Sydney and Hobart). Do some research on any major software vendor's global development centers and you'll find development going on all over the world. No major EDA/CAD/CAM company that I know of has a single development center in any single country. True there has been some retrenchment but the sky is well and truly, *not* falling.

As far as I'm concerned the sky started falling on Altium when they abandoned PCad. But then I'm still bitter and twisted about that ;-)

Nobby
A happy PCad 2006 user.

Article: 151491
Hi,

for our new product we are using the chips in the subject of this post. The last time we ordered they had a 4-week lead time, so we did not take any special precautions to procure chips. Now the next production run is due, but nobody can tell us any ETA for these chips. I tried Avnet, Digi-Key and a few brokers.

Does anybody in this group have a few spare parts so that we can at least complete the most urgent orders? We need the -3 speed grade because we have 1.25 Gbps I/O with 4x demux.

Hopeful,
Kolja
cronologic.de

Article: 151492
On Apr 13, 7:29 am, Nobby Anderson <no...@invalid.invalid> wrote:
> Blue Banshee <bluebans...@gmail.com> wrote:
> > On Apr 10, 2:10 pm, Rikard Astrof <rikard.ast...@gmail.com> wrote:
> >> On Apr 10, 8:00 pm, "PovTruffe" <PovTa...@gaga.invalid> wrote:
> >> > I have found this: http://www.electronicsnews.com.au/news/altium-relocates-from-sydney-t...
> >> > Nothing was mentioned about Altium Designer being discontinued
> >> If you read the comments at the end of the article, specially those made by Alan Smith, he's saying last week they retrenched most of the AU staff (http://www.google.com/search?q=site%3Alinkedin.com+altium+limited) with the hope of setting up a team in China, well since then inside sources have confirmed that the vast majority of the people that were supposedly meant to head off to China to set up the new offices/team have decided to take voluntary redundancy, which means as of today Altium Limited and their product Altium Designer is in limbo.
> >> To furthermore, lay-offs will be made this week in the San Diego offices, this will include transferring all support calls to the China offices.
> > What rubbish. I *am* one of the management team for Altium, managing North America and there is no plan to push support calls into China. This is completely baseless. We have development in the US, China, Kiev, Australia and the Netherlands. Altium Designer is not going away and some development will continue to be done in our two development centers in Australia (Sydney and Hobart). Do some research on any major software vendor's global development centers and you'll find development going on all over the world. No major EDA/CAD/CAM company that I know of has a single development center in any single country. True there has been some retrenchment but the sky is well and truly, *not* falling.
>
> As far as I'm concerned the sky started falling on Altium when they abandoned PCad. But then I'm still bitter and twisted about that ;-)
>
> Nobby
> A happy PCad 2006 user.

Hi Nobby,

Having come from the Accel / PCAD camp myself, I can appreciate the sense of loyalty toward PCAD, as it was and in some ways continues to be a fantastic product. Without steering this thread off course (and without making this a sales pitch), I know we've transitioned a number of customers (willingly) over to Altium Designer, with great success. It may be that we need to spend some time with you, hooking you up with one of our support teams to show you the differences and get you spun up on the benefits (this assumes of course that we haven't already :). A lot of work went into incorporating PCAD capabilities into AD. Again, I'm not going to try and sell you on it, but it may be worth contacting our support centers (specific to each region) and getting some face time with an Apps Engineer / the tool.

Kind Regards,
Matt

Article: 151493
On Apr 12, 9:21=A0pm, Rob Gaddi <rga...@technologyhighland.com> wrote: > Hey all -- > > I've got a client breathing down my neck, asking if there's anything > that can be done to accelerate a design we're doing for them. =A0The one > piece of the design that's really going to take me some serious time is > figuring out the PCI Express nonsense. > > The target FPGA is an Arria II GX. =A0We plan to implement an 8 lane, Gen > 1 PCIe link over a cable back to the host processor. =A0I was thinking of > using the PLDA EZDMA core to try to speed the design up; they're hoping > to have this entire thing up and running in weeks. > > Does anyone have any experience with PCIe on the Arria II GX, either > with the help of someone's expensive IP core or just going straight into > Altera's hard IP block? > > Thanks, > Rob > > -- > Rob Gaddi, Highland Technology > Email address is currently out of order If you want it fast on Altera, consider using their SOPC builder tool. SOPC Builder is a GUI based system builder similar to Xilinx EDK. I have a design running on a Stratix IV GX which uses a 4 lane PCIe core (limited by my v9.1 software, their web site shows x8 available now for SOPC use) and an SG-DMA core both of which are available in SOPC builder. Also using SOPC builder, I converted my custom logic into a custom SOPC builder component. The speed advantage comes from having the SOPC builder tool generate all the system interconnect fabric (arbitration and burst logic, chip enables, intermediate addressing, etc.) rather than you having to write it. Obviously, the design isn't fully optimized since it is auto-generated but if you can live with the performance you get, it saves a lot of time. Note that there may be some other feature limitations in the stock SOPC Builder cores which aren't present if you write your entire design by hand, so check the doc's to make sure you're OK with them. 
For reference, in a previous design for a Virtex 5 which used a PCIe core, it took me an extra couple of months to create the PCIe core user interface logic by hand (I guess that also included my PCIe learning curve).

Regards,
Steve

Article: 151494
On 4/13/2011 9:51 AM, steve wrote:
> On Apr 12, 9:21 pm, Rob Gaddi <rga...@technologyhighland.com> wrote:
>> Hey all --
>>
>> I've got a client breathing down my neck, asking if there's anything
>> that can be done to accelerate a design we're doing for them. The one
>> piece of the design that's really going to take me some serious time is
>> figuring out the PCI Express nonsense.
>>
>> The target FPGA is an Arria II GX. We plan to implement an 8 lane, Gen
>> 1 PCIe link over a cable back to the host processor. I was thinking of
>> using the PLDA EZDMA core to try to speed the design up; they're hoping
>> to have this entire thing up and running in weeks.
>>
>> Does anyone have any experience with PCIe on the Arria II GX, either
>> with the help of someone's expensive IP core or just going straight into
>> Altera's hard IP block?
>>
>> Thanks,
>> Rob
>>
>> --
>> Rob Gaddi, Highland Technology
>> Email address is currently out of order
>
> If you want it fast on Altera, consider using their SOPC builder tool.
>
> SOPC Builder is a GUI based system builder similar to Xilinx EDK. I
> have a design running on a Stratix IV GX which uses a 4 lane PCIe core
> (limited by my v9.1 software, their web site shows x8 available now
> for SOPC use) and an SG-DMA core both of which are available in SOPC
> builder. Also using SOPC builder, I converted my custom logic into a
> custom SOPC builder component. The speed advantage comes from having
> the SOPC builder tool generate all the system interconnect fabric
> (arbitration and burst logic, chip enables, intermediate addressing,
> etc.) rather than you having to write it. Obviously, the design isn't
> fully optimized since it is auto-generated but if you can live with
> the performance you get, it saves a lot of time.
>
> Note that there may be some other feature limitations in the stock
> SOPC Builder cores which aren't present if you write your entire
> design by hand, so check the doc's to make sure you're OK with them.
> For reference, in a previous design for a Virtex 5 which used a PCIe
> core, it took me an extra couple of months to create the PCIe core
> user interface logic by hand (I guess that also included my PCIe
> learning curve).
>
> Regards,
> Steve

Love to, but the PCIe core at 8 lanes on the Arria II GX only supports Avalon-ST, not the Avalon-MM that you need to be able to plug into SOPC Builder. Practically every other configuration is available, but that one little corner is SOL.

I originally thought about writing my own bridge to go from the transaction layer packets that come across the ST interface to an SOPC master, but non-aligned accesses can generate some very uncomfortable packets that would probably take me a week and change to get thought out, written, and tested.

--
Rob Gaddi, Highland Technology
Email address is currently out of order

Article: 151495
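[Archive note] The pain of non-aligned accesses that Rob describes follows from how PCIe encodes memory requests: a TLP is addressed in whole 32-bit DWs, with ragged leading/trailing bytes expressed through the 1st/Last DW byte-enable header fields (and Last BE forced to 0000 for single-DW requests), so a bridge to a memory-mapped master must re-derive byte enables and burst length packet by packet. A rough sketch of that bookkeeping — the helper name and example values are illustrative, not from any vendor core:

```python
def tlp_fields(addr, nbytes):
    """For a PCIe memory request covering [addr, addr + nbytes), return the
    DW-aligned address, the Length field (in DWs), and the 1st/Last DW
    byte enables, following the TLP header encoding rules."""
    if nbytes <= 0:
        raise ValueError("empty request")
    first_dw = addr // 4
    last_dw = (addr + nbytes - 1) // 4
    length = last_dw - first_dw + 1  # TLP Length field, counted in DWs

    def lanes(dw):
        # Byte-enable bits for the lanes of this DW that fall in the range.
        return sum(1 << b for b in range(4)
                   if addr <= dw * 4 + b < addr + nbytes)

    first_be = lanes(first_dw)
    # Spec rule: Last DW BE must be 0000 when the request is a single DW.
    last_be = 0 if length == 1 else lanes(last_dw)
    return first_dw * 4, length, first_be, last_be

# A 3-byte write at 0x1003 straddles two DWs with sparse enables:
print(tlp_fields(0x1003, 3))  # (0x1000, Length=2, 1st BE=0b1000, Last BE=0b0011)
```

The bridge has to turn each such combination into legal Avalon-MM byte enables and burst counts, which is exactly the "week and change" of corner cases mentioned above.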
On 04/12/2011 04:04 PM, glen herrmannsfeldt wrote:
> I am not sure about it, but I believe that as you make the gate
> faster, which means that the driving transistor comes on sooner,
> it gets worse. Again, optimize for speed or power.

I can imagine a structure where you have a separate driver for each of the output pad transistors. Then, you could design each of those drivers to have just the right delay on each edge to assure almost break-before-make operation of the final transistors. This might make more sense in the case of simple logic parts like single gates and FFs than in an FPGA.

Jon

Article: 151496
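[Archive note] Jon's idea can be illustrated with a toy timing model: give each pad transistor its own predriver, and make the turn-off delay deliberately shorter than the turn-on delay, so every output edge gets a dead time in which neither device conducts. All numbers below are made-up picosecond values for illustration, not from any datasheet:

```python
# Hypothetical predriver delays, in picoseconds (illustrative only).
T_OFF_PS = 200  # delay from output edge to the outgoing transistor releasing
T_ON_PS = 800   # delay until the incoming transistor conducts (made longer)

def transition(t_edge_ps):
    """For an output edge at t_edge_ps, return when the outgoing pad
    transistor stops conducting and when the incoming one starts."""
    return t_edge_ps + T_OFF_PS, t_edge_ps + T_ON_PS

# Every edge gets a T_ON_PS - T_OFF_PS = 600 ps break-before-make window
# in which neither transistor conducts, so no crowbar current flows.
for edge in (0, 5_000, 12_345):
    release, conduct = transition(edge)
    assert conduct - release == 600
```

The trade-off, as noted in the thread, is that the deliberate turn-on delay costs output edge rate, which is why this fits low-power simple logic better than a speed-optimized FPGA pad.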
Kolja Sulimma <ksulimma@googlemail.com> wrote:
> Hi,
> for our new product we are using the chips in the subject of this
> post. The last time we ordered they had 4 weeks leadtime. Therefore we
> did not take any special precautions to procure chips.
> Now the next production run is due but nobody can tell us any ETA for
> these chips.
> I tried Avnet, Digikey and a few brokers.
> Does anybody in this group have a few spare parts so that we can at
> least complete the most urgent orders?
> We need the -3 speedgrade because we have 1.25Gbps IO with 4x demux.

Digikey online expects the next delivery May 11...

--
Uwe Bonnes                bon@elektron.ikp.physik.tu-darmstadt.de
Institut fuer Kernphysik  Schlossgartenstrasse 9  64289 Darmstadt
--------- Tel. 06151 162516 -------- Fax. 06151 164321 ----------

Article: 151497
slight_return <matthew.berggren@altium.com> wrote:
> On Apr 13, 7:29 am, Nobby Anderson <no...@invalid.invalid> wrote:
>> As far as I'm concerned the sky started falling on Altium when they
>> abandoned PCad. But then I'm still bitter and twisted about that ;-)
>>
>> Nobby
>> A happy PCad 2006 user.
>
> Hi Nobby,
>
> Having come from the Accel / PCAD camp myself I can appreciate the
> sense of loyalty toward PCAD as it was and in some ways continues to
> be a fantastic product. Without steering this thread off course (and
> without making this a sales pitch) I know we've transitioned a number
> of customers (willingly) over to Altium Designer, with great
> success. It may be that we need to spend some time with you, hooking
> you and one of our support teams up to show you the differences and
> get you spun up on the benefits (this assumes of course that we
> haven't already :). A lot of work went into incorporating PCAD
> capabilities into AD. Again, I'm not going to try and sell you on it,
> but may be worth contacting our support centers (specific to each
> region) and get some face time with an Apps Engineer / the tool.

We (well, I, as it's mostly me that uses it) did make the transition to AD and we used it for a few PCBs at the time, and I still have one of the 2009 releases installed here (summer I think). However, for our purposes (for our purposes being important) it was not great for a number of reasons.

Firstly, it tried to be all things to all men, integrating schematic capture, PCB layout, FPGA design, software and even I think mechanical design in the later releases (can't remember), and it did none of the things we used PCad for well. I only want schematic capture and PCB layout, i.e. exactly what PCad does, and nothing else.
The PCB layout capabilities of PCad were, even by summer 2009, far in advance of anything that AD could do; it was just so much easier to use (all manual routing here, nothing particularly complicated, max 4 tracked-layer double-sided designs). Schematic capture wasn't as smooth either, but there was nothing much in it between the two in real terms. AD's library, and some of the more advanced features of component building, was better than PCad's once you got used to it, though.

However, we just didn't use or want 80% of the features of the package. We have no need for FPGA integration; the FPGA tools we use from the manufacturer do everything we want them to do. We don't want source code integration; our coding methodologies are way ahead of anything it actually supported, and anyway it's the wrong platform. I don't need thermal analysis or mechanical integration or any of the other bells and whistles. PCad does absolutely everything we need, and by 2006 SP2 it did it well.

As others have said, we also don't want to pay through the nose for a constant stream of updates, either. It was just far too expensive, and the perception (and I think the reality) is that we were paying for ongoing development of a product 80% of which we would never ever use. So, we paid it for a year or two, and then happily returned to PCad and free-forever use.

PCad was a good, mid-range product, and in my view there is still a need for something like that, particularly for small outfits like ours. We might consider AD again, assuming it really has improved, if we could get a version that did schematic capture and PCB layout only (and other necessary things like library management, of course), but it would have to be significantly less expensive, both in terms of startup cost and ongoing costs, than AD is at the moment (or was in the 2009 era anyway).

Nobby

Article: 151498
Kolja,

At the moment many chip vendors are still assessing the impact of the unfortunate events in Japan. There are some components of chip packages, and even raw silicon wafer supplies, that are likely to be affected over the next few months. Most vendors seem to have extended leadtimes at the very least, and in some cases are effectively not even giving leadtimes. From what I understand, Xilinx are working on an impact plan of how this affects their customers and advising them accordingly. I believe existing orders will get priority, which is a reasonable thing to do. New orders will probably be bottom of the priority list.

John Adair
Enterpoint Ltd.

On Apr 13, 4:03 pm, Kolja Sulimma <ksuli...@googlemail.com> wrote:
> Hi,
>
> for our new product we are using the chips in the subject of this
> post. The last time we ordered they had 4 weeks leadtime. Therefore we
> did not take any special precautions to procure chips.
>
> Now the next production run is due but nobody can tell us any ETA for
> these chips.
> I tried Avnet, Digikey and a few brokers.
>
> Does anybody in this group have a few spare parts so that we can at
> least complete the most urgent orders?
>
> We need the -3 speedgrade because we have 1.25Gbps IO with 4x demux.
>
> Hopeful,
>
> Kolja
> cronologic.de

Article: 151499
John Adair <g1@enterpoint.co.uk> wrote:
> Kolja
> At the moment many chip vendors are still assessing the impact of the
> unfortunate events in Japan. There are some components of chip packages
> and even raw silicon wafer supplies that are likely to be affected
> over the next few months. Most vendors seem to have extended leadtimes
> at the very least and in some cases are effectively not even giving
> leadtimes. From what I understand Xilinx are working on an impact plan
> of how this affects their customers and advising them accordingly. I
> believe existing orders will get priority, which is a reasonable thing to
> do. New orders will probably be bottom of the priority list.

Some more explanations:
http://www.xilinx.com/support/documentation/customer_notices/xcn11017.pdf

--
Uwe Bonnes                bon@elektron.ikp.physik.tu-darmstadt.de
Institut fuer Kernphysik  Schlossgartenstrasse 9  64289 Darmstadt
--------- Tel. 06151 162516 -------- Fax. 06151 164321 ----------