I have found this: http://www.electronicsnews.com.au/news/altium-relocates-from-sydney-to-shanghai
Nothing was mentioned about Altium Designer being discontinued.
Article: 151451
On Apr 10, 7:17 pm, Frank Buss <f...@frank-buss.de> wrote:
> Rikard Astrof wrote:
> > FYI: Altium limited has laid off 60% of their staff and will be
> > closing the australia offices by the end of the month.
>
> April 1 is over :-) Do you have a link? Last time I tried Altium
> Designer it was a nice program. Of course expensive and with some bugs,
> but would be bad for many people who are using it if it will be
> discontinued.

This is not an April fools comment, this is real. 3 out of 4 of the main
architects, Marc Depret, Dejan Stankovic and Benjamin Wells, will not be
making the move and as such have tendered their resignations, but have
been requested by Altium management not to make it public until after the
June EOY ASX filings have been made.

The reality is there is no guarantee that Altium as a company will be
around this time next year, so who knows what will happen to multi-year
subscriptions and promised upgrade cycles.
Article: 151452
On Apr 10, 8:00 pm, "PovTruffe" <PovTa...@gaga.invalid> wrote:
> I have found this: http://www.electronicsnews.com.au/news/altium-relocates-from-sydney-t...
>
> Nothing was mentioned about Altium Designer being discontinued

If you read the comments at the end of the article, especially those made
by Alan Smith, he's saying last week they retrenched most of the AU staff
(http://www.google.com/search?q=site%3Alinkedin.com+altium+limited) with
the hope of setting up a team in China. Since then, inside sources have
confirmed that the vast majority of the people who were supposedly meant
to head off to China to set up the new offices/team have decided to take
voluntary redundancy, which means as of today Altium Limited and their
product Altium Designer are in limbo.

Furthermore, lay-offs will be made this week in the San Diego offices,
and this will include transferring all support calls to the China offices.
Article: 151453
See if you can develop a way to script the whole thing. A lot of tools will
let you write a script to build up a pinout for a schematic or PCB symbol
from something like a spreadsheet file. You can also typically use the same
file as a starting point for the FPGA IO definitions.

Companies that do a lot of this generally have three things going for them:
they've taken the time to develop good scripted/automated workflows for
tedious tasks; once a part is in their library it gets tons of reuse; any
board developed with a particular set of IO constraints will usually get a
reasonable lifespan so any changes are infrequent.

Chris
Article: 151454
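As a rough illustration of the scripted flow Chris describes, the short
Python sketch below reads one spreadsheet (saved as CSV) and emits both a
Xilinx-style UCF and a plain pin list that a symbol-generation script for
your particular CAD tool could consume. The file names and the
pin/signal/bank column layout are assumptions made up for this example,
and the symbol side is deliberately left as a generic text file, since
every schematic package wants its own import format.

# pinmap.py - build FPGA constraints and a symbol pin list from one CSV
# (illustrative sketch only; column names and file names are assumed)
import csv

with open("pinmap.csv", newline="") as f:
    # assumed columns: pin, signal, bank
    rows = [r for r in csv.DictReader(f) if r["signal"].strip()]

# Xilinx-style location constraints for the FPGA tools
with open("pins.ucf", "w") as ucf:
    for r in rows:
        ucf.write('NET "%s" LOC = "%s";\n' % (r["signal"], r["pin"]))

# Generic tab-separated pin list; a second, tool-specific script (or the
# CAD tool's own import/scripting feature) turns this into the symbol
with open("symbol_pins.txt", "w") as sym:
    for r in rows:
        sym.write("%s\t%s\t%s\n" % (r["pin"], r["signal"], r["bank"]))

print("Wrote %d pins to pins.ucf and symbol_pins.txt" % len(rows))

The same CSV then becomes the single source for both the board symbol and
the FPGA IO definitions, which is the reuse Chris is pointing at.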
On 8.4.2011 19:47, Mr.CRC wrote:
> Any ideas on how this is done in the real world would be of interest.

In the high-end schematic/PCB packages there are specific tools to do pin
planning, pin swapping etc. For example, Mentor I/O Designer is one such
tool. It can read in the constraint files from the FPGA tools and produce
the symbol from that data, and if pins are swapped it can export the new
constraints, etc. (there are also many other ways to create the symbol in
the tools). There are also tools to check all the PCB design constraints
and manage that data (length rules, differential pair rules etc.).

--Kim
Article: 151455
Hello, all.

I want to use a GTP transceiver with oversampling mode on for receiving a
125 Mbit/s signal. But I don't know what frequency will appear on rxrecclk
if I use the 10-bit bus and built-in oversampling - 12.5 MHz, 62.5 MHz or
something else. Has anyone worked with the built-in oversampling?
Article: 151456
On Apr 9, 1:23 am, Benjamin Couillard <benjamin.couill...@gmail.com> wrote:
> On 8 avr, 04:19, "maxascent"
> <maxascent@n_o_s_p_a_m.n_o_s_p_a_m.yahoo.co.uk> wrote:
> > > Hi everyone,
> > >
> > > I was wondering if anyone here can suggest me a good reference book on
> > > PCI express design. I've seen a few on amazon but before buying I'd
> > > like to hear your suggestions.
> > >
> > > Thanks
> >
> > PCI Express System Architecture is a good book. Its easy to follow and
> > covers everything you would need. Google books have previews of this book
> > and others.
> >
> > Jon
> > ---------------------------------------
> > Posted through http://www.FPGARelated.com
>
> Thank you! I think I'll buy this one!

Hi,

If you are using the Xilinx PCI Express block, then I think you can even
get by with the PCI Express System Architecture ebook, which is free (some
of the sections have been skipped in the ebook version). Basically you will
only be developing the transaction layer interface if you are using a
Xilinx PCIe endpoint IP. The data link layer and the physical layer are
digital and analog interfaces which are normally hard blocks. The user
logic falls mostly into the transaction layer, and that's the one you
should be worried about!

Thanks
Shyam
Article: 151457

"Mr.CRC" <crobcBOGUS@REMOVETHISsbcglobal.net> writes:

> Hi:
>
> I've been developing a DSP+FPGA engine laboratory experiment controller
> for some years. This summer I have a EE intern coming to help me with
> hardware and logic development to push toward finishing things.
>
> Some things we need to do:
>
> 1. Make Xilinx Spartan 3E PCB CAD model (most likely for Eagle).

My PCB tool generates BGA footprints in seconds. Or you can buy a wizard
from PCB Matrix (now owned by Mentor, but AFAIK they still cover lots of
PCB tools): http://www.mentor.com/products/pcb-system-design/pcbmatrix

> 2. Make Eagle model for TI TMS320F2812 DSP.

Ditto.

> 3. Top level Verilog module to represent all FPGA IOs used and routing
> them to sub-modules.

That can be a bit tedious, although if the signal names map 1:1 (or
nearly) Emacs does a good job of the really tedious bit. If you've
already got the schematics drawn, some tools can export a VHDL wiring
file which might help.

> 4. Begin developing some sub modules for various functionality.

That's the interesting part :)

> Steps 1, 2, and 3 seem like extremely tedious processes to perform by
> hand, especially the PCB models, since there are 176 pins on the DSP and
> maybe 200-300 pins on the FPGA depending on the packages we choose.
>
> Also, the system plan is to route nearly all DSP IO and memory interface
> pins to the FPGA, so that the FPGA may be used to reconfigure at any
> time what specialty DSP IOs appear to the user via a buffered set of BNC
> connectors. Thus, we will actually use at least >100 FPGA IOs, all of
> which therefore must be coded into the top level Verilog module.

That sounds like a fun challenge, are you really planning to buffer the
EMIF onto BNC connectors?

> What further boggles my mind is that this is still a relatively simple
> system, compared to the high end FPGAs and CPUs which may involve >1000
> pins each.
>
> How is this managed efficiently? Employ grunts? Or should I be looking
> at the scripting language in Eagle for ex. to attempt to automate the
> SMD pad placements, at least? Is there a scripting process which can
> assist this on the Xilinx/Verilog side?

The last big design I did was a 676 pin BGA to 4 memory chips, 50 or so to
other on-board components and 150 or so to expansion boards. I used MIG to
assign the memory interfaces, then imported those into PlanAhead to assign
the rest of the pins. That then exports a CSV, and I have a script which
converts that into a format that my schematic capture tool can use, so I
don't have to manually keep that in sync. I also export a UCF file for the
Xilinx implementation tools.

> Much of this seems difficult to envision how to automate because it is
> mainly primary data entry, i.e., transcribing signal names from the
> system design and datasheets to pin names in PCB schematic symbols and
> to FPGA constraint files, which can't be automated.

It can if you have the right tools - I can't comment on whether Eagle can
do this.

> If anything, it might be possible to develop a central file of signal
> names, pad locations, etc., and have scripts generate the PCB models,
> Verilog top level module, and constraint file. That way the data entry
> only needs to be done once. But will the scripting development be just
> as time consuming as typing everything 3 times?

It depends how dynamic your pinout is. If you're sure it's right, and
won't need to do it again, just sitting down and typing it all can be a
better way. But I found it really useful to be able to tweak the pinout of
the FPGA late in the PCB design process to ease the routing and ripple
that through the design from one source. And once you've got a flow that
works, your next designs can make good use of that investment.

HTH!
Martin

--
martin.j.thompson@trw.com
TRW Conekt - Consultancy in Engineering, Knowledge and Technology
http://www.conekt.co.uk/capabilities/39-electronic-hardware
Article: 151458
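To give a flavour of the "one source, kept in sync" flow Martin describes,
here is a small Python consistency check between an exported pin CSV and
the UCF used for implementation. The CSV column names (Signal, Pin) and
the simplified NET/LOC regular expression are assumptions for the example;
a real PlanAhead export and a real UCF contain more variety than this
handles, so treat it as a sketch of the idea rather than a finished
checker.

# check_pins.py - flag signals whose CSV pin and UCF LOC disagree
# (sketch; column names and the UCF pattern are simplified assumptions)
import csv, re, sys

csv_pins = {}
with open("pinout.csv", newline="") as f:      # assumed columns: Signal, Pin
    for row in csv.DictReader(f):
        csv_pins[row["Signal"]] = row["Pin"].strip()

ucf_pins = {}
loc_re = re.compile(r'NET\s+"([^"]+)"\s+LOC\s*=\s*"?([A-Za-z0-9]+)"?', re.I)
with open("top.ucf") as f:
    for line in f:
        m = loc_re.search(line)
        if m:
            ucf_pins[m.group(1)] = m.group(2)

mismatches = 0
for sig, pin in sorted(csv_pins.items()):
    if ucf_pins.get(sig) != pin:
        print("MISMATCH %-24s csv=%s ucf=%s" % (sig, pin, ucf_pins.get(sig)))
        mismatches += 1

sys.exit(1 if mismatches else 0)

Run from a makefile or as a pre-release step, a check like this catches the
pinout drift that otherwise only shows up at board bring-up.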
I'm posting as an AD user, concerned for the long term future of the tool.

> This is not an April fools comment, this is real. 3 out of 4 of the
> main architects Marc Depret, Dejan Stankovic and Benjamin Wells will
> not be making the move as such have tendered their resignation, but
> have been requested by Altium management to not make it public until
> after the June EOY ASX fillings have been made.
> The reality is there is no guarantee that Altium as a company will be
> around this time next year, so who knows what will happen to multi-
> year subscriptions and promised upgrade cycles.

This _could_ be interpreted as FUD by a competitor.

Rikard, who are you and what's your source for the detailed information
above?

Nial
Article: 151459
TTA-Based Co-design Environment (TCE) is a toolset for designing
application-specific processors (ASP) based on the Transport Triggered
Architecture (TTA). The toolset provides a complete retargetable co-design
flow from C programs down to synthesizable VHDL and parallel program
binaries. Processor customization points include the register files,
function units, supported operations, and the interconnection network.

This release includes support for LLVM 2.9, some new VHDL implementations
(an FPU and streaming operations), a connectivity optimizer, code
generation improvements, plenty of bug fixes and more. See the CHANGES
file for a more thorough listing.

Notable new features
--------------------
- Support for LLVM 2.9.
- OpenCL Embedded compliant FPU implementations by Timo Viitanen / TUT
- Generic VHDL implementations for the basic streaming operations from
  Jani Boutellier / University of Oulu.
- ConnectionSweeper IC network exploration algorithm. Optimizes the IC
  network by sweeping the buses of the machine and removing the least
  important connections first until a cycle count worsening threshold is
  reached. Tries to remove RF connections first as they are usually more
  expensive than the bypass connections.
- Added --pareto_set switch to the explorer for printing pareto efficient
  configurations. Currently supports the connectivity and cycle count as
  the quality metrics.
- proge: IP-XACT support updated to version 1.5
- Added switch --print-resource-constraints to tcecc to assist in deciding
  which resources to add to the machine to improve the schedule. Dumps
  DDGs to dot files along with dependence and resource constraint analysis
  data.

Code generator improvements
---------------------------
- Passes the first function parameter in register instead of stack.
- Uses negative guard more aggressively, less stupid guard xoring
  operations.
- Emulation pattern generation improved, can use immediates directly when
  using DAG to emulate missing operations.
- Some other minor pattern improvements leading to slightly better code on
  some situations.
- Alias analysis improvements, understands that register spills to stack
  cannot alias with other memory operations.
- Software Bypasser is much more aggressive.

Acknowledgements
----------------
Thanks to Timo Viitanen for your first TCE contribution (the OpenCL
embedded-compliant FPU implementations) in this release!

Links
-----
TCE home page: http://tce.cs.tut.fi
This announcement: http://tce.cs.tut.fi/downloads/ANNOUNCEMENT
Download: http://tce.cs.tut.fi/downloads
Change log: http://tce.cs.tut.fi/downloads/CHANGES
Article: 151460
Hi everyone,

I'm trying to simulate my EDK design that contains a PCI Express block
connected to a PLB bus. In the PCI Express block it is mentioned that "The
MPLB_Rst and SPLB_Rst must be asserted simultaneously and held for a
minimum of 300 ns". I connected the reset as suggested in the plbv46_pcie
datasheet, however my resets only last about 85 ns. It doesn't matter how
long my reset input is, since the block "proc_sys_reset_0" seems to be
edge sensitive and only triggers the PLB_reset when my external reset goes
from 0 to 1.

Is there a way to force "proc_sys_reset" to generate resets of 300 ns or
more?

Best regards
Article: 151461
How small is it when installed? Can it be made smaller at the install
phase? Just need MAX II and VHDL, and maybe Cyclone later.
Article: 151462
In considering the nature of power consumption in FPGA devices, it
occurred to me to ask what components in the FPGA are responsible for most
of the power. The candidates are clock trees, routing, LUT and misc logic
and finally, FFs.

In CMOS devices the power consumed comes from charging and discharging
capacitance. So I would expect the clock trees with their constant
toggling to be a likely candidate for the most power consumption. Second
on my list is the routing, since I would expect the capacitance to be
significant. I expect the LUTs to be next, but maybe fairly close to the
power consumed in the FFs.

With that in mind, I think the typical way of reducing power by the use of
clock enables on registers which are on the output of a logic block may
not be optimal. This is the part that I have not fully analyzed, but I
think it could be significant.

When a register on the output of a logic block is enabled, routing and
logic feeding the register inputs will have dissipated power, but after
the clock, routing and logic fed by the output will also dissipate power
regardless of whether the next register will be enabled on the next clock
or not! In other words, the routing and logic can dissipate power just
because the inputs to the logic are changing, even when that logic is not
needed.

If the registers are placed at the input to a function block, the routing
and logic will only dissipate power when the registers are enabled,
allowing the register outputs and the logic inputs to change. Why is this
different from output registers? If your design is a linear pipeline then
it is not different. But that is the exception. With branching and looping
of logic flow, an output can feed multiple other logic blocks. If multiple
inputs to logic change at different times this will also increase
dissipation. When those other logic blocks do not need this new data, the
power used in the routing and logic is wasted.

I guess the part I'm unclear on is whether this is truly significant in a
typical design. If the branching is not a large part of a design, or if
the branching is only in the control logic and not the data paths, I would
expect the difference to be small or negligible.

I don't think I am the first person to think of this. Since this is not a
part of vendors' recommendations, I think it is a pretty good indicator
that it is not a large enough factor to be useful. Has anyone seen an
analysis on this?

Rick
Article: 151463
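As a back-of-envelope illustration of the trade-off Rick is describing,
the toy Python model below applies the usual dynamic power relation
(P ~ alpha * C * V^2 * f) to the routing and logic between registers, once
with the activity set by the upstream output registers and once gated by
the consuming block's own input-register enable. Every number in it
(voltage, frequency, capacitance, activity factors, enable duty cycles) is
an invented assumption chosen only to show the mechanics of the
comparison; it is not measured FPGA data.

# Toy comparison: clock-enabled registers at the OUTPUT of the upstream
# block versus at the INPUT of the consuming block.
# Dynamic power per net ~ alpha * C * V^2 * f, alpha = toggles per clock.
# All values below are invented assumptions for illustration only.

V = 1.2                 # core voltage in volts (assumed)
f = 100e6               # clock frequency in Hz (assumed)
C_block = 5e-12         # switched capacitance of the block's routing + LUTs in F (assumed)
alpha_new_data = 0.25   # toggle probability per net when new data arrives (assumed)
upstream_duty = 0.5     # fraction of clocks the upstream registers load new data (assumed)
block_duty = 0.1        # fraction of clocks this block actually needs new data (assumed)

def p_dyn(alpha):
    return alpha * C_block * V**2 * f

# Output registers upstream: this block's routing and logic toggle whenever
# the upstream registers load, whether or not the result is wanted here.
p_output_style = p_dyn(alpha_new_data * upstream_duty)

# Input registers gated by this block's own enable: the routing and logic
# only toggle on the clocks where the block really consumes new data.
p_input_style = p_dyn(alpha_new_data * block_duty)

print("output-registered style: %.2f uW" % (p_output_style * 1e6))
print("input-registered style:  %.2f uW" % (p_input_style * 1e6))

Whether the difference matters in practice comes down to how big the
block's switched capacitance is relative to the clock tree, and how far
apart the two duty cycles really are, which is exactly the open question
in the post above.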
On Apr 11, 11:10 pm, "Nial Stewart"
<nial*REMOVE_TH...@nialstewartdevelopments.co.uk> wrote:
> I'm posting as an AD user, concerned for the long term future of the tool.
>
> > This is not an April fools comment, this is real. 3 out of 4 of the
> > main architects Marc Depret, Dejan Stankovic and Benjamin Wells will
> > not be making the move as such have tendered their resignation, but
> > have been requested by Altium management to not make it public until
> > after the June EOY ASX fillings have been made.
> > The reality is there is no guarantee that Altium as a company will be
> > around this time next year, so who knows what will happen to multi-
> > year subscriptions and promised upgrade cycles.
>
> This _could_ be interpreted as FUD by a competitor.
>
> Rikard, who are you and what's your source for the detailed information
> above?
>
> Nial

Hi Nial. Not sure if this is FUD by a competitor, but it certainly is
ill-informed comment. I work at Altium, currently in the Australian
office, and can assure everyone that Altium Designer is not being
discontinued in ANY way. The move of HQ to China (we will remain an
Australian company) is actually aimed at allowing us to ramp up our
development schedule, and to get Altium Designer on to the desktops of
more designers and engineers around the world.

So, yes. Perhaps Rikard needs to declare who he is and who he works for.
Otherwise his comments can be disregarded.
Article: 151464
rickman <gnuarm@gmail.com> wrote:

> In considering the nature of power consumption in FPGA devices it
> occurred to me to ask what components in the FPGA are responsible for
> most of the power. The candidates are clock trees, routing, LUT and
> misc logic and finally, FFs.

> In CMOS devices the power consumed comes from charging and discharging
> capacitance. So I would expect the clock trees with their constant
> toggling to be a likely candidate for the most power consumption.
> Second on my list is the routing since I would expect the capacitance
> to be significant. I expect the LUTs to be next but may be fairly
> close to the power consumed in the FFs.

When I first knew about CMOS, it was the transition time when both
transistors were on that was the most significant power drain, but then
logic was slower. As I understand it, in the faster, smaller devices, it
is now tunneling current that is a large fraction of the power.

> With that in mind, I think the typical way of reducing power by the
> use of clock enables on registers which are on the output of a logic
> block may not be optimal. This is the part that I have not fully
> analyzed, but I think it could be significant.

The complication in the capacitance calculation is that the metal widths
can change a lot. The input to a buffer can be narrow, and drive a wider
output line. Most FPGAs now include many buffers in the routing, where
routing lines used to be passive. (There are no internal tristate lines,
though the tools will still simulate them.)

> When a register on the output of a logic block is enabled, routing and
> logic feeding the register inputs will have dissipated power but after
> the clock, routing and logic fed by the output will also dissipate
> power regardless of whether the next register will be enabled on the
> next clock or not! In other words, the routing and logic can dissipate
> power just because the inputs to the logic are changing even when that
> logic is not needed.

Logic design is hard enough without trying to worry about every last bit
of power. I suppose for designs that specifically need to last, such as
digital watches, one should worry about it.

> If the registers are placed at the input to a function block the
> routing and logic will only dissipate power when the registers are
> enabled allowing the register outputs and the logic inputs to change.
> Why is this different from output registers? If your design is a
> linear pipeline then it is not different. But that is the exception.

I happen to like linear pipelines, but, yes, that is rare.

> With branching and looping of logic flow an output can feed multiple
> other logic blocks. If multiple inputs to logic change at different
> times this will also increase dissipation. When those other logic
> blocks do not need this new data the power used in the routing and
> logic is wasted.

There are stories about the Cray-1, and how many lines were carefully
measured to be the same length, such that signals would arrive at the
right time. (That was ECL, so I don't believe that power was the reason.)

> I guess the part I'm unclear on is whether this is truly significant
> in a typical design. If the branching is not a large part of a design
> or if the branching is only in the control logic and not the data
> paths I would expect the difference to be small or negligible.

Well, there isn't that much you can do about it.

> I don't think I am the first person to think of this. Since this is
> not a part of vendors recommendations I think it is a pretty good
> indicator that it is not a large enough factor to be useful. Has
> anyone seen an analysis on this?

-- glen
Article: 151465
On Apr 11, 6:57 am, Rikard Astrof <rikard.ast...@gmail.com> wrote:
> On Apr 10, 7:17 pm, Frank Buss <f...@frank-buss.de> wrote:
> > Rikard Astrof wrote:
> > > FYI: Altium limited has laid off 60% of their staff and will be
> > > closing the australia offices by the end of the month.
> >
> > April 1 is over :-) Do you have a link? Last time I tried Altium
> > Designer it was a nice program. Of course expensive and with some bugs,
> > but would be bad for many people who are using it if it will be
> > discontinued.
>
> This is not an April fools comment, this is real. 3 out of 4 of the
> main architects Marc Depret, Dejan Stankovic and Benjamin Wells will
> not be making the move as such have tendered their resignation, but
> have been requested by Altium management to not make it public until
> after the June EOY ASX fillings have been made.

And your source of this is?

One of those names has left, yes, that's fairly well known, but he decided
to move to another company way before the whole China move was even known;
it had nothing to do with China. The other two names are incorrect, unless
you know something that myself (ex) and other employees don't.

And AD being discontinued is just plain rubbish, as is the Australian
office closing. So not a single thing you have posted is true.

But considering that, based on your profile, your ONLY contributions to
this and other Usenet groups have been to post misinformation about
Altium, I call troll, or FUD stooge.

Dave.
Article: 151466
On Apr 11, 4:18 pm, Rob Irwin <rob.ir...@altium.com> wrote:
> On Apr 11, 11:10 pm, "Nial Stewart"
> <nial*REMOVE_TH...@nialstewartdevelopments.co.uk> wrote:
> > I'm posting as an AD user, concerned for the long term future of the tool.
> >
> > > This is not an April fools comment, this is real. 3 out of 4 of the
> > > main architects Marc Depret, Dejan Stankovic and Benjamin Wells will
> > > not be making the move as such have tendered their resignation, but
> > > have been requested by Altium management to not make it public until
> > > after the June EOY ASX fillings have been made.
> > > The reality is there is no guarantee that Altium as a company will be
> > > around this time next year, so who knows what will happen to multi-
> > > year subscriptions and promised upgrade cycles.
> >
> > This _could_ be interpreted as FUD by a competitor.
> >
> > Rikard, who are you and what's your source for the detailed information
> > above?
> >
> > Nial
>
> Hi Nial. Not sure if this is FUD by a competitor, but it certainly is
> ill-informed comment. I work at Altium, currently in the Australian
> office, and can assure everyone that Altium Designer is not being
> discontinued in ANY way. The move of HQ to China (we will remain an
> Australian company) is actually aimed at allowing us to ramp up our
> development schedule, and to get Altium Designer on to the desktops of
> more designers and engineers around the world.
>
> So, yes. Perhaps Rikard needs to declare who he is and who he works
> for. Otherwise his comments can be disregarded.

The post came from a DSL line in New South Wales, Sydney. Maybe one of the
engineers that was laid off? It seems strange to lay off most of the staff
before ramping up in the new location first.

Ed McGettigan
--
Xilinx Inc.
Article: 151467
On Apr 11, 7:10 am, Rikard Astrof <rikard.ast...@gmail.com> wrote:
> On Apr 10, 8:00 pm, "PovTruffe" <PovTa...@gaga.invalid> wrote:
> > I have found this: http://www.electronicsnews.com.au/news/altium-relocates-from-sydney-t...
> >
> > Nothing was mentioned about Altium Designer being discontinued
>
> If you read the comments at the end of the article, specially those
> made by Alan Smith, he's saying last week they retrenched most of the
> AU staff (http://www.google.com/search?q=site%3Alinkedin.com+altium+limited)
> with the hope of setting up a team in China, well since then inside
> sources have confirmed that the vast majority of the people that were
> supposedly meant to head off to china to setup the new offices/team
> have decided to take voluntary redundancy, which means as of today
> Altium Limited and their product Altium designer is in limbo.

Rubbish. There were no voluntary redundancies, and the product and company
are not in limbo.

> To furthermore, lay-offs will be made this week in the San Diego
> offices, this will include transfering all support calls to the China
> offices.

More rubbish.

So who are you, and who do you work for?

Dave.
Article: 151468
On Apr 12, 1:32 pm, Ed McGettigan <ed.mcgetti...@xilinx.com> wrote:
> On Apr 11, 4:18 pm, Rob Irwin <rob.ir...@altium.com> wrote:
> > On Apr 11, 11:10 pm, "Nial Stewart"
> > <nial*REMOVE_TH...@nialstewartdevelopments.co.uk> wrote:
> > > I'm posting as an AD user, concerned for the long term future of the tool.
> > >
> > > > This is not an April fools comment, this is real. 3 out of 4 of the
> > > > main architects Marc Depret, Dejan Stankovic and Benjamin Wells will
> > > > not be making the move as such have tendered their resignation, but
> > > > have been requested by Altium management to not make it public until
> > > > after the June EOY ASX fillings have been made.
> > > > The reality is there is no guarantee that Altium as a company will be
> > > > around this time next year, so who knows what will happen to multi-
> > > > year subscriptions and promised upgrade cycles.
> > >
> > > This _could_ be interpreted as FUD by a competitor.
> > >
> > > Rikard, who are you and what's your source for the detailed information
> > > above?
> > >
> > > Nial
> >
> > Hi Nial. Not sure if this is FUD by a competitor, but it certainly is
> > ill-informed comment. I work at Altium, currently in the Australian
> > office, and can assure everyone that Altium Designer is not being
> > discontinued in ANY way. The move of HQ to China (we will remain an
> > Australian company) is actually aimed at allowing us to ramp up our
> > development schedule, and to get Altium Designer on to the desktops of
> > more designers and engineers around the world.
> >
> > So, yes. Perhaps Rikard needs to declare who he is and who he works
> > for. Otherwise his comments can be disregarded.
>
> The post came from a DSL line in New South Wales, Sydney.

Yes, the message traces to the Hornsby area in Sydney, via a TGP Internet
account.

Dave.
Article: 151469
On Apr 11, 9:16 pm, glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:
> rickman <gnu...@gmail.com> wrote:
> > I guess the part I'm unclear on is whether this is truly significant
> > in a typical design. If the branching is not a large part of a design
> > or if the branching is only in the control logic and not the data
> > paths I would expect the difference to be small or negligible.
>
> Well, there isn't that much you can do about it.

I'm not sure what you mean. The point is that if you need to reduce power
in your design and it has certain features, this may be a useful technique
for reducing the power. I would like to work with the SiliconBlue devices
to measure some power figures for a variety of designs I've done. When I
get to that I may try this technique and see if it gives useful results.

Rick
Article: 151470
On 12 Apr., 00:24, rickman <gnu...@gmail.com> wrote:
> In considering the nature of power consumption in FPGA devices it
> occurred to me to ask what components in the FPGA are responsible for
> most of the power. The candidates are clock trees, routing, LUT and
> misc logic and finally, FFs.
>
> In CMOS devices the power consumed comes from charging and discharging
> capacitance. So I would expect the clock trees with their constant
> toggling to be a likely candidate for the most power consumption.
> Second on my list is the routing since I would expect the capacitance
> to be significant. I expect the LUTs to be next but may be fairly
> close to the power consumed in the FFs.
>
> With that in mind, I think the typical way of reducing power by the
> use of clock enables on registers which are on the output of a logic
> block may not be optimal. This is the part that I have not fully
> analyzed, but I think it could be significant.
>
> When a register on the output of a logic block is enabled, routing and
> logic feeding the register inputs will have dissipated power but after
> the clock, routing and logic fed by the output will also dissipate
> power regardless of whether the next register will be enabled on the
> next clock or not! In other words, the routing and logic can dissipate
> power just because the inputs to the logic are changing even when that
> logic is not needed.
>
> If the registers are placed at the input to a function block the
> routing and logic will only dissipate power when the registers are
> enabled allowing the register outputs and the logic inputs to change.
> Why is this different from output registers? If your design is a
> linear pipeline then it is not different. But that is the exception.
> With branching and looping of logic flow an output can feed multiple
> other logic blocks. If multiple inputs to logic change at different
> times this will also increase dissipation. When those other logic
> blocks do not need this new data the power used in the routing and
> logic is wasted.
>
> I guess the part I'm unclear on is whether this is truly significant
> in a typical design. If the branching is not a large part of a design
> or if the branching is only in the control logic and not the data
> paths I would expect the difference to be small or negligible.
>
> I don't think I am the first person to think of this. Since this is
> not a part of vendors recommendations I think it is a pretty good
> indicator that it is not a large enough factor to be useful. Has
> anyone seen an analysis on this?
>
> Rick

Hi Rick,

have you taken a look at Xilinx XPower Analyzer? For a given design it
calculates the power consumption with regard to many variables. It's
interesting to see the impact of these variables on the power consumption.
Regardless of the brand, the tendency should be similar for other FPGAs
too. And maybe other companies have similar tools too.

Have a nice synthesis
Eilert
Article: 151471
In article
<48a5df08-d39a-4c4e-bb4c-99b37bc624b1@l18g2000yqm.googlegroups.com>,
rickman <gnuarm@gmail.com> writes:
> In CMOS devices the power consumed comes from charging and discharging
> capacitance.

That was true in the old days. With modern (really) thin oxide, you have
to consider leakage currents.

--
These are my opinions, not necessarily my employer's.  I hate spam.
Article: 151472
rickman <gnuarm@gmail.com> wrote:
> On Apr 11, 9:16 pm, glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:
>> rickman <gnu...@gmail.com> wrote:
>> > I guess the part I'm unclear on is whether this is truly significant
>> > in a typical design. If the branching is not a large part of a design
>> > or if the branching is only in the control logic and not the data
>> > paths I would expect the difference to be small or negligible.
>>
>> Well, there isn't that much you can do about it.
>
> I'm not sure what you mean. The point is that if you need to reduce
> power in your design and it has certain features, this may be a useful
> technique for reducing the power.

Say you have an XOR gate where the two inputs come from different FF's
clocked on the same clock, but through different paths. If you can make
those two paths the same length, then there won't be extra transitions.

Now, with ASIC logic you get a lot of control over the wiring, and could
work to get path lengths equal. With FPGAs, you don't have so much
control. Even more, the routing paths are often buffered, even though you
don't see them.

But often the circuits are designed for high speed, and very rarely for
low power. I do remember the 74L series TTL, slower and lower power. If
you make the metal traces narrower (reduce capacitance) you increase the
resistance, slowing down the signal. You can trade speed for power in
many ways.

> I would like to work with the
> Silicon Blue devices to measure some power figures for a variety of
> designs I've done. When I get to that I may try this technique and
> see if it gives useful results.

So they give you more control than the usual FPGA tools?

-- glen
Article: 151473
On Apr 12, 9:50 am, rickman <gnu...@gmail.com> wrote:
> On Apr 11, 9:16 pm, glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:
> > rickman <gnu...@gmail.com> wrote:
> > > I guess the part I'm unclear on is whether this is truly significant
> > > in a typical design. If the branching is not a large part of a design
> > > or if the branching is only in the control logic and not the data
> > > paths I would expect the difference to be small or negligible.
> >
> > Well, there isn't that much you can do about it.
>
> I'm not sure what you mean. The point is that if you need to reduce
> power in your design and it has certain features, this may be a useful
> technique for reducing the power. I would like to work with the
> SiliconBlue devices to measure some power figures for a variety of
> designs I've done. When I get to that I may try this technique and
> see if it gives useful results.
>
> Rick

Hi Rick,

As a regular subscriber of the comp arch group and an engineer interested
in reconfigurable hardware, I started reading your post. Incidentally, I
read about SiliconBlue FPGAs, and it gives me a great sense of happiness,
for I work on IP development for SiliconBlue.

Primarily, clock gating is very much the prominent approach to reducing
dynamic power. Operating on a bank-wise basis is yet another. Banks
supporting multiple voltages are a feature in SiliconBlue that I feel
should be exploited. For instance, one can always run an mDDR bank at 1.8
and an SD bank at 3.3 and get a substantial reduction in power.

Using clock enables is very tricky, though I think they can do much as far
as power reduction is concerned. But this most times is architecture
specific. For instance, in SiliconBlue, if you have a clock enable on a
register, and if that register is implemented in a logic block, then the
clock enable is the same for the other registers in that logic block; if
the design does not have other registers with the same clock enable, then
they are mapped to other logic blocks, which is primarily a wastage of
resources and additionally increases dynamic power because there are
additional switches used to route outputs from the other logic block.

I have been trying to measure the dynamic power with a design of mine, but
I couldn't get a real setup with software to keep track of the power
consumption in real time (rendering numbers using software).
Article: 151474
Your thoughts about where the power goes seem mostly correct. My
experience is the following.

You can't do much about static power, except by playing with the external
voltages. Then there is I/O consumption, like driving your outputs to the
PCB, and hopefully no floating input pins. Sometimes you can tune those
things. But once this is done, you have to live with the hardware.

In software, the ONE major cause for current consumption is toggling
interconnect. Each route in reality consists of buffers, tracks,
receivers, etc. But the most important thing you can do to reduce the
consumption of a route is: a) make the route "shorter", and/or b) make it
toggle less often. Little else can be done, and therefore it's usually not
necessary to go into the details about what elements physically make up
the route.

Clocks have a high toggle rate, and high fanout converts that into lots of
routes used. Therefore the clock tree obviously is one of the major
consumers (if not the top one). Unfortunately the clock tree is also one
of the most difficult things to tune or get rid of. Slow clocks (instead
of clock enables) for the slow portions of your design are probably the
most fruitful thing to do with regard to clocks.

Data routes, however, can often be influenced at the HDL level (by
restructuring the design). Look at your design in the floorplanner and
FPGA editor, and at the CLB description in the datasheet. Figure out a
good way of mapping the functionality onto the available hardware. Express
that in your HDL, and the tools will (to some degree) follow you through
without explicit floorplanning or vendor specific primitives.

The only other thing worth mentioning is LUT flicker. The different input
signals each have distinct arrival times (stemming from their individual
route delays). Each time an input arrives, the LUT output may toggle
(depending on the terms). If the LUT drives a "long" route, this causes
high consumption. Several approaches can be used to reduce it. A simple
one is to use few logic levels. Registering after the LUT is usually done
in the same slice, thus a very short route with little consumption during
flicker. It has a good net result for dense data with a high toggle rate,
despite the bigger clock tree. Obviously, the opposite can be true for
slow data.

Best regards,
Marc
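To put a rough number on the LUT flicker point, here is a companion Python
sketch in the same back-of-envelope style as the one earlier in the
thread: it charges an assumed capacitance once per output transition and
compares a LUT that drives a long route directly against one that is
registered in the same slice, so the glitches only ever see the short
local connection. The capacitances and transition counts are invented
assumptions, and the extra clock-tree load of the added flip-flop is
ignored.

# Toy estimate of glitch ("LUT flicker") power on the route a LUT drives.
# Staggered input arrival times can make the LUT output toggle several
# times per clock before settling; each transition costs ~C * V^2.
# All values are invented assumptions for illustration only.

V = 1.2                  # core voltage in volts (assumed)
f = 100e6                # clock frequency in Hz (assumed)
C_long_route = 3e-12     # long route to the downstream logic in F (assumed)
C_local_ff = 0.2e-12     # short hop to the flip-flop in the same slice in F (assumed)
t_final = 1.0            # the one "real" transition per clock (assumed)
t_glitch = 2.0           # average extra transitions from input skew (assumed)

def p_dyn(transitions_per_clock, C):
    return transitions_per_clock * C * V**2 * f

# LUT drives the long route directly: glitches toggle the big capacitance.
p_unregistered = p_dyn(t_final + t_glitch, C_long_route)

# Register after the LUT: glitches only toggle the short local connection,
# and the long route sees one clean transition per clock (the clock-tree
# cost of the extra flip-flop is not counted here).
p_registered = p_dyn(t_final + t_glitch, C_local_ff) + p_dyn(t_final, C_long_route)

print("unregistered LUT -> long route: %.1f uW" % (p_unregistered * 1e6))
print("registered in the same slice:   %.1f uW" % (p_registered * 1e6))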