"lusch" <lukaszschodowski@gmail.com> wrote in message news:9b933786-e1b2-4648-bdd4-1df24785efcd@33g2000yqj.googlegroups.com... > Yes, the problem was in RS232 :) Actually I don't know why. > I wrote simple module to write data to RS232 from FPGA and problem > appeared when I'd like send "00" A haaa, so the 1 Pin Interface would have been invaluable, how many hours did you waste? (Without giving it too hard a sell). At least the problem isn't FPGA or PCB based! Nial.Article: 146301
> Well kiss my grits! It seems Lattice licensees are second class
> citizens and the accelerated waveform viewer is not available. No
> wonder I didn't know about it. The docs say the accelerated viewer is
> the default!

Rick, is that an OEM version you're using? It's fairly standard practice to have
reduced functionality in OEM versions, otherwise there would be no incentive to
shell out for the full version. (Although I'm sure you knew that.)

Nial.

Article: 146302
Peter Alfke <alfke@sbcglobal.net> writes:
> Use Xilinx FPGAs. I know for sure that Virtex 5, and probably also 4
> and 6, can configure each flip-flop as a latch, although this is a
> rarely-used feature.
> (How fast I forget such exotic facts.)

And the Microblaze core uses 3 (yes, a whole 3 :) of them as well!

Martin

--
martin.j.thompson@trw.com
TRW Conekt - Consultancy in Engineering, Knowledge and Technology
http://www.conekt.net/electronics.html

Article: 146303
On Mar 11, 1:45 am, Kim Enkovaara <kim.enkova...@iki.fi> wrote:
> > big accounts have no problem with registration, NDAs etc.
> > --Kim

That's a false assumption. Even big accounts have individual engineers who thrive
on a free flow of information to *understand the technologies to consider in the
first place*. There is a tendency not to look into every company proclaiming new
and revolutionary technology if the information isn't readily available. Perhaps
some engineers are interested in spending the time and hassle to dig into all the
little wannabe startups, but there are few that warrant the all-out risk associated
with a new single-source supplier with no guaranteed future unless the technology
is truly superb. If the risk is extreme, why look at it in the first place unless
it's a casual 15 minutes of web perusal to understand the claims?

Article: 146304
On Mar 11, 2:41 am, Tier Logic <jeff.ka...@gmail.com> wrote:
> On Mar 10, 10:54 pm, Kim Enkovaara <kim.enkova...@iki.fi> wrote:
> > -jg wrote:
> > > Doing the stacking at the wafer/fab level saves handling, but you are
> > > exposed to testing issues.
> > >
> > > Until you have finished the two-process flows, you really have
> > > nothing to test. So yields are ??
> >
> > I don't see it impossible to test the cmos wafer before the tft process,
> > depending on how they constructed the topmost metal layer. For example
> > they could have some dummy bump pads there for power, and then internal
> > bist with certain coverage built with the lower metal layers.
> >
> > FPGAs are quite easy in terms of yield, because you can have redundant
> > structures for yield improvement (in fabric and in memories).
> >
> > --Kim
>
> I don't want to marketeer too much here because this is a technical
> site. All I can tell you is that our FPGAs will save you money because
> our die size is reduced. If you convert it to our ASIC you will save
> even more money for a minimal NRE (zero NRE for early-access customers).
>
> I did want to clear up any questions on our testing methodologies.
>
> Our FPGA is tested the same as any other SRAM-based FPGA is tested.
> The only difference is that our configuration SRAM is above the CMOS
> base layer in a second active layer, resulting in a smaller die size.
> We test it to 100% functional FPGA test patterns in the same way any
> other SRAM-based FPGA is tested. There is no need to test until the
> complete FPGA is done being processed.
>
> The TierASIC is tested with a scan-based ASIC methodology we added to
> the silicon. The customer is not required to generate any test
> vectors. Once you lock your design, you simply send us the bitstream
> and we auto-generate the test vectors for your ASIC. We create one M9
> hard mask and stop there. I do believe that the significant cost
> reduction in moving from our FPGA to our ASIC will make it a popular
> choice. You also get the advantages of no possibility of configuration
> SEUs, bitstream security, no config ROM needed, instant on, and
> customer logos.
>
> The yield of the ASIC increases over the yield of the FPGA because the
> ASIC is only tested to that customer pattern. Also, the timing is
> identical to the FPGA version, which means zero conversion risk. We can
> deliver ASIC samples in about 4 weeks.
>
> Jeff

Thanks for returning to talk further about the issues brought up on this
newsgroup. I wish you success in achieving noticeable market share.

While I can understand the savings brought from removing the configuration
memories and the associated die size, I still envision the FPGA-like overhead as
being significant, since routing is such a large portion of typical FPGA
resources. The routing won't go away, only the configuration memory. The result
is still not an ASIC killer unless (for instance) 40nm TierASIC devices can hit a
price point comparable to an ASIC a couple of steps behind in the process curve.
If a turn on a 90nm or 120nm ASIC can produce per-piece costs similar to a
TierASIC on the tighter process, there might be something here. We all understand
the issues of NRE and timing. But it's so much smoke and mirrors at this point
that it's hard for the big customers who have ASIC suppliers, or the medium
customers who need to *work* to get access to information, to really consider the
technology.
Unless the TierASIC information becomes more transparent and demonstrates the true
"technical" savings versus the ASIC approach, rather than "marketing" savings,
TierASIC won't disturb the engineers responsible for *considering* the technology
in the first place. Marketing people tend not to sway engineers with marketing;
they tend to influence with the technology proposition and the real associated
cost points.

In my mind, it's a question of whether there's a desire to market to the select
few who aren't concerned about risk versus reward within their company framework,
or to the many who may see how large the reward is and how low the risk is in the
end.

I want better solutions to succeed. *I* won't pursue vague promises, but I'll
consider real information. Big difference.

Article: 146305
I want to be able to generate an encrypted netlist of a core using Quartus. Does
Quartus have a switch that allows you to compile a design that doesn't fit into
an FPGA? The issue is that the ports on the core exceed the number of pins on any
device.

Article: 146306
"General Schvantzkoph" <schvantzkoph@yahoo.com> wrote in message news:7vsf4tFaprU4@mid.individual.net... >I want to be able to generate an encrypted netlist of a core using > Quartus. Does Quartus have a switch that allows you to compile a design > that doesn't fit into an FPGA? The issue is that the ports on the core > exceed the number of pins on any device. There are settings to allow you to compile lower level entites for incremental compilation, although I can't remember what they are. A quick search shows that 'virtual pin' might be what you're looking for, or lead in the right direction. Nial.Article: 146307
The fastest multipliers in FPGAs for DSPs are designed using a Vedic mathematics
technique. To view the technology, go to the IETE journals website:

http://jr.ietejournals.org/downloadpdf.asp?issn=0377-2063;year=2009;volume=55;issue=6;spage=282;epage=286;aulast=Pushpangadan;type=2
http://jr.ietejournals.org/article.asp?issn=0377-2063;year=2009;volume=55;issue=6;spage=282;epage=286;aulast=Pushpangadan;type=0

Article: 146308
John,

Correct, you got it.

I was always looking for the best solution when I did my stint as a design
engineer from 1978 to 1998 in telecom. The last thing on my list was the vendor:
there were many things ahead of that (although the vendor, and their history, is
important, too).

So, in the telecoms business, here was my order of importance:

1. Price
2. Price
3. Price
4. Power (yes, even so long ago, telecoms were 'green')
5. Availability (if it had the advantages above, I would pre-order, stock, and do whatever was needed to get those advantages)
6. Performance (I would make do with less performance if I had the advantages above)
7. Reliability (I would burn in, re-test, or operate at a lower temperature if I could get an inexpensive part to meet my 20 year life requirement)
8. Vendor (support, application notes, demo boards, free IP, code, tools -- I would tolerate a lot missing here to meet the first three goals)

The bottom line was that if I could make an equivalent, or better, product for a
lower cost than my competition, I would get the contract. I once lost a million
dollar contract to MCI by being just $0.01 more expensive on a $125 circuit pack
than my competition, so it was (and still is) a rough real world out there.

So, for those who used to do what I did: log onto their website, give them the
required information, download the information, and get going.

Austin

Article: 146309
On Thu, 11 Mar 2010 15:25:58 +0000, Nial Stewart wrote:

> "General Schvantzkoph" <schvantzkoph@yahoo.com> wrote in message
> news:7vsf4tFaprU4@mid.individual.net...
>> I want to be able to generate an encrypted netlist of a core using
>> Quartus. Does Quartus have a switch that allows you to compile a design
>> that doesn't fit into an FPGA? The issue is that the ports on the core
>> exceed the number of pins on any device.
>
> There are settings to allow you to compile lower-level entities for
> incremental compilation, although I can't remember what they are.
>
> A quick search shows that 'virtual pin' might be what you're looking
> for, or lead in the right direction.
>
> Nial.

Thanks, that sounds like what I want.

Article: 146310
On Mar 10, 7:01 pm, Ed McGettigan <ed.mcgetti...@xilinx.com> wrote:
> I wouldn't take it as a given that most resets are not already
> synchronized to the clock domains. Resets are routinely used based on
> termination count, end of packet, return to state0 from other states
> or invalid states, etc.... All of these cases would be within the
> same clock domain.

OK, now I see where we're missing each other: in the definition of what
constitutes a reset function.

When I say reset, I mean "device initialization", either upon power up, power
failure, BIT failure, system watchdog event, etc., that resets darn near the
whole device. When you say "reset" you mean any time the logic loads a zero or
other static value into a counter, etc.

Our best practice policies forbid combining local, functional "restarts" like you
mentioned with the reset (initialization) circuit, or using asynchronous (causal)
reset/preset controls for them. If a register is not reset for initialization, it
is acceptable to use a synchronous reset control on that register for functional
restarts, etc. There are a few cases where asynchronous controls are acceptable
in sync boundaries, etc., but they are subject to specific review and approval.
These "restart" functions are, as you say, often triggered by synchronous signals
anyway, and are part of the functional nature of the design, not part of the
initialization. But for device initialization, our best practices recommend the
use of asynchronous resets where possible (subject to device support).

> Placing the onus of creating a reliable design on software tools to
> correctly determine the designer's intent for timing paths that are
> "non-obvious" is not a working solution IMHO.

Why is it not obvious that an output from a same-clocked register, driving an
asynchronous reset input to another same-clocked register, should be checked for
timing relative to that clock? Every asynchronously reset register still has a
setup and hold requirement for the deasserting edge of that reset input, so check
it the same way you check synchronous inputs (i.e. depending upon the clock
domain of the driven input). You don't even have to check any asynchronous paths
through the registers to do this.

Thanks,

Andy

Article: 146311
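To make the split Andy describes concrete, here is a minimal VHDL sketch (the
entity and signal names are illustrative, not from any real project): the
asynchronous reset is used only for device initialization, while the functional
restart is an ordinary synchronous control.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity restart_example is
  port (
    clk     : in  std_logic;
    rst_n   : in  std_logic;            -- device initialization only (asynchronous)
    restart : in  std_logic;            -- functional restart (synchronous)
    count   : out unsigned(7 downto 0));
end entity;

architecture rtl of restart_example is
  signal cnt : unsigned(7 downto 0);
begin
  process (clk, rst_n)
  begin
    if rst_n = '0' then                 -- asynchronous initialization
      cnt <= (others => '0');
    elsif rising_edge(clk) then
      if restart = '1' then             -- synchronous "restart" of the counter
        cnt <= (others => '0');
      else
        cnt <= cnt + 1;
      end if;
    end if;
  end process;

  count <= cnt;
end architecture;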
On Mar 10, 8:44 pm, Weng Tianxiang <wtx...@gmail.com> wrote:
> Andy,
> "Some synthesis tools may be getting smart enough to optimize an
> inferred latch from a combinatorial process into a clock enable on the
> corresponding register implied by the clocked process. But if there
> are any other combinatorial processes that use that latched output of
> the first combinatorial process, then the latch cannot be replaced by
> a clock enable on a register."
>
> I am interested in your comment above. Can you list an example to show
> "the latch cannot be replaced by a clock enable on a register"?

Example:

A <= B when ENABLE;              -- implies a latch for A
C <= A when rising_edge(CLK);    -- a register using A
E <= A or D;                     -- another combinatorial function using A

If not for E, the latch could be replaced by a clock enable on the C register. I
suppose C could still use a clock enable and the B input directly, but it does
not wholly "replace" the latch, because the latch is still needed to derive E.

Andy

Article: 146312
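A self-contained sketch of Andy's three lines may make it clearer why the latch
is stuck there; this assumes VHDL-2008 for the concurrent assignments without an
else branch (as in his original) and makes no claim about any particular tool's
output:

library ieee;
use ieee.std_logic_1164.all;

entity latch_example is
  port (B, D, ENABLE, CLK : in  std_logic;
        C, E              : out std_logic);
end entity;

architecture rtl of latch_example is
  signal A : std_logic;
begin
  -- A <= B when ENABLE;  in process form: the incomplete assignment means A
  -- must hold its value when ENABLE is low, i.e. a transparent latch.
  latch_p : process (B, ENABLE)
  begin
    if ENABLE = '1' then
      A <= B;
    end if;
  end process;

  C <= A when rising_edge(CLK);   -- plain register on A
  E <= A or D;                    -- second load on A: this is what keeps the
                                  -- latch alive even if C were rebuilt as a
                                  -- clock-enabled register fed by B directly
end architecture;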
John_H <newsgroup@johnhandwork.com> wrote:
>On Mar 10, 11:46 am, Tier Logic <jeff.ka...@gmail.com> wrote:
>> The world's first 3D FPGA has arrived! We have a very compelling and
>> cost effective solution.
>>
>> Come check it out folks. www.tierlogic.com
>>
>> Jeff
>
>Sad.
>
>I have a passing interest in anything proclaiming itself "new" and
>"revolutionary" but I won't bother to register to get more
>information.
>
>I *might* have the next $1M+ design but it will go to standard FPGAs
>because I can't find out about the promising technology on a casual
>basis.

I agree. We use 1000+ ARM controllers per year. Luminary required registration,
so they didn't even make it onto the short list.

--
Failure does not prove something is impossible, failure simply indicates you are
not using the right tools...
nico@nctdevpuntnl (punt=.)

Article: 146313
On Mar 12, 3:32 am, John_H <newsgr...@johnhandwork.com> wrote:
> > The TierASIC is tested with a scan-based ASIC methodology we added to
> > the silicon. The customer is not required to generate any test
> > vectors. Once you lock your design, you simply send us the bitstream
> > and we auto-generate the test vectors for your ASIC.

The claim of auto-generated test vectors is interesting. Who pays for < 100%
coverage 'issues'?

> While I can understand the savings brought from removing the
> configuration memories and the associated die size, I still envision the
> FPGA-like overhead as being significant, since routing is such a large
> portion of typical FPGA resources.

The memory has not gone away; in the FPGA flow it is merely stacked, so die size
has been traded for more process steps. Raw silicon is actually quite cheap.

Even in their ASIC flow, that 'memory ghost' remains, as the die size is locked
to the larger of the two possible choices. Their FPGA-to-ASIC step saves some
process steps, some testing, and gives yield gains, as they hope you are not
using the defects.

Where die size savings really kick in is when they allow MORE logic into what is
a 'practical size ceiling' - but we still have no indication of WHO their
customers are, and no logic or package info?

If your package is IO bound, then die size claims are totally illusory.

-jg

Article: 146314
On Mar 11, 9:29 am, Andy <jonesa...@comcast.net> wrote:
> On Mar 10, 8:44 pm, Weng Tianxiang <wtx...@gmail.com> wrote:
>
> > Andy,
> > "Some synthesis tools may be getting smart enough to optimize an
> > inferred latch from a combinatorial process into a clock enable on the
> > corresponding register implied by the clocked process. But if there
> > are any other combinatorial processes that use that latched output of
> > the first combinatorial process, then the latch cannot be replaced by
> > a clock enable on a register."
> >
> > I am interested in your comment above. Can you list an example to show
> > "the latch cannot be replaced by a clock enable on a register"?
>
> Example:
>
> A <= B when ENABLE;              -- implies a latch for A
> C <= A when rising_edge(CLK);    -- a register using A
> E <= A or D;                     -- another combinatorial function using A
>
> If not for E, the latch could be replaced by a clock enable on the C
> register. I suppose C could still use a clock enable and the B input
> directly, but it does not wholly "replace" the latch, because the
> latch is still needed to derive E.
>
> Andy

Andy,

We don't argue about the latch replacement as you show it. What we argue about is
when a fast next-state signal StateA_NS is replaced by a slower latched version.
It occurs if a condition in an if statement misses a signal assignment statement,
as we have been discussing.

Weng

Article: 146315
On Mar 11, 1:38 pm, Weng Tianxiang <wtx...@gmail.com> wrote:
> On Mar 11, 9:29 am, Andy <jonesa...@comcast.net> wrote:
> > On Mar 10, 8:44 pm, Weng Tianxiang <wtx...@gmail.com> wrote:
> >
> > > Andy,
> > > "Some synthesis tools may be getting smart enough to optimize an
> > > inferred latch from a combinatorial process into a clock enable on the
> > > corresponding register implied by the clocked process. But if there
> > > are any other combinatorial processes that use that latched output of
> > > the first combinatorial process, then the latch cannot be replaced by
> > > a clock enable on a register."
> > >
> > > I am interested in your comment above. Can you list an example to show
> > > "the latch cannot be replaced by a clock enable on a register"?
> >
> > Example:
> >
> > A <= B when ENABLE;              -- implies a latch for A
> > C <= A when rising_edge(CLK);    -- a register using A
> > E <= A or D;                     -- another combinatorial function using A
> >
> > If not for E, the latch could be replaced by a clock enable on the C
> > register. I suppose C could still use a clock enable and the B input
> > directly, but it does not wholly "replace" the latch, because the
> > latch is still needed to derive E.
> >
> > Andy
>
> Andy,
> We don't argue about the latch replacement as you show it. What we argue
> about is when a fast next-state signal StateA_NS is replaced by a
> slower latched version.
> It occurs if a condition in an if statement misses a signal assignment
> statement, as we have been discussing.
>
> Weng

I'm not sure why you are concerned about this. Everyone seems to agree that
inferring latches is not a good idea. The fact that it slows performance (at
least in FPGAs, see below) is just one more reason to avoid them.

It just so happens that in FPGAs, the clock enable mux is always there on the
front of the flip-flop, so there is no timing penalty whether you actually use it
or not, and adding a latch in front of it is always slower. In other
technologies, you have a choice of having a clock-enabled or regular flip-flop,
with the latter being faster. Now the question is, in such a technology, is the
latch plus regular flop faster than the clock-enabled flop?

Andy

Article: 146316
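For reference, the "free" clock-enable mux Andy mentions is what you normally get
from an ordinary enabled-register description; a minimal sketch with illustrative
names only:

library ieee;
use ieee.std_logic_1164.all;

entity ce_reg is
  port (clk, en, d : in  std_logic;
        q          : out std_logic);
end entity;

architecture rtl of ce_reg is
begin
  process (clk)
  begin
    if rising_edge(clk) then
      if en = '1' then      -- typically maps onto the flip-flop's built-in CE input
        q <= d;
      end if;
    end if;
  end process;
end architecture;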
The extra processing steps for the TFT do cost more. However, the die size
reduction swamps that out to create a low-cost FPGA. The ASIC gets rid of that
extra cost and benefits from the yield improvement, for an even lower cost
solution.

All I can tell you is come get a quote and we can save you money. Xilinx and
Altera love all the skepticism here and want you to continue paying too much for
your solutions.

Regards,

Jeff

Article: 146317
Hello,

Tier Logic wrote:
> All I can tell you is come get a quote and we can save you money.

That is a curious statement! I assume that you have been too long in "stealth
mode". Now I tell you this: "show me your public price list, your products, demo
boards, detailed datasheets and distributors. Then maybe I'll choose you for a
project."

I'll take the example of a competitor. SiliconBlue has maybe "slow" chips
(according to the only test I did) but they got it almost right for the rest, at
least for me:
- decent development tool (not bloated) that installs easily on Linux AND Windows!
- datasheet and other information, enough to understand how it is ticking inside
  so it can be used
- at least one distributor that talks to anyone (even though the distributor is
  not large, at least it does its job and doesn't scare potential customers)
- unit price that is decent in small quantities
- ultra-low power is a plus but not critical for me

And still it's not functional enough for me. Antti has developed for it and I'm
curious.

Now before you can save me money, try to beat SBt, and then... beat the others :-P
The Actel ProASIC3 family is working very fine for me and I wonder how it can be
displaced.

good luck,

> Regards,
> Jeff

yg
--
http://ygdes.com / http://yasep.org

Article: 146318
On Mar 11, 21:19, Tier Logic <jeff.ka...@gmail.com> wrote:
> The extra processing steps for the TFT do cost more. However, the die
> size reduction swamps that out to create a low-cost FPGA. The ASIC
> gets rid of that extra cost and benefits from the yield improvement,
> for an even lower cost solution.
>
> All I can tell you is come get a quote and we can save you money.
> Xilinx and Altera love all the skepticism here and want you to
> continue paying too much for your solutions.

Isn't the biggest area in FPGAs covered by routing (lines & switches), which is
still present in Tier Logic? Anyway, it looks interesting to me and I have
registered to evaluate further... But one thing I am concerned about is the
design security of the programmable devices.

Article: 146319
On Mar 11, 12:19 pm, Tier Logic <jeff.ka...@gmail.com> wrote:
> Xilinx and Altera love all the skepticism here and want you to
> continue paying too much for your solutions.
>
> Regards,
>
> Jeff

Jeff, you should be ashamed of that cheap shot, especially when Austin earlier
today invited the audience to check out your alleged lower prices.

I can understand when a newcomer is aggressive in his claims and nebulous in his
explanations. But do not get sarcastic and nasty. You still have a lot to prove
before you can climb on a high horse.

Peter Alfke

Article: 146320
On Mar 12, 9:19 am, Tier Logic <jeff.ka...@gmail.com> wrote:
> All I can tell you is come get a quote and we can save you money.
> Xilinx and Altera love all the skepticism here and want you to
> continue paying too much for your solutions.

So you have real, shipping silicon? Great!

You claim 'we can save you money'. Great too!! I love a clairvoyant supplier who
already knows what packages and price points I have!!

Now tell me what packages, speeds and logic counts you offer, because before I
can _actually_ 'save money' here in the real world, the product first has to be
functional in a circuit board that I can sell!!

-jg

Article: 146321
On Mar 12, 10:31 am, whygee <y...@yg.yg> wrote:
> I'll take the example of a competitor.
> SiliconBlue has maybe "slow" chips
> (according to the only test I did)....
> The Actel ProASIC3 family is working very fine
> for me and I wonder how it can be displaced.

We have ProASIC3 and SiliconBlue on a short list. [Maybe SmartFusion too;
depends on $/package choices.]

I'm interested in how much slower the SiliconBlue devices were. What tests did
you do to compare them?

-jg

Article: 146322
-jg wrote:
> On Mar 12, 10:31 am, whygee <y...@yg.yg> wrote:
> We have ProASIC3 and SiliconBlue on a short list.
> [Maybe SmartFusion too; depends on $/package choices.]

Wait a bit before things stabilize and the distributors sing to the same tune. I
met Future and Actel France today at the annual Parisian Actel seminar; I was not
interested by their new offering. I'm waiting for an eventual next generation
with a better SRAM/logic ratio.

> I'm interested in how much slower
> the SiliconBlue devices were.
> What tests did you do to compare them?

Disclaimer: I'm not as good as Antti ;-) HE has the boards and can tell more
accurate stories than mine.

I "only" installed their SW and tried to compile a simple adder design, probably
http://yasep.org/VHDL/asu_rop2/testdiff.vhd (test nr 1)
http://yasep.org/VHDL/asu_rop2/ASU_ROP2_16.vhd
and got such a low MHz rating that I thought I had hit the wrong button or
something like that. I tweaked many things and could not influence the result
much, tried different architectures... and I gave up. It just means that it did
not meet my expectations.

I know that SBt's chips are created for ultra-low power and low speed. I'm not
expecting Virtex performance, but I'm demanding anyway ;-)

If you want accurate figures, I prefer that you try yourself, because I'm not
sure why it is slow. I've read "80MHz performance" or something like that in the
datasheets at the time but, like other FPGA claims, I'm not able to reach them.
I've seen people able to do about 300MHz designs with ProASIC; I can only do
100MHz, and Actel's soft ARM maxes out at around 60MHz... for a chip that is
meant to be "able of 350MHz".

So test yourself :-)

> -jg

yg
--
http://ygdes.com / http://yasep.org

Article: 146323
Hi all,

I just wanna get some feedback on whether I understood this correctly:

Although there is something out there called "gate equivalent", it essentially
does not make sense to compare FPGA with ASIC implementations. The reason is that
ASICs actually require one gate for each logical operation in a logical
expression. For instance, d = a AND b OR c requires two logic gates. In contrast,
on FPGAs there are LUTs that permit implementing any complex expression with up
to 4 inputs that produces no more than 1 output. So that is basically the reason
why this gate-equivalent metric is not accurate, since we can never be sure how
many logical expressions are combined in a LUT.

Does this argumentation make a bit of sense?

Thanks

Article: 146324
On Fri, 12 Mar 2010 00:55:39 +0000, Marcus <MJones@hotmail.com> wrote:

> Hi all,
>
> I just wanna get some feedback on whether I understood this correctly:
>
> Although there is something out there called "gate equivalent", it essentially
> does not make sense to compare FPGA with ASIC implementations. The reason is
> that ASICs actually require one gate for each logical operation in a logical
> expression. For instance, d = a AND b OR c requires two logic gates. In
> contrast, on FPGAs there are LUTs that permit implementing any complex
> expression with up to 4 inputs that produces no more than 1 output. So that is
> basically the reason why this gate-equivalent metric is not accurate, since we
> can never be sure how many logical expressions are combined in a LUT.
>
> Does this argumentation make a bit of sense?
>
> Thanks

It's not even as simple as that. While you could design that combinational
circuit as a cascade of two logic gates, you could also roll it into a 3-input
compound gate and save a couple of transistors. Then, if the output could stand
inversion, you'd save a couple more by making your output D# instead of D.
That's assuming that everything remains CMOS; if you needed to really get
aggressive you could play tricks with dynamic logic that you just can't do in an
FPGA.

--
Rob Gaddi, Highland Technology
Email address is currently out of order
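On the FPGA side of Marcus's comparison, the whole expression collapses into one
4-input LUT regardless of how many "gate equivalents" it would cost in an ASIC.
A minimal VHDL sketch (entity and signal names are arbitrary):

library ieee;
use ieee.std_logic_1164.all;

entity lut_example is
  port (a, b, c : in  std_logic;
        d       : out std_logic);
end entity;

architecture rtl of lut_example is
begin
  -- two "gate equivalents" of logic, but only three inputs and one output,
  -- so mapping typically absorbs it into a single 4-input LUT (with room
  -- left over to fold in a fourth input for free)
  d <= (a and b) or c;
end architecture;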