Kevin Neilson <kevin.neilson@xilinx.com> wrote:
> I've always wondered. So many companies have entered and then departed,
> leaving the duopoly. I think it must be the problem of developing the
> tools.

Tools, patents, and X and A buying up competitors, I suspect.

> Does anybody really use partial reconfiguration, years and years after it
> was introduced? All the "high-level" synthesis tools are either defunct
> or should be defunct.

Both are getting used in the push for FPGAs to become cloud accelerators (eg Microsoft, Amazon, Intel). The application code (defined by some middleware, eg OpenCL) is HLSed into some block which is partially-reconfigured into an FPGA that's in the server running the cloud app(s). The outer ring of the FPGA (memory controllers, networking, PCIe, etc) stays the same, and different apps are partially reconfigured in and out. Linux now has kernel support for this.

Theo

Article: 160701
On Thursday, October 18, 2018 at 1:23:33 PM UTC-6, gnuarm.del...@gmail.com wrote:
> On Thursday, October 18, 2018 at 2:41:32 PM UTC-4, Kevin Neilson wrote:
> > On Thursday, October 18, 2018 at 9:22:47 AM UTC-6, gnuarm.del...@gmail.com wrote:
..snip
>
> I actually begged Xilinx for partial reconfiguration for many years. What they eventually offered was so crappy that I never was able to use it... plus my need had gone by then.
>
> Not sure what you mean about "high level" synthesis. Are you talking about something above HDL? Is this graphical?
>
> Rick C.

I put "high level" in quotes because most of the high-level tools end up being very non-abstract if you actually want to meet timing. I'm talking about all the Matlab-to-gates, C-to-gates, and graphical tools like System Generator, etc. None have ever panned out. And even HDL still has to be used at a pretty low level.

There was also hardware-in-the-loop simulation (for example, using System Generator) and I don't know if that's still used by anybody or not.

Article: 160702
In article <5b57a396-9b89-4e42-b9fd-4662782bc801@googlegroups.com>, Kevin Neilson <kevin.neilson@xilinx.com> wrote:
>I've always wondered. So many companies have entered and then departed,
>leaving the duopoly. I think it must be the problem of developing the
>tools. As poor as they are, I think that might be the biggest impediment.
>Every grand new idea seems to flounder in the face of what works. Most
>innovations from Xilinx itself seem to flounder. Does anybody really
>use partial reconfiguration, years and years after it was introduced?
>All the "high-level" synthesis tools are either defunct or should be
>defunct.

The industry is certainly worse off because of the lack of competition. Xilinx makes good technology, but their front end is simply awful.

EDA is hard. Trying to keep the "sell the hardware, give away the tools" mentality has led the industry to accept an astonishingly bad "situation normal" solution. The echo chamber of these in-house developers' conversations must be deafening. The amount of money and personnel spent on developing in-house "free" EDA is likely staggering.

And the hope of these "high-level" tools being solved by a semiconductor vendor now, when the entire EDA industry has been attempting (and failing) to solve this problem for over 20 years? Nil.

The industry really needs more competition in this arena. Will it happen with the two patent gorillas in the room? I don't see much...

Regards,

Mark

Article: 160703
On Thursday, October 18, 2018 at 9:22:47 AM UTC-6, gnuarm.del...@gmail.com wrote:
> I was wondering what the barriers are to new companies marketing FPGAs. Some of the technological barriers are obvious. Designing a novel device is not so easy as the terrain is widely explored, so I expect any new player would need to find a niche application or an unexplored technological feature.
>
> Silicon Blue exploited a low power technology optimized for low cost devices in mobile applications. They were successful enough to be bought by Lattice and are still in production with the product line expanded considerably.
>
> I believe Achronix started out with the idea of asynchronous logic. I'm not clear if they continue to use that or not, but it is not apparent from their web site. Their target is ultra fast clock speeds enabling FPGAs in new markets. I don't see them showing up on FPGA vendor lists so I assume they are still pretty low volume.
>
> Tabula was based on 3D technology, but they don't appear to have lasted. I believe they were also claiming an ability to reconfigure logic in real time, which sounds like a very complex technology to master. Not sure what market they were targeting.
>
> Other than the technologies, what other barriers do new FPGA companies face?
>
> Rick C.

I put a little more thought into this: what if I wanted to start an FPGA company? I could try to find an innovation or new niche, but that usually fails, partly because people don't want to migrate to something new. I sure don't.

Say I want to make a regular FPGA. First I have to make the silicon, which is hard, but let's assume I use a regular architecture with 6-input LUTs and maybe some block RAMs and DSP multipliers. No processor cores or anything. I wouldn't want to try to make my own simulator. I know FPGA companies try to make their own so customers can get a cheap version, but I'd try to avoid that. I'd also farm out the synthesis as much as possible. I'd get Synplify to do that. I still have to make the place & route tool and timing analysis tools, unless I can find somebody who is already doing that and can just have them adopt my architecture.

So now I have a pretty standard FPGA, and maybe some tools, but I still have to compete with the established duopoly and their marketing and distribution networks. Could I compete on price? I doubt it. I'm not sure anybody has a compelling reason to switch to me.

Article: 160704
On Thursday, October 18, 2018 at 10:53:31 PM UTC-4, Kevin Neilson wrote:
> On Thursday, October 18, 2018 at 9:22:47 AM UTC-6, gnuarm.del...@gmail.com wrote:
..snip
>
> So now I have a pretty standard FPGA, and maybe some tools, but I still have to compete with the established duopoly and their marketing and distribution networks. Could I compete on price? I doubt it. I'm not sure anybody has a compelling reason to switch to me.

Yes, with the approach you describe, being the new, mediocre FPGA company, success is anything but assured.

I think the given is that there has to be something different about your devices or at least your approach. I don't know if the Silicon Blue devices were all that different technologically or if they simply used conventional features with a different focus. I do know that when Lattice took them over they bent the - still in final development - iCE40 devices back toward the mainstream, with higher speeds and losing a bit on the low static power. Not a big change, but interesting nonetheless.

Rick C.

Article: 160705
On 18/10/2018 19:41, Kevin Neilson wrote:
> On Thursday, October 18, 2018 at 9:22:47 AM UTC-6, gnuarm.del...@gmail.com wrote:
..snip
>>
>> Rick C.
>
> I've always wondered. So many companies have entered and then departed, leaving the duopoly. I think it must be the problem of developing the tools. As poor as they are, I think that might be the biggest impediment. Every grand new idea seems to flounder in the face of what works. Most innovations from Xilinx itself seem to flounder. Does anybody really use partial reconfiguration, years and years after it was introduced?

Yes, of course. Why do you think companies like Xilinx spend so much money on developing it? Is it to make sure their developers are not getting bored, because they have money to burn, or is it simply because their high-end customers (clearly not you) require it?

If you work on a complex bit of IP which required physical synthesis, hand placement etc. to meet timing, wouldn't you want a way to preserve it? Do you think companies will simply re-synthesize, P&R and re-validate the whole design (or IP block) again after making a small change?

> All the "high-level" synthesis tools are either defunct or should be defunct.

Why should they be defunct? Playing with flip-flops, shift registers, LUTs etc. is nothing more than assembly for programmable logic. There is nothing more painful than discovering that your carefully hand-crafted RTL design, which you spent many man-years of effort on, requires an extra pipeline stage, or that you need to reduce resources because you are over your resource budget.

Wouldn't it be nice if you could write your design in sequential untimed code and use a tool to generate the architecture for you based on resource and timing constraints?

There are some very successful HLS tools (Catapult C, Stratus, Synphony) but given their price they are mainly used by high-end companies (the so-called 20%). Xilinx's Vivado HLS is the exception as it is quite affordable, but from what I understand not as capable as the others.

Yes, I know it still requires tweaking and FPGA/ASIC know-how, but these tools have been in full production for many years and are used successfully by many companies. More and more EDA suppliers are starting to offer these tools; they can't all be wrong, right?

Hans.
www.ht-lab.com

Article: 160706
> Wouldn't it be nice if you could write your design in sequential untimed
> code and use a tool to generate the architecture for you based on
> resource and timing constraints?

Well, yes, it would be nice if such a tool existed. It doesn't. If it did, people wouldn't be paying me to make hand-pipelined designs. People wouldn't pay me to spend two months doing what I can model in Matlab in two lines of code.

Article: 160707
On Thursday, October 18, 2018 at 9:48:47 PM UTC-6, gnuarm.del...@gmail.com wrote:
> On Thursday, October 18, 2018 at 10:53:31 PM UTC-4, Kevin Neilson wrote:
..snip
>
> Yes, with the approach you describe, being the new, mediocre FPGA company, success is anything but assured.
>
> I think the given is that there has to be something different about your devices or at least your approach. I don't know if the Silicon Blue devices were all that different technologically or if they simply used conventional features with a different focus. I do know that when Lattice took them over they bent the - still in final development - iCE40 devices back toward the mainstream, with higher speeds and losing a bit on the low static power. Not a big change, but interesting nonetheless.
>
> Rick C.

For some satellite work I used the Microsemi RTAX, which filled a niche for rad-hard designs. It was slow, had few gates, could only be burned once, and had poor tools, but it still had a small market. They made up for low volume with high prices. I think they're still around. Since I work with a lot of Galois arithmetic, one thing I'd like to see is an FPGA with special structures for Galois matrix multipliers (instead of, say, DSP48s) and matrix transposers, but I don't think the demand is enough to warrant a special architecture.

Article: 160708
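Galois-field operations like the ones Kevin describes reduce to AND/XOR networks rather than integer multiplies, which is why DSP48 multipliers don't help with them. A minimal SystemVerilog sketch of what such a "Galois matrix multiplier" computes (module and parameter names are invented for illustration, not any vendor's primitive):

// Multiply a constant N x N bit matrix by an N-bit vector over GF(2).
// Each output bit is the XOR reduction of one matrix row ANDed with x.
module gf2_matvec #(
  parameter int N = 8,
  parameter logic [N-1:0][N-1:0] MATRIX = '0   // row i is MATRIX[i]
)(
  input  logic [N-1:0] x,
  output logic [N-1:0] y
);
  always_comb
    for (int i = 0; i < N; i++)
      y[i] = ^(MATRIX[i] & x);   // AND the row with x, then XOR-reduce
endmodule

A constant multiplier in GF(2^m) is a linear map over GF(2) and can be written as one such matrix, so a hardened Galois block would essentially be a wide, fast version of this AND/XOR structure.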
On 10/19/18 12:39 PM, Kevin Neilson wrote:
> For some satellite work I used the Microsemi RTAX, which filled a niche for rad-hard designs. It was slow, had few gates, could only be burned once, and had poor tools, but it still had a small market. They made up for low volume with high prices. I think they're still around. Since I work with a lot of Galois arithmetic, one thing I'd like to see is an FPGA with special structures for Galois matrix multipliers (instead of, say, DSP48s) and matrix transposers, but I don't think the demand is enough to warrant a special architecture.

Microsemi is still around (though part of Microchip now). They have a number of FPGA families that are somewhat distinct from the big two.

Article: 160709
On Saturday, October 20, 2018 at 12:28:51 PM UTC-4, Richard Damon wrote:
> On 10/19/18 12:39 PM, Kevin Neilson wrote:
..snip
>
> Microsemi is still around (though part of Microchip now). They have a
> number of FPGA families that are somewhat distinct from the big two.

I've never found the Actel devices to be a good solution for any of my problems. Mostly they follow the same path that the big two follow regarding packages, namely larger, more pins and more dollars than optimal for my designs. I find Lattice is the only company that has much in the smaller packages with a low enough price.

I wonder what Microchip will do with the FPGA product line now. I see they have Atmel's CPLD/SPLD line as well as the rather obsolete AT40K products, but nothing like the Actel devices. I guess they are presently running Microsemi as a separate company for now.

Rick C.

Article: 160710
gnuarm.deletethisbit@gmail.com writes:

> I believe Achronix started out with the idea of asynchronous logic.
> I'm not clear if they continue to use that or not, but it is not
> apparent from their web site. Their target is ultra fast clock speeds
> enabling FPGAs in new markets. I don't see them showing up on FPGA
> vendor lists so I assume they are still pretty low volume.

I think Achronix is embedded FPGAs only at this point.

> Tabula was based on 3D technology, but they don't appear to have
> lasted. I believe they were also claiming an ability to reconfigure
> logic in real time which sounds like a very complex technology to
> master. Not sure what market they were targeting.

I spoke with an ex-Achronix guy a few years ago at a conference. He was confident that Tabula never had their mystical reconfiguring tech working and that the whole company was a scam. He basically said they are good with muxes and suck with random logic. Tabula was at the conference demonstrating some kind of ethernet switch, which would be mostly muxes... No idea if he was right or wrong. He was expecting Achronix to go under too. Tabula closed in 2015 and apparently Altera hired some of the team and maybe picked up some of Tabula's IP. But I don't see them putting out anything resembling what Tabula claimed to have.

> Other than the technologies, what other barriers do new FPGA companies face?

Really the question is, how much better than Xilinx or Intel would you need to be to break into the market? You'd probably need a billionaire who'd want to disrupt the market. Musk and Tesla come to mind. The other possibility I can think of is state-funded efforts from China; I think I read they've increased funding for hardware research considerably.

I remember an interview with a Flex Logix founder; they do embedded FPGAs only. He basically said he had no interest in trying to compete with the two giants and so he found a niche. The niche may have grown: Achronix is there too and I think I heard old-timer QuickLogic also plays in that business now. Probably some other startups too.

Come to think of it, it would be interesting to know what companies and what chips actually integrate FPGAs, and do they farm out the design work? Or is it the embedded FPGA provider who does the design work for the programmable part?

Article: 160711
On Tuesday, October 23, 2018 at 6:50:48 AM UTC-4, Anssi Saari wrote:
> gnuarm.deletethisbit@gmail.com writes:
..snip
>
> I remember an interview with a Flex Logix founder; they do embedded
> FPGAs only. He basically said he had no interest in trying to compete
> with the two giants and so he found a niche. The niche may have grown:
> Achronix is there too and I think I heard old-timer QuickLogic also
> plays in that business now. Probably some other startups too.

Yes, I guess embedded PIP (Programmable Intellectual Property) is a niche. I think there are other niches. Lattice found (or bought a company who found) a niche for low-power, small FPGAs. I expect there are others. Or you can market devices differently. I suppose X and A know where their bread is buttered, but I have always felt that FPGAs were underexploited and could very easily be used like MCUs if they were marketed like MCUs. Lots of flavors in lots of packages. Xilinx has always acted like it can't afford to produce a wider range of packages. 10,000 LUTs is still a small chip. Give it as many I/Os as a 48-pin QFP will support and I think it will be able to do a lot more than MCUs. Some people think the Propeller is great, but you could have 30, 40 or even 50 small soft cores in a small FPGA, all working independently.

> Come to think of it, it would be interesting to know what companies and
> what chips actually integrate FPGAs, and do they farm out the design
> work? Or is it the embedded FPGA provider who does the design work for
> the programmable part?

Having all the work done by the FPGA provider would limit the number of applications.

Rick C.

Article: 160712
On 19/10/2018 17:14, Kevin Neilson wrote:
>> Wouldn't it be nice if you could write your design in sequential untimed
>> code and use a tool to generate the architecture for you based on
>> resource and timing constraints?
>
> Well, yes, it would be nice if such a tool existed. It doesn't.

It does. If you ever visit DVCon, DAC, DATE etc., go to the Mentor/Synopsys/Cadence/Xilinx stand and tell them you are interested in architectural exploration using untimed C/C++ code.

> If it did, people wouldn't be paying me to make hand-pipelined designs.

What about a) your clients are not willing to change their design to untimed C/C++ code? b) it is cheaper to pay a contractor for 2 months than it is to pay $100K+ for an HLS tool?

> People wouldn't pay me to spend two months doing what I can model in Matlab in two lines of code.

I do hope you are not working for Xilinx as they will call you in for a mandatory training session on Vivado HLS :-)

https://www.xilinx.com/video/hardware/vivado-hls-in-depth-technical-overview.html

After watching this, consider what a top-of-the-range HLS tool can do.

Hans
www.ht-lab.com

Article: 160713
Niches have to be large enough to allow the costs of the masks and engineering to be recovered.

The use of open source tools lets the iCE40 be used in applications such as Raspberry Pi "hats", but the 8K-LUT limit is restricting this niche.

Some hobbyists long for DIP packages and 5V I/O. Though this is a tiny niche, using an obsolete node (like 250nm or 350nm) might make crowdfunding practical. It would probably be more popular than

https://www.crowdsupply.com/chips4makers/retro-uc

About SiliconBlue, part of their motivation was the expiration of a bunch of FPGA patents. Even more have expired since then.

-- Jecel

Article: 160714
On Friday, October 26, 2018 at 3:24:34 PM UTC-4, Jecel wrote:
> Niches have to be large enough to allow the costs of the masks and engineering to be recovered.
>
> The use of open source tools lets the iCE40 be used in applications such as Raspberry Pi "hats", but the 8K-LUT limit is restricting this niche.

This isn't a niche; it would need to grow to be a micro-niche. No FPGA vendor even thinks about this.

> Some hobbyists long for DIP packages and 5V I/O. Though this is a tiny niche, using an obsolete node (like 250nm or 350nm) might make crowdfunding practical. It would probably be more popular than
>
> https://www.crowdsupply.com/chips4makers/retro-uc

Time expired with only 18% of the funding raised. :(

Not sure you need to go all the way up to 350 nm. I expect the fab costs at 150 nm aren't so bad. That equipment was amortized a long time ago. I guess the real issue is mask costs. Not sure how bad that is at 150 nm, but I believe you can still support 5 volt I/Os since many MCUs do it.

> About SiliconBlue, part of their motivation was the expiration of a bunch of FPGA patents. Even more have expired since then.

That may be, but expired patents aren't really significant. The basic functionality of the LUT/FF and routing has been available for quite some time now. The details of FPGA architectures only matter when you are competing head to head. That's why Silicon Blue focused on a market segment that was ignored by the big players. The big two chase the telecom market with max-capacity, high-pin-count barn burners, and the other markets are addressed with the same technology, making it impossible to compete in the low-power areas. In the end, no significant user who is considering an iCE40 part even looks at a part from Xilinx or Altera.

Rick C.

Article: 160715
> https://www.xilinx.com/video/hardware/vivado-hls-in-depth-technical-overview.html
>
> After watching this, consider what a top-of-the-range HLS tool can do.

I know you're joking about HLS, but the whole FPGA market is limited because any real work must be done by experienced specialists working at a very low level of abstraction. It really does take months to do what you can do in Matlab in a couple of hours. It's not really any different in that respect than it was fifteen years ago. The market would be so much bigger if things were just a bit easier. I haven't figured out if this is because it's an inherently difficult problem or if the tool designers aren't that skilled. A little of both, I think. I still think the people working on HLS are going in the wrong direction and need to spend more time refining the HDL tools. They aim too high and fail utterly. Why can't I infer a FIFO? Just start with that. The amount of time I spend on very basic structures is crazy. I don't think any of this will change in the near future, though.

(PS, I know you were probably actually serious about HLS.)

Article: 160716
> That may be, but expired patents aren't really significant. The basic functionality of the LUT/FF and routing has been available for quite some time now. The details of FPGA architectures only matter when you are competing head to head. That's why Silicon Blue focused on a market segment that was ignored by the big players. The big two chase the telecom market with max-capacity, high-pin-count barn burners, and the other markets are addressed with the same technology, making it impossible to compete in the low-power areas. In the end, no significant user who is considering an iCE40 part even looks at a part from Xilinx or Altera.
>
> Rick C.

I don't do hobbyist stuff anymore since I'm too busy with work, but I would think one could just use eval boards. I don't know why a DIP would be required. I don't know about the cost of the tools for a hobbyist, though.

As for the high pin counts, I would think that the need would be mitigated with all the high-speed serial interfaces.

Article: 160717
On Friday, October 26, 2018 at 9:49:03 PM UTC-4, Kevin Neilson wrote:
> > https://www.xilinx.com/video/hardware/vivado-hls-in-depth-technical-overview.html
> >
> > After watching this, consider what a top-of-the-range HLS tool can do.
>
> I know you're joking about HLS, but the whole FPGA market is limited because any real work must be done by experienced specialists working at a very low level of abstraction. It really does take months to do what you can do in Matlab in a couple of hours.

I think this is rather exaggerated, and comparing to Matlab coding is a bit disingenuous. Are many projects coded in Matlab and then they are done???

Coding in HDL is not really much different from coding in any high level language unless you have significant speed or capacity issues. Even then, that is not much different from programming CPUs. If you are short on CPU speed or memory, you code very differently and spend a lot of time validating your goals at every step.

Even then, I have worked on a number of projects where the design was size and/or speed constrained and it wasn't a debilitating burden... just like the CPU coding projects I've worked on.

I think the real difference is that the tasks that are done in FPGAs tend to be fairly complex. I have done a number of projects in FPGAs where most of the work could have been done in a CPU, and the FPGA project went rather quickly.

> It's not really any different in that respect than it was fifteen years ago. The market would be so much bigger if things were just a bit easier. I haven't figured out if this is because it's an inherently difficult problem or if the tool designers aren't that skilled. A little of both, I think. I still think the people working on HLS are going in the wrong direction and need to spend more time refining the HDL tools. They aim too high and fail utterly. Why can't I infer a FIFO? Just start with that. The amount of time I spend on very basic structures is crazy. I don't think any of this will change in the near future, though.

I don't know, why can't *you* infer a FIFO? The code required is not complex. Are you saying you feel you have to instantiate a vendor module for that??? I recall app notes from some time ago that explained how to use gray counters to easily infer FIFOs. Typically the thing that slows me down in HDL is the fact that I'm using VHDL with all its verbosity. Some tools help with that, but I don't have those. I've just never bitten the bullet to try working much in Verilog.

Rick C.

Article: 160718
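The gray-counter trick Rick refers to crosses each FIFO pointer into the other clock domain through a two-flop synchronizer, gray-coded so only one bit changes per increment. A minimal SystemVerilog sketch of an inferred dual-clock FIFO in that style (module and signal names are invented, the reset style and combinational read are simplifications, and it assumes AW >= 2 - a sketch, not production code):

module async_fifo #(
  parameter int DW = 8,            // data width
  parameter int AW = 4             // address width, depth = 2**AW
)(
  // write side
  input  logic          wclk, wrst_n, wr_en,
  input  logic [DW-1:0] wdata,
  output logic          full,
  // read side
  input  logic          rclk, rrst_n, rd_en,
  output logic [DW-1:0] rdata,
  output logic          empty
);
  logic [DW-1:0] mem [0:(1<<AW)-1];

  // Pointers carry one extra bit so full and empty can be told apart.
  logic [AW:0] wbin, wgray, rbin, rgray;
  logic [AW:0] rgray_w1, rgray_w2;   // read pointer synchronized into wclk
  logic [AW:0] wgray_r1, wgray_r2;   // write pointer synchronized into rclk

  wire [AW:0] wbin_next = wbin + ((wr_en && !full)  ? 1'b1 : 1'b0);
  wire [AW:0] rbin_next = rbin + ((rd_en && !empty) ? 1'b1 : 1'b0);

  // write clock domain
  always_ff @(posedge wclk or negedge wrst_n)
    if (!wrst_n) begin
      wbin <= '0; wgray <= '0; rgray_w1 <= '0; rgray_w2 <= '0;
    end else begin
      if (wr_en && !full) mem[wbin[AW-1:0]] <= wdata;
      wbin  <= wbin_next;
      wgray <= (wbin_next >> 1) ^ wbin_next;      // binary -> gray
      {rgray_w2, rgray_w1} <= {rgray_w1, rgray};  // 2-flop synchronizer
    end

  // Full: write gray equals the synced read gray with the two MSBs inverted.
  assign full = (wgray == {~rgray_w2[AW:AW-1], rgray_w2[AW-2:0]});

  // read clock domain
  always_ff @(posedge rclk or negedge rrst_n)
    if (!rrst_n) begin
      rbin <= '0; rgray <= '0; wgray_r1 <= '0; wgray_r2 <= '0;
    end else begin
      rbin  <= rbin_next;
      rgray <= (rbin_next >> 1) ^ rbin_next;
      {wgray_r2, wgray_r1} <= {wgray_r1, wgray};
    end

  assign rdata = mem[rbin[AW-1:0]];   // combinational read, for brevity
  assign empty = (rgray == wgray_r2); // Empty: pointers match exactly.
endmodule

Plain RTL like this is what synthesis infers the memory and gray-coded crossings from; real designs usually register the read data for block-RAM mapping and put synchronizer attributes and timing constraints on the crossing flops.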
On Friday, October 26, 2018 at 10:25:57 PM UTC-4, Kevin Neilson wrote:
..snip
>
> I don't do hobbyist stuff anymore since I'm too busy with work, but I would think one could just use eval boards. I don't know why a DIP would be required. I don't know about the cost of the tools for a hobbyist, though.

Tools are zero cost, no? I bought tools once, $1500 I believe. Ever since, I just use the free versions.

> As for the high pin counts, I would think that the need would be mitigated with all the high-speed serial interfaces.

Uh, tell that to Xilinx, Altera, Lattice and everyone else (which I guess means Microsemi). Lattice has some low pin count parts for the iCE40 line, but they are very fine pitch BGA-type devices which are hard to route. Otherwise the pin counts tend to be much higher than on what I consider to be a similar MCU, if not high pin count by all measures.

Rick C.

Article: 160719
> I think this is rather exaggerated, and comparing to Matlab coding is a bit disingenuous. Are many projects coded in Matlab and then they are done???
>
> Coding in HDL is not really much different from coding in any high level language unless you have significant speed or capacity issues. Even then, that is not much different from programming CPUs. If you are short on CPU speed or memory, you code very differently and spend a lot of time validating your goals at every step.

It's not necessarily that it's in Matlab that makes it easy, but that it's very abstracted. It might not be that much harder in abstract SystemVerilog. What takes months is converting to a parallelized design, adding pipelining, meeting timing, placing, dealing with domain crossings, instantiating primitives when necessary, debugging, etc. The same would be true of any language. I suppose you can get an FPGA written in C to work as well, but it's not going to be *abstract* C. It's going to be the kind of C that looks like assembly, in which the actual algorithm is indiscernible without extensive comments.

> Even then, I have worked on a number of projects where the design was size and/or speed constrained and it wasn't a debilitating burden... just like the CPU coding projects I've worked on.
>
> I think the real difference is that the tasks that are done in FPGAs tend to be fairly complex. I have done a number of projects in FPGAs where most of the work could have been done in a CPU, and the FPGA project went rather quickly.
>
> I don't know, why can't *you* infer a FIFO? The code required is not complex. Are you saying you feel you have to instantiate a vendor module for that??? I recall app notes from some time ago that explained how to use gray counters to easily infer FIFOs. Typically the thing that slows me down in HDL is the fact that I'm using VHDL with all its verbosity. Some tools help with that, but I don't have those. I've just never bitten the bullet to try working much in Verilog.
>
> Rick C.

True--I instantiate FIFOs, but they themselves are actually written in HDL, gray counters and all, though often the RAMs are instantiated for various reasons. (If you want to use Xilinx's hard FIFOs, I believe you have to instantiate those.) What I meant to say was that they should be, as a commonly-used element, much more abstracted. I ought to be able to do it as a function call, such as:

if (wr_en && fifo.size<256) fifo.push_front(wr_data);

I *can* do that, in SystemVerilog simulation, but any synthesizer would just scoff at that, though I don't see why it should be impossible to turn that into a FIFO. Forcing designers to know about gray counters and clock-domain crossings means that FPGA design will continue to be a recondite art limited to the few.

Article: 160720
On Saturday, October 27, 2018 at 12:30:55 AM UTC-4, Kevin Neilson wrote:
> > I think this is rather exaggerated, and comparing to Matlab coding is a bit disingenuous. Are many projects coded in Matlab and then they are done???
> >
> > Coding in HDL is not really much different from coding in any high level language unless you have significant speed or capacity issues. Even then, that is not much different from programming CPUs. If you are short on CPU speed or memory, you code very differently and spend a lot of time validating your goals at every step.
>
> It's not necessarily that it's in Matlab that makes it easy, but that it's very abstracted. It might not be that much harder in abstract SystemVerilog. What takes months is converting to a parallelized design, adding pipelining, meeting timing, placing, dealing with domain crossings, instantiating primitives when necessary, debugging, etc. The same would be true of any language. I suppose you can get an FPGA written in C to work as well, but it's not going to be *abstract* C. It's going to be the kind of C that looks like assembly, in which the actual algorithm is indiscernible without extensive comments.

And that is exactly my point. The problem you point out is not a problem related in any way to implementing in FPGAs, it's that the design is inherently complex. While you may be able to define the design in an abstract way in Matlab, that is not the same thing as an implementation in *any* medium or target.

Your claim was, "the whole FPGA market is limited because any real work must be done by experienced specialists working at a very low level of abstraction". This isn't a problem with the FPGA aspect, it is a problem with the task being implemented, since it would be the same problem with any target.

..snip
>
> True--I instantiate FIFOs, but they themselves are actually written in HDL, gray counters and all, though often the RAMs are instantiated for various reasons. (If you want to use Xilinx's hard FIFOs, I believe you have to instantiate those.) What I meant to say was that they should be, as a commonly-used element, much more abstracted. I ought to be able to do it as a function call, such as:
>
> if (wr_en && fifo.size<256) fifo.push_front(wr_data);
>
> I *can* do that, in SystemVerilog simulation, but any synthesizer would just scoff at that, though I don't see why it should be impossible to turn that into a FIFO. Forcing designers to know about gray counters and clock-domain crossings means that FPGA design will continue to be a recondite art limited to the few.

I'm not sure how you can do that in any language unless fifo.push_front() is already defined. Are you suggesting it be a part of the language? In C there are many libraries for various commonly used functions. In VHDL there are some libraries for commonly used but low-level functions, nothing like a FIFO. If you write a procedure to define fifo.push_front() you can do exactly this, but there is none written for you.

Rick C.

Article: 160721
On 27/10/2018 02:48, Kevin Neilson wrote:
>> https://www.xilinx.com/video/hardware/vivado-hls-in-depth-technical-overview.html
>>
>> After watching this, consider what a top-of-the-range HLS tool can do.
>
> I know you're joking about HLS, but the whole FPGA market is limited because any real work must be done by experienced specialists working at a very low level of abstraction.

Wow... HLS is not perfect by far and you still need to have RTL knowledge, but I think your understanding of HLS is about 10 years in the past. Can I suggest you do a bit of googling to see what the current state of HLS is.

> It really does take months to do what you can do in Matlab in a couple of hours.

Yes, but to be fair the power comes from some powerful library functions and not from basic m-code. Equally, it takes very little effort to instantiate some very complex IP cores.

> It's not really any different in that respect than it was fifteen years ago. The market would be so much bigger if things were just a bit easier. I haven't figured out if this is because it's an inherently difficult problem or if the tool designers aren't that skilled. A little of both, I think.

It is a complex problem; the EDA industry is worth many billions, so there is no lack of financial incentive to develop these tools.

> I still think the people working on HLS are going in the wrong direction and need to spend more time refining the HDL tools.

That is what they are doing, by removing the timing and architectural requirements from the input design. I agree that I would have preferred they used another language than C/C++, but at least the simulation is very, very fast compared to RTL.

> They aim too high and fail utterly.

No they don't; companies like Google, Nvidia, Qualcomm and many more are all very successful with HLS tools.

> Why can't I infer a FIFO? Just start with that.

Again, look at Vivado HLS: it has full support for FIFOs and can infer one from a stream array.

> The amount of time I spend on very basic structures is crazy.

Why? Write one, stick it in a library and instantiate it as often as you like. Most synthesis tools are pretty good at inferring the required memory type.

> I don't think any of this will change in the near future, though.
>
> (PS, I know you were probably actually serious about HLS.)

I am. I am just surprised that you have such a low appreciation of the current technology. HLS is happening, but it will be many decades before our skill set becomes obsolete (assuming we don't keep up).

Hans
www.ht-lab.com

Article: 160722
On 27/10/2018 07:22, gnuarm.deletethisbit@gmail.com wrote:
> On Saturday, October 27, 2018 at 12:30:55 AM UTC-4, Kevin Neilson wrote:
..snip
>
> And that is exactly my point. The problem you point out is not a problem related in any way to implementing in FPGAs, it's that the design is inherently complex. While you may be able to define the design in an abstract way in Matlab, that is not the same thing as an implementation in *any* medium or target.
>
> Your claim was, "the whole FPGA market is limited because any real work must be done by experienced specialists working at a very low level of abstraction". This isn't a problem with the FPGA aspect, it is a problem with the task being implemented, since it would be the same problem with any target.

Well said.

..snip
>>> I don't know, why can't *you* infer a FIFO? The code required is not complex. Are you saying you feel you have to instantiate a vendor module for that??? I recall app notes from some time ago that explained how to use gray counters to easily infer FIFOs. Typically the thing that slows me down in HDL is the fact that I'm using VHDL with all its verbosity.

Really, what aspect of VHDL is slowing you down that would be quicker in Verilog?

www.synthworks.com/papers/VHDL_2008_end_of_verbosity_2013.pdf

Personally I think verbosity is a good thing as it makes it easier to understand somebody else's code.

>>> Some tools help with that, but I don't have those. I've just never bitten the bullet to try working much in Verilog.

I would forget about Verilog as it has too many quirks; go straight to SystemVerilog (or just stick with VHDL).

Hans
www.ht-lab.com

Article: 160723
HT-Lab <hans64@htminuslab.com> wrote:
> On 27/10/2018 07:22, gnuarm.deletethisbit@gmail.com wrote:
> > On Saturday, October 27, 2018 at 12:30:55 AM UTC-4, Kevin Neilson wrote:
> >>> I don't know, why can't *you* infer a FIFO? The code required is not
> >>> complex. Are you saying you feel you have to instantiate a vendor
> >>> module for that??? I recall app notes from some time ago that
> >>> explained how to use gray counters to easily infer FIFOs. Typically
> >>> the thing that slows me down in HDL is the fact that I'm using VHDL
> >>> with all its verbosity.
>
> really, what aspect of VHDL is slowing you down that would be quicker in
> Verilog?
>
> www.synthworks.com/papers/VHDL_2008_end_of_verbosity_2013.pdf
>
> Personally I think verbosity is a good thing as it makes it easier to
> understand somebody else's code.

I don't have much to do with VHDL, but that sounds like it's making a bad thing slightly less bad. I'd be interested if you could point me towards an example of tight VHDL?

The other issue is that a lot of these updated VHDL and Verilog standards take a long time to make it into the tools. So if you code in a style that's above the lowest common denominator, you're now held hostage to using the particular tool that supports your chosen constructs.

There's another type of tool out there that compiles to Verilog as its 'assembly language'. Basic register-transfer Verilog is pretty universally supported, and so they support most toolchains.

As regards FIFOs, here's a noddy example:

import FIFO::*;

interface Pipe_ifc;
  method Action send(Int#(32) a);
  method ActionValue#(Int#(32)) receive();
endinterface

module mkDoubler(Pipe_ifc);
  FIFO#(Int#(32)) firstfifo <- mkFIFO;
  FIFO#(Int#(32)) secondfifo <- mkFIFO;

  rule dothedoubling;
    let in = firstfifo.first();
    firstfifo.deq;
    secondfifo.enq ( in * 2 );
  endrule

  method Action send(Int#(32) a);
    firstfifo.enq(a);
  endmethod

  method ActionValue#(Int#(32)) receive();
    let result = secondfifo.first();
    secondfifo.deq;
    return result;
  endmethod
endmodule

This creates a module containing two FIFOs, with a standard pipe interface - a port for sending it 32 bit ints, and another for receiving 32 bit ints back from it. Inside, one FIFO is wired to the input of the module, the other to the output. When data comes in, it's stored in the first FIFO. When there is space in the second FIFO, it's dequeued from the first, doubled, and enqueued in the second. If any FIFO becomes full, backpressure is automatically applied. There's no chance of data getting lost by missing control signals.

This is Bluespec's BSV, not VHDL or Verilog. The compiler type checked it for me, so I'm very confident it will work first time. I could have made it polymorphic (there's nothing special about 32 bit ints here) with only a tiny bit more work. It compiles to Verilog which I can then synthesise.

Notice there are no clocks or resets (they're implicit unless you say you want multiple clock domains), no 'if valid is high then' logic; it's all taken care of. This means you can write code that does a lot of work very concisely.

Theo

Article: 160724
On 27/10/2018 04:38, gnuarm.deletethisbit@gmail.com wrote:
> On Friday, October 26, 2018 at 10:25:57 PM UTC-4, Kevin Neilson wrote:
..snip
>
> Tools are zero cost, no? I bought tools once, $1500 I believe. Ever since, I just use the free versions.
>
>> As for the high pin counts, I would think that the need would be mitigated with all the high-speed serial interfaces.
>
> Uh, tell that to Xilinx, Altera, Lattice and everyone else (which I guess means Microsemi). Lattice has some low pin count parts for the iCE40 line, but they are very fine pitch BGA-type devices which are hard to route. Otherwise the pin counts tend to be much higher than on what I consider to be a similar MCU, if not high pin count by all measures.
>
> Rick C.

Lattice have some iCE40-series parts in a 48-pin 0.5mm pitch QFN, easily hotplate- and hand-solderable.

Altera have MAX10 with 50 kLUTs (approx) in a 144-pin TQFP.

There are no modern Xilinx parts in anything other than BGA, but you can get Artix and Spartan 7 in 1mm pitch BGA. It should be possible to use them on low-cost (0.15mm track and gap) 4-layer boards. I mean to try quite soon - I'll let you know.

MK