Hi Harnhua,

harnhua@plunify.com wrote:
> My name is Harnhua; I'm one of Plunify's founders and I just wanted to
> chime in on a much-loved topic.

Good that you guys are following this group. I wish more people from the EDA industry /lurked around/ here instead of trenching themselves behind their user forums.

> @Hans:
[]
> Do you wonder why the big 3 aren't moving to the cloud? They have
> tried, and are still trying, but their current sales organizations are
> simply too deeply entrenched in their lucrative all-you-can-eat models
> to ever want to change. That's a big obstacle they have to overcome.

Maybe they do not see their share in the business model you are presenting and are therefore simply not interested. What you present as a win-win solution might not be regarded as such by them. What, in your opinion, is their advantage in letting you get into this business as a gateway to their tools? The EDA industry has always worked on the principle of /federating/ users [1] with its tools, to provide them what they need to ship a product early and cheaply.

Now comes your idea that says: 'to hell with their tools, we offer ours on *top* of them and the user no longer needs to deal with the X or A suites'. Wouldn't this approach scare them?

> No less than Joe Costello, former CEO of Cadence, has commented that
> in the end, only customers can lead the change, not the big 3. And
> people wonder why the EDA and chip design industry are perceived as
> "sunset industries" nowadays, steadily losing all the innovation we
> once had. It is because of the fear of change.

Or maybe because they do not see where their share in this change is.

> Plunify is our effort to try to effect a change, but it is not change
> for the sake of changing. More and more customers are seeing the
> benefits of offloading compute-intensive tasks to pay-by-the-hour
> datacenters.

As an end user I'd say that pay-by-the-hour is the best solution, but what do the tool vendors gain with this model?

> @Sean:
>
> I understand what you said about not having designs that are big
> enough to require a server farm, and agree that it is mostly trust,
> not (merely) technology that is the issue with entrusting a 3rd-party
> in another country with your confidential data.

The point is not only trust, but rather liability. Who would be liable for a leak of information? And how much would that leak cost the customer, as well as the provider?

[]
> For the record, we do encrypt your data before
> it is stored, if you want to store it.

The problem with encrypting storage is where the decryption key is stored. Can you imagine how catastrophic a scenario would be in which you *lose* the decryption key, or in which it is somehow forged?

And regarding secure transmission, I hope you guys are aware of this: https://www.openssl.org/news/secadv_20140407.txt

[]
> We spend all our time making sure that our platform is secure and that
> supported design flows function smoothly at scale.

There's only one way to make sure your platform is secure: make it open. This of course might not be a viable solution for you, but it is the only solution that would help you build trust, and even then, the Heartbleed bug I referred to earlier is a clear example where security might be a showstopper for your model.

Saying you have put in a lot of effort is *all relative*. No matter how skilled your network experts and software developers are, only a large user base and a large number of 'eyes' can guarantee a level of trust sufficient to migrate to the cloud.
Be assured, though, that I second your attempt to provide such a service, even though I would be keener to support you if you adopted an open standard like OpenStack instead of AWS.

P.S.: I suggest that next time, for readability, you quote text from others, to avoid the need to switch back and forth to understand what they have said. This is what 'quoting' is for.

Article: 156501
Hi everyone,

I had my vbox running with one core only (out of 4 on the host) and Designer was running extremely slowly, so I thought 'what the heck', I'm not doing much with the other 3 cores on my host, so maybe I'll throw another core at my vbox so that I can run Designer faster.

What wishful thinking... designer_bin goes up to only 50% of my CPU resources, clearly hinting that it does not know how to run on a multicore system. Is this possible?

Is there any way to bypass this problem?

Thanks a lot,

Al
-- 
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?
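If the tool itself is single-threaded, one common way to soak up the idle cores is to launch several independent place-and-route runs side by side, one per core, each with a different seed, and keep the best result. A minimal Tcl sketch of the idea; the 'designer SCRIPT:...' batch invocation and the per-seed script names are assumptions to verify against the Designer documentation, not a tested recipe:

    # Hypothetical driver: one single-core P&R job per spare host core,
    # each with its own working directory so the runs don't collide.
    foreach s {1 2 3 4} {
        file mkdir run_$s
        file copy -force top.adb run_$s/top.adb
        # The trailing '&' makes exec return immediately, so the four
        # single-threaded jobs run concurrently.
        exec designer "SCRIPT:par_seed_$s.tcl" "LOGFILE:run_$s/par.log" &
    }

Article: 156502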
On 10/04/2014 10:31, alb wrote:
> Hi Hans,
> In comp.arch.fpga HT-Lab <hans64@htminuslab.com> wrote:
> []
>>> thanks for the pointer. I actually knew about that forum, but I kind of
>>> dislike web-forums in general and I'd prefer to read and post articles
>>> through my newsreader rather than a clumsy web-interface.
>>>
>>> I also do not like the idea of splitting communities
>>
>> I fully agree with you, unfortunately setting up say comp.lang.systemc
>> is a lengthy process.
>
> I think the process is not so long and I can even volunteer to write up
> an RFD, but the real point is whether there will be enough traffic to
> justify a new group. I guess I'll need to show up on those forums first
> and get a feeling for whether a group will be useful or not. I know of
> certain forums which map one-to-one to newsgroup content in order to allow
> users to choose their favourite interface without breaking the
> community, but this would require some effort on the Accellera side.
>
>> Having said that, the SystemC forums are actually quite good and this is
>> all thanks to a handful of experts who answer all questions.
>
> The added value of having a newsgroup is the possibility to cross-post
> efficiently, especially when you want two languages to interact. I'm
> considering the possibility of having my model written in SystemC and
> the testbench written in VHDL, leveraging the benefits of the OSVVM
> library.

That is unusual; I suspect you are better off using SCV, as you might hit some mixed-language interface issues (records are not always straightforward on a SC/VHDL interface; use simple structs on the SC side).

> It might be that the SystemC forums are equally followed by lots of VHDL
> experts,

Definitely: Alan Fitch is a language guru of both SystemC and VHDL and he answers a lot of questions on the forum.

> but IMHO the possibility of merging the two worlds on usenet
> greatly improves the quality of the thread.
>
> Al

Regards,
Hans.
www.ht-lab.com

Article: 156503
On Friday, April 11, 2014 1:04:47 PM UTC+3, alb wrote:
>
> Now comes your idea that says: 'to hell with their tools, we offer ours
> on *top* of them and the user no longer needs to deal with the X or A
> suites'. Wouldn't this approach scare them?
>
By "big 3" Harnhua probably meant Synopsys, Cadence and Mentor Graphics.

Article: 156504
Hi Harnhua,

On 10/04/2014 16:48, harnhua@plunify.com wrote:
> Hi,
>
> My name is Harnhua; I'm one of Plunify's founders and I just wanted to chime in on a much-loved topic.
>
> @alb:
>
> Naturally we agree wholeheartedly with what you said. If you are willing and interested, let's have a chat and maybe we can propose a trial project with the vendor to implement the system you envisioned?
>
> @Hans:
>
> I respectfully disagree with your comments - we are proving that it is a viable business model.

I am glad this is the case; I have nothing against cloud-based services and I always applaud a new startup doing something new. However, you must agree with me that having just 2 tool vendors on board after 4(?) years gives the impression you are not doing so well. You show ISE in your demo but don't list it on the tools page? I also wondered why you are going after the FPGA market, where large regression tests are less common.

> Do you wonder why the big 3 aren't moving to the cloud? They have tried, and are still trying, but their current sales organizations are simply too deeply entrenched in their lucrative all-you-can-eat models to ever want to change. That's a big obstacle they have to overcome.

I would say it is an obstacle you have to overcome. The EDA vendor is not going to change their pricing model if they get less money. Most EDA vendors have a flexible license scheme; you can get 1- or 3-month term licenses from most of them. Setting up a small in-house regression farm (assuming an FPGA user) is also not a major undertaking. It basically all comes down to how many simulation licenses you are willing to pay for, and not the setup.

> Only the smaller EDA companies see the benefits and are more agile

But unfortunately for the big clients you need the big EDA tools. Nobody is going to sign off their $10M ASIC on a new, unproven tool. Also, the amount of investment and know-how required to enter this market is enormous, and even if you have something good you will be snapped up by one of the big three, so you are back to square one.

> -- we are working with a few who wish to remain anonymous for now.

I would put some pressure on them to endorse you.

> No less than Joe Costello, former CEO of Cadence, has commented that in the end, only customers can lead the change, not the big 3. And people wonder why the EDA and chip design industry are perceived as "sunset industries" nowadays, steadily losing all the innovation we once had. It is because of the fear of change.

I would say it's all about $$$.

> Plunify is our effort to try to effect a change, but it is not change for the sake of changing. More and more customers are seeing the benefits of offloading compute-intensive tasks to pay-by-the-hour datacenters. We've been inspired by seeing cases where an engineering team accomplished in 3 days tasks that would have taken them a whole month to do!
>
> If you don't mind, may I understand more of your concerns with using a cloud service? That will help educate us on how we can improve.

I have no doubt that cloud services will become more and more popular. It is just that for the moment I don't see a viable business model, especially if you are going after the FPGA community.
My Mentor Precision tool has a feature called Precise-Explore which enables me to start 16 simultaneous synthesis and P&R runs, all from the comfort of my desktop.

I hope I am not too negative, and I would be happy if you were to prove me wrong.

Good luck with your business,

Hans
www.ht-lab.com

> @Sean:
>
> I understand what you said about not having designs that are big enough to require a server farm, and agree that it is mostly trust, not (merely) technology that is the issue with entrusting a 3rd-party in another country with your confidential data.
>
> The point I'd like to make is, why judge us before you even get to know us? One of our first suggestions to customers is to NOT share their latest and greatest designs if they don't feel good doing so, and just offload their tedious tasks on stable designs, whatever they are comfortable with. For the record, we do encrypt your data before it is stored, if you want to store it. We are Singapore-based, and you can choose European servers if you are concerned about your data falling under US jurisdiction. And we're regular engineers and software developers. Do let's have a chat if you're interested in finding out more.
>
> We spend all our time making sure that our platform is secure and that supported design flows function smoothly at scale. This is what's putting bread on the table, so you can be sure that we will move heaven and earth to make our stuff work as customers'd expect it to. As cheesy as it might sound, the belief that Plunify can contribute to our industry as a whole also keeps our team motivated.
>
> Cheers,
> Harnhua

Article: 156505
"Jan Panteltje" <pNaonStpealmtje@yahoo.com> wrote in message news:li81iu$qr0$1@news.albasani.net... > On a sunny day (Thu, 10 Apr 2014 21:25:35 +0200) it happened Gerhard > Hoffmann > <ghf@hoffmann-hochfrequenz.de> wrote in > <bqo9hfF3jv2U1@mid.individual.net>: > > played around with a spectrum analyzer & tracking gen. and >>noted the results, waiting for Xilinx ISE some years ago. >> >>Those funny sockets are at he end: >>< >>http://www.hoffmann-hochfrequenz.de/downloads/experiments_with_decoupling_capacitors.pdf >> > > > Thank you, that is good info, I can do something with that. > But tantals are not that bad? Just too big? Tants are to be used in conjunction with smaller caps, yes. And smaller caps in conjunction with a ground plane, if possible. The ground plane resonates with the ceramic cap at the highest frequency, and the ceramic caps between each other at middle frequencies, but the tant's ESR dampens them all out. If you're allergic to tant, you might use a fat ceramic with a series resistor (which will have better manufacturing tolerances too), but you need to be aware of voltage coefficient and aging if the value is at all critical (e.g., dominant pole on one of those crummy switchers or LDOs). I'm disappointed by the last test in that link: what should've been done is, two short male pins soldered to the board, and the socket plugged onto it, face down -- not standing proud on more lead length than a wire wrapped assembly! Tim -- Seven Transistor Labs Electrical Engineering Consultation Website: http://seventransistorlabs.comArticle: 156506
sagarmemane4@gmail.com wrote:
> ERROR:MapLib:93 - Illegal LOC on IPAD symbol "autman" or BUFGP symbol
> "autman_BUFGP" (output signal=autman_BUFGP), IPAD-IBUFG should only be
> LOCed to GCLKIOB site.
>
> same error
>
> how to solve this problem
>
> i am using 9.1 ver.

Global clocks can only be input at specific pins of an FPGA. If you absolutely MUST use a non-global clock pin, then you need to route the clock through the fabric, and the timing will be less well controlled.

Jon
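For this class of error there are two usual ways out in the UCF; a minimal sketch, assuming ISE-era constraint syntax (the net name comes from the error message above, the pin location is hypothetical, and the BUFFER_TYPE workaround should be verified against the 9.1 constraints guide):

    # Option 1: move the clock input to a dedicated global-clock pad
    # (pick a real GCLK pin for your package from the pinout tables).
    NET "autman" LOC = "P180";  # hypothetical GCLK pin

    # Option 2: stop the tools from inferring an IBUFG/BUFGP for this
    # input, so it is routed through the fabric as an ordinary signal
    # (with the less well controlled timing noted above).
    NET "autman" BUFFER_TYPE = IBUF;

Article: 156507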
Hi alb,

Our focus is actually on FPGA design, and on making all the extra resources in grid/cloud computing produce better quality of results in a shorter time -- that, I think, is the main attraction for people to consider an external compute resource.

Therefore my view of EDA is more generic in nature. Other, much more qualified people have expressed their opinions on the current state of the industry, but it seems like more and more end-customers are unhappy with EDA tool sales/licensing models, and have been for a while. But it's gotten to the point where the industry itself appears to suffer--investment money has dried up, the number of "big exits" is declining, fewer and fewer EDA startups are being created, and so on.

> maybe they do not see their share in the business model you are
> presenting and therefore are simply not interested. What you present as
> a win-win solution might not be regarded as such by them.

You're probably right. I think they must see problems, but at the same time are perhaps searching for the "right models" to remain as profitable as before?

> What in your opinion is their advantage in letting you get into this
> business as a gateway to their tools? The EDA industry has always worked on
> the principle of /federating/ users [1] with its tools, to provide them what
> they need to ship a product early and cheaply.
>
> Now comes your idea that says: 'to hell with their tools, we offer ours
> on *top* of them and the user no longer needs to deal with the X or A
> suites'. Wouldn't this approach scare them?

On the contrary, we'd like customers to use the current FPGA tools as is, and only offload compute-intensive processes to a grid/cloud computing environment that we help set up. This is so that builds can be scaled up instantly, in a guided way, to get to design goals. Something like suddenly having "1000 more servers trying strategies to get to timing closure in a day instead of in weeks." We develop plugins, not replacements.

>> No less than Joe Costello, former CEO of Cadence, has commented that
>> in the end, only customers can lead the change, not the big 3. And
>> people wonder why the EDA and chip design industry are perceived as
>> "sunset industries" nowadays, steadily losing all the innovation we
>> once had. It is because of the fear of change.
>
> Or maybe because they do not see where their share in this change is.

Yes, in their own way, large as they are, the big EDA 3 face many challenges. Opening up the industry, as you mentioned, is great for the end-user, but it must work for the vendors too. For instance, every vendor seems to want an ecosystem, but only around their tools...

> As an end user I'd say that pay-by-the-hour is the best solution, but
> what do the tool vendors gain with this model?

Potentially, a great deal more users than before is my claim--the so-called "long tail". They can still charge high hourly prices, and have been doing so for the large customers. Although tool purchase is far from the major cost component of making a chip, it does affect the entire workflow. Furthermore, the population of dabblers, hobbyists and students who will eventually replenish and sustain the industry will benefit from pay-as-you-use models and portability. I'm not saying anything new here, am I? ; )

> The point is not only trust, but rather liability. Who would be liable for a
> leak of information?
> And how much would that leak cost the customer,
> as well as the provider?

As you mentioned earlier, people routinely FTP their files to the foundries. Who is liable for a leak in those cases, I wonder? (I don't know.) I think a good approach is what companies like OneSpin are doing--making online tools that don't require users to transfer source code.

> The problem with encrypting storage is where the decryption key is stored.
> Can you imagine how catastrophic a scenario would be in which you *lose*
> the decryption key, or in which it is somehow forged?
>
> And regarding secure transmission, I hope you guys are aware of this:
> https://www.openssl.org/news/secadv_20140407.txt

You're describing the kind of technical problem for which I think there is an infinite loop of questions and answers. For example, to get a decryption key, one would have to break into the system first. To break into a system, one would have to first find the system. Security vulnerabilities are present in company networks as well as in Amazon. My (biased) view is that companies like AWS, who rely on the cloud for a living, have the most incentive to make their systems secure. Of course we patched ours as soon as we found out about vulnerabilities like the one you posted. This is also probably where proper customer education and IT audits come into play. Doubters will always doubt--there will be people who will never contemplate using a cloud service for chip design, but I think with the proper communication and technology, more and more companies will turn to a "cloud" way of designing.

> There's only one way to make sure your platform is secure: make it open.
> This of course might not be a viable solution for you, but it is the
> only solution that would help you build trust, and even then, the
> Heartbleed bug I referred to earlier is a clear example where security
> might be a showstopper for your model.

I like your statement very much. Ideally, tools can export non-reversible data so that users don't have to upload source code.

> Saying you have put in a lot of effort is *all relative*. No matter how
> skilled your network experts and software developers are, only a large
> user base and a large number of 'eyes' can guarantee a level of trust
> sufficient to migrate to the cloud.

True; it's a chicken-and-egg situation which we can address only by working with customers.

> Be assured, though, that I second your attempt to provide such a service,
> even though I would be keener to support you if you adopted an
> open standard like OpenStack instead of AWS.

Thank you for your thoughts and feedback!

> P.S.: I suggest that next time, for readability, you quote text from
> others, to avoid the need to switch back and forth to understand
> what they have said. This is what 'quoting' is for.

Done!

Cheers,
Harnhua

Article: 156508
"John Larkin" wrote in message news:rre8k917ocetahkdskanqtvg0pc1gp91kv@4ax.com... On Tue, 08 Apr 2014 13:39:31 -0400, Phil Hobbs <pcdhSpamMeSenseless@electrooptical.net> wrote: >Cool, however they do it. Every chip should have internal bypasses. Switch 100 Amps in 50ps for a 1V uP, then figure how much bond wire inductance would be allowed if there was no interleaved capacitance with the logic on chip:-) Kevin Aylward www.kevinaylward.co.uk www.anasoft.co.uk - SuperSpiceArticle: 156509
In comp.arch.fpga Kevin Aylward <ExtractkevinRemove@kevinaylward.co.uk> wrote:
(snip)
> Switch 100 amps in 50 ps for a 1 V uP, then figure how much bond-wire
> inductance would be allowed if there were no interleaved capacitance
> with the logic on chip :-)

Well, most now have separate power and ground for IO and core, usually at a different voltage, but yes. Though there is likely enough clock skew that they don't all switch in the same 50 ps.

-- glen

Article: 156510
Hi Hans,

Thank you for your thoughts!

> I am glad this is the case; I have nothing against cloud-based services
> and I always applaud a new startup doing something new. However, you must
> agree with me that having just 2 tool vendors on board after 4(?) years
> gives the impression you are not doing so well. You show ISE in your
> demo but don't list it on the tools page?

Absolutely agree with you. Building a good EDA platform is an uphill task, and there are so many factors involved in making it successful. So far, what's been mentioned are vendor relationships/motivations, end-user requirements and security. Another one is that the tools themselves are not designed to take advantage of a cloud-like infrastructure. Over the years, engineers have learned to overcome design problems through other means, because not everyone has access to compute resources. Even with ample servers, the perceived gain that the tools bring isn't outweighing the perceived costs, if that makes sense.

So to build a good FPGA/EDA platform, we just have to chip away bit by bit at the different problems. Some flows we have in beta, some we can and cannot release, so please bear with us and work with us if you have a demand for specific tools and flows.

> I also wondered why you are going after the FPGA market, where large
> regression tests are less common.

Because we believe that we can use existing FPGA tools to get better quality of results, on top of the "convenience" benefits of a platform.

> I would say it is an obstacle you have to overcome. The EDA vendor is
> not going to change their pricing model if they get less money. Most EDA
> vendors have a flexible license scheme; you can get 1- or 3-month term
> licenses from most of them. Setting up a small in-house regression farm
> (assuming an FPGA user) is also not a major undertaking. It basically all
> comes down to how many simulation licenses you are willing to pay for, and
> not the setup.

: ) Yes, I concur with what you wrote here, for what I tend to call the "convenience" benefits.

> But unfortunately for the big clients you need the big EDA tools. Nobody
> is going to sign off their $10M ASIC on a new, unproven tool. Also, the
> amount of investment and know-how required to enter this market is
> enormous, and even if you have something good you will be snapped up by
> one of the big three, so you are back to square one.
> I would put some pressure on them to endorse you.
> I would say it's all about $$$

This, I think, has some implications for the ASICs vs FPGAs question and is part of the challenge that small FPGA/EDA startups (including us) have to solve. However, it doesn't have to be a "small company versus big company" thing. It sounds a bit too idealistic perhaps, but I really think all the vendors and the customers need to work together to address this, so that the amount of $ to be made can grow as a whole.

> I have no doubt that cloud services will become more and more popular.
> It is just that for the moment I don't see a viable business model,
> especially if you are going after the FPGA community.
>
> My Mentor Precision tool has a feature called Precise-Explore which
> enables me to start 16 simultaneous synthesis and P&R runs, all from the
> comfort of my desktop.

From the infrastructure perspective, it's great that you have the hardware and licenses to do this, and it's great if this is all that you need -- these are the "convenience" benefits that I mentioned above.

From a quality-of-results point of view, what if I say that by using our tool from your desktop to run 16 simultaneous synthesis and P&R runs in your server farm / local PC / AWS, you can actually improve your synthesis and P&R results? Is that a better value proposition from your perspective? (That's what we're working on.)

> I hope I am not too negative, and I would be happy if you were to prove me
> wrong.
>
> Good luck with your business,

Thank you!

Cheers,
Harnhua

> Hans
>
> www.ht-lab.com
>
>> @Sean:
>>
>> I understand what you said about not having designs that are big enough to require a server farm, and agree that it is mostly trust, not (merely) technology that is the issue with entrusting a 3rd-party in another country with your confidential data.
>>
>> The point I'd like to make is, why judge us before you even get to know us? One of our first suggestions to customers is to NOT share their latest and greatest designs if they don't feel good doing so, and just offload their tedious tasks on stable designs, whatever they are comfortable with. For the record, we do encrypt your data before it is stored, if you want to store it. We are Singapore-based, and you can choose European servers if you are concerned about your data falling under US jurisdiction. And we're regular engineers and software developers. Do let's have a chat if you're interested in finding out more.
>>
>> We spend all our time making sure that our platform is secure and that supported design flows function smoothly at scale. This is what's putting bread on the table, so you can be sure that we will move heaven and earth to make our stuff work as customers'd expect it to. As cheesy as it might sound, the belief that Plunify can contribute to our industry as a whole also keeps our team motivated.
>>
>> Cheers,
>> Harnhua

Article: 156511
Hi Harnhua,

harnhua@plunify.com wrote:
>> What in your opinion is their advantage in letting you get into this
>> business as a gateway to their tools? The EDA industry has always worked on
>> the principle of /federating/ users [1] with its tools, to provide them what
>> they need to ship a product early and cheaply.
>> Now comes your idea that says: 'to hell with their tools, we offer ours
>> on *top* of them and the user no longer needs to deal with the X or A
>> suites'. Wouldn't this approach scare them?
[]
> On the contrary, we'd like customers to use the current FPGA tools as
> is, and only offload compute-intensive processes to a grid/cloud
> computing environment that we help set up. This is so that builds can
> be scaled up instantly in a guided way to get to design goals.

There might be some technical limitations with some tools. For example, I seem to be stuck with using only one core for Designer (the P&R tool from Microsemi), and virtualizing 1000 cores wouldn't do much, unless I'm running 1000 parallel processes with 1000 different sets of parameters...

> Something like suddenly having "1000 more servers trying strategies to
> get to timing closure in a day instead of in weeks." We develop
> plugins, not replacements.

I'm curious about this; I did not know tools like those can be extended with plug-ins. Does such a feature exist in all tools? Can you really plug extensions into simulators, synthesizers, etc.?

[]
>>> people wonder why the EDA and chip design industry are perceived as
>>> "sunset industries" nowadays, steadily losing all the innovation we
>>> once had. It is because of the fear of change.
>>
>> Or maybe because they do not see where their share in this change is.
>
> Yes, in their own way, large as they are, the big EDA 3 face many
> challenges. Opening up the industry, as you mentioned, is great for
> the end-user, but it must work for the vendors too. For instance, every
> vendor seems to want an ecosystem, but only around their tools...

Because their lock-in strategies come from a distorted view of profit. History has already shown the consequences of such strategies (see the 'Unix wars'). Unfortunately, proprietary hardware will always be fertile ground for proprietary software and for lock-in mechanisms that choke creativity and progress.

>> As an end user I'd say that pay-by-the-hour is the best solution, but
>> what do the tool vendors gain with this model?
>
> Potentially, a great deal more users than before is my claim--the
> so-called "long tail". They can still charge high hourly prices, and
> have been doing so for the large customers. Although tool purchase is
> far from the major cost component of making a chip, it does affect the
> entire workflow. Furthermore, the population of dabblers, hobbyists,
> students who will eventually replenish and sustain the industry will
> benefit from pay-as-you-use models and portability. I'm not saying
> anything new here, am I? ; )

Dabblers, hobbyists and students will not throw a dime into this business. There already exist enough 'free and open' tools that can let people play around enough to get their project/toy/hobby done, and the EDA vendors are providing free-of-charge licenses to let them play.

I've always been curious to understand what the differences are between the software community and the hardware one, and why we haven't yet gone through the same need to share, contribute and collaborate the way the software guys do.
In the end my conclusion is that you are always stuck with a proprietary tool, even if you can download it free of charge and use it without paying.

[]
>> The problem with encrypting storage is where the decryption key is stored.
>> Can you imagine how catastrophic a scenario would be in which you *lose*
>> the decryption key, or in which it is somehow forged?
>>
>> And regarding secure transmission, I hope you guys are aware of this:
>> https://www.openssl.org/news/secadv_20140407.txt
>
> You're describing the kind of technical problem for which I think
> there is an infinite loop of questions and answers. For example, to get
> a decryption key, one would have to break into the system first. To
> break into a system, one would have to first find the system. Security
> vulnerabilities are present in company networks as well as in Amazon.

To get a decryption key you do not need to get in; it is sufficient that somebody 'gets out' with it. It has long been known that secrecy does not guarantee security, and here is an excerpt from the PGP FAQ:

  Q: Can the NSA crack PGP (or RSA, DSS, IDEA, 3DES,...)?
  A: This question has been asked many times. If the NSA were able to
  crack RSA or any of the other well known cryptographic algorithms, you
  would probably never hear about it from them. Now that RSA and the
  other algorithms are very widely used, it would be a very closely
  guarded secret. The best defense against this is the fact the
  algorithms are known worldwide. There are many competent
  mathematicians and cryptographers outside the NSA and there is much
  research being done in the field right now. If any of them were to
  discover a hole in one of the algorithms, I'm sure that we would hear
  about it from them via a paper in one of the cryptography conferences.

> [] Doubters will always
> doubt--there will be people who will never contemplate using a cloud
> service for chip design, but I think with the proper communication and
> technology, more and more companies will turn to a "cloud" way of
> designing.

This shift can only happen if the big EDA vendors are willing to let it happen. That means users *and* new entrepreneurs are tied to a business model led by somebody else. Compiler vendors back in the late 80s were facing a dramatic shift because a competitive and much more powerful tool was distributed free of charge and free of lock-ins (gcc), and there were companies making a business out of it (Cygnus). Nowadays there cannot be a 'one man show' like in those days, and I agree that it is unlikely that someone will start to write an open and free synthesis tool, but without it our community is bound to the ups and downs of those big guys who are leading the show.

>> Saying you have put in a lot of effort is *all relative*. No matter how
>> skilled your network experts and software developers are, only a large
>> user base and a large number of 'eyes' can guarantee a level of trust
>> sufficient to migrate to the cloud.
>
> True; it's a chicken-and-egg situation which we can address only by
> working with customers.

If you limit your verification to your customers then you're likely going to fail. There are tons of incredibly gifted geeks out there, and if you do not open your systems to peer review, then you will never leverage that power. Your code will always be *yours* because you know every bit of it, not because you put a patent on it or closed it to the world.

Al

Article: 156512
On 14/04/14 08:48, alb wrote:
> I've always been curious to understand what the differences are between
> the software community and the hardware one, and why we haven't yet gone
> through the same need to share, contribute and collaborate the way
> the software guys do.

Because it is (1) possible and (2) necessary.

1) in some cases it is easy to have a dongle without obnoxious DRM: the hardware itself

2) inside the hardware there is a lot of highly commercially sensitive information that does not need to be (and is not) visible to the customer.

3) the internal structures and performance are imperfectly understood and modelled. That's a significant problem for the manufacturer, infeasible if you are trying to keep third parties aligned.

4) the up-front cost-of-entry (non-recurring engineering) charges are prohibitive. Starting small in a garage and then having incremental improvements is no longer an option.

Article: 156513
Hi Tom,

Tom Gardner <spamjunk@blueyonder.co.uk> wrote:
>> I've always been curious to understand what the differences are between
>> the software community and the hardware one, and why we haven't yet gone
>> through the same need to share, contribute and collaborate the way
>> the software guys do.
>
> Because it is (1) possible and (2) necessary.

I'm confused. Are you saying that 'sharing, contributing and collaborating is possible and necessary'? Then what are you trying to say with your following points?

> 1) in some cases it is easy to have a dongle without
> obnoxious DRM: the hardware itself

I don't get this point.

> 2) inside the hardware there is a lot of highly commercially
> sensitive information that does not need to be (and is not)
> visible to the customer.

There's a lot of highly commercially sensitive information in your latest i7, or whatever processor you are mounting in your PC, but that does not prevent you from running a completely free tool to build software for it.

> 3) the internal structures and performance are imperfectly
> understood and modelled. That's a significant problem for
> the manufacturer, infeasible if you are trying to keep
> third parties aligned.

Sorry, but I fail to understand this point as well.

> 4) the up-front cost-of-entry (non-recurring engineering)
> charges are prohibitive. Starting small in a garage and
> then having incremental improvements is no longer an option.

I'm certainly not dreaming about that. But I believe there are enough resources and capabilities which, if joined together, can make a big change.

Article: 156514
On 14/04/14 10:22, alb wrote:
> Hi Tom,
> Tom Gardner <spamjunk@blueyonder.co.uk> wrote:
>>> I've always been curious to understand what the differences are between
>>> the software community and the hardware one, and why we haven't yet gone
>>> through the same need to share, contribute and collaborate the way
>>> the software guys do.
>>
>> Because it is (1) possible and (2) necessary.
>
> I'm confused. Are you saying that 'sharing, contributing and collaborating
> is possible and necessary'?
> Then what are you trying to say with your following points?

Sorry! In the hardware world it is possible and necessary to avoid collaborating and sharing w.r.t. the internals of many devices, particularly with FPGAs.

>> 1) in some cases it is easy to have a dongle without
>> obnoxious DRM: the hardware itself
>
> I don't get this point.

That's the "it is possible to avoid..." part.

>> 2) inside the hardware there is a lot of highly commercially
>> sensitive information that does not need to be (and is not)
>> visible to the customer.
>
> There's a lot of highly commercially sensitive information in your
> latest i7, or whatever processor you are mounting in your PC, but
> that does not prevent you from running a completely free tool to build
> software for it.

They have an extremely comprehensive specification of what you can rely on seen from the outside. You can't modify anything on the inside, for many good reasons.

>> 3) the internal structures and performance are imperfectly
>> understood and modelled. That's a significant problem for
>> the manufacturer, infeasible if you are trying to keep
>> third parties aligned.
>
> Sorry, but I fail to understand this point as well.

That point is clear, provided that you have some concept of semiconductor modelling.

>> 4) the up-front cost-of-entry (non-recurring engineering)
>> charges are prohibitive. Starting small in a garage and
>> then having incremental improvements is no longer an option.
>
> I'm certainly not dreaming about that. But I believe there are enough
> resources and capabilities which, if joined together, can make a big
> change.

You need detailed plans as well as dreams. The devil is, as usual, in the details.

Article: 156515
OK, I can even add Synplify Pro to the list of tools that run on a single core as well.

Is this common behaviour that I just didn't happen to know about? (I'm starting to believe so.)

Al

alb <al.basili@gmail.com> wrote:
> Hi everyone,
>
> I had my vbox running with one core only (out of 4 on the host) and
> Designer was running extremely slowly, so I thought 'what the heck', I'm
> not doing much with the other 3 cores on my host, so maybe I'll throw
> another core at my vbox so that I can run Designer faster.
>
> What wishful thinking... designer_bin goes up to only 50% of my CPU
> resources, clearly hinting that it does not know how to run on a
> multicore system. Is this possible?
>
> Is there any way to bypass this problem?
>
> Thanks a lot,
>
> Al

-- 
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?

Article: 156516
alb <al.basili@gmail.com> wrote:
> OK, I can even add Synplify Pro to the list of tools that run on a
> single core as well.
>
> Is this common behaviour that I just didn't happen to know about? (I'm
> starting to believe so.)

Pretty much. Quartus has had the ability to use multiple cores for years, but until 13.x the average number of cores used was about 1.7. 13.x is a lot better, but still it's primarily the fitter that may make use of 4 or so cores (on a 16-physical/32-hyperthread machine).

Once this happens, Amdahl's law starts to bite: the parallel stage happens quick(er), but the rest of the synthesis is now the bottleneck.

Theo
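For reference, the number of cores the fitter may use is a per-project setting in Quartus; a minimal sketch of the QSF assignments (the assignment name is as documented around 13.x, but check your version):

    # In the project's .qsf: allow the fitter to use up to four cores...
    set_global_assignment -name NUM_PARALLEL_PROCESSORS 4
    # ...or let Quartus grab whatever the host offers.
    set_global_assignment -name NUM_PARALLEL_PROCESSORS ALL

Article: 156517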
Hi everyone,

I started adding false paths to my design (see this thread for context: <boojj9FgihhU1@mid.individual.net>), but in order to avoid funny names I started to add wildcards.

My false path constraints look like this:

set_false_path -from [get_cells execute0.r.ctrl_wrb.reg_d*] \
               -to [get_cells execute0.fpu0.REGISTER_o*]

And I have ~50 of them. Now the wildcards are expanded and I end up with ~60000 false paths, which is apparently too huge an amount for my P&R tool (Designer), which takes 40 minutes *just* to import the file and 7 hours to perform place and route.

I feel there's something wrong here... Any suggestions? The sad part is that it also fails to meet timing constraints, even though the synthesis looked like it had sufficient slack margin to handle routing delays.

Thanks in advance,

Al
-- 
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?

Article: 156518
Hi Tom,

Tom Gardner <spamjunk@blueyonder.co.uk> wrote:
[]
>> I'm confused. Are you saying that 'sharing, contributing and collaborating
>> is possible and necessary'?
>> Then what are you trying to say with your following points?
>
> Sorry! In the hardware world it is possible and necessary
> to avoid collaborating and sharing w.r.t. the internals of
> many devices, particularly with FPGAs

I'm not advocating that you share your crown jewels; I'm only advocating standards which allow X and Y to build their FPGAs while W and Z build the tools freely. In order to produce perfectly suitable machine code I do not need to know how the clock is routed in your CPU, nor how you optimized the carry chain; I simply need to know which opcodes and registers it uses.

If you want to make a very rough parallel with the FPGA world, I do not need to know more than the logic provided by a logic cell (which is documented in the datasheet) and the associated delays, which are also reported in the datasheet. [1]

A tool that is free as in *libre* can be a turning point in opening up our community to a much more effective exchange, with a set of tools which are constantly growing and increasing design efficiency.

[]
>>> 2) inside the hardware there is a lot of highly commercially
>>> sensitive information that does not need to be (and is not)
>>> visible to the customer.
>>
>> There's a lot of highly commercially sensitive information in your
>> latest i7, or whatever processor you are mounting in your PC, but
>> that does not prevent you from running a completely free tool to build
>> software for it.
>
> They have an extremely comprehensive specification
> of what you can rely on seen from the outside. You
> can't modify anything on the inside, for many good
> reasons.

And what prevents us from having a similar /comprehensive specification/ for an FPGA?

>>> 3) the internal structures and performance are imperfectly
>>> understood and modelled. That's a significant problem for
>>> the manufacturer, infeasible if you are trying to keep
>>> third parties aligned.
>>
>> Sorry, but I fail to understand this point as well.
>
> That point is clear, provided that you have some
> concept of semiconductor modelling.

Uhm, I thought I was lacking some *basic* information about /semiconductor modelling/, therefore I googled it, and Wikipedia reports:

> Semiconductor device modeling creates models for the behavior of the
> electrical devices based on fundamental physics, such as the doping
> profiles of the devices.

Internal structures are difficult to model, so what? Do you think my synthesis tool needs to know the physics of my device? Or can it simply rely on 'inaccurate but good enough' models of logic elements and delays?

What are the 'third parties' that need to be 'aligned' (and to what, then)?

>>> 4) the up-front cost-of-entry (non-recurring engineering)
>>> charges are prohibitive. Starting small in a garage and
>>> then having incremental improvements is no longer an option.
>>
>> I'm certainly not dreaming about that. But I believe there are enough
>> resources and capabilities which, if joined together, can make a big
>> change.
>
> You need detailed plans as well as dreams. The devil
> is, as usual, in the details.

You do not need a detailed plan; you need motivated and talented people. Often you either lack the motivation or the skills.

Al

[1] I may not have all the insight into what it takes to create a synthesis tool, and I'd be happy to hear it.

Article: 156519
I'm an FPGA newbie, working with the freeware Altera Quartus II IDE. I used the megafunction builder to create a FIFO memory; the .v file it generated is similar to the virtual prototypes created for COM interfaces, with a data structure and no functions. Is this all that the megafunction builder provides, so that I need to write my own Verilog code for, e.g., bumping the address registers and generating handshake signals? Or is that code generated for me already, in a file that I haven't found yet? TIA

Article: 156520
On Monday, April 14, 2014 9:17:25 AM UTC-4, Bruce Varley wrote:
> I'm an FPGA newbie, working with the freeware Altera Quartus II IDE. I used the
> megafunction builder to create a FIFO memory; the .v file it generated is
> similar to the virtual prototypes created for COM interfaces, with a data
> structure and no functions. Is this all that the megafunction builder provides,
> so that I need to write my own Verilog code for, e.g., bumping the address
> registers and generating handshake signals? Or is that code generated for me
> already, in a file that I haven't found yet? TIA

Quartus has a megafunction for a FIFO under 'Memory Compiler'; it has a typical set of FIFO interface signals. Which megafunction are you using?

Kevin Jennings
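To make the 'no functions' point concrete: the wizard-generated .v file is just a wrapper that instantiates the FIFO core, which already contains the address counters and flag logic, so no hand-written bookkeeping code is needed. A minimal single-clock sketch along the lines of what the Memory Compiler emits (port and parameter names follow Altera's scfifo megafunction; the module name and widths here are hypothetical):

    // Hypothetical 16-deep, 8-bit single-clock FIFO built on Altera's
    // scfifo megafunction. Address bumping and the full/empty flags
    // are handled inside the core.
    module fifo8x16 (
        input        clock,
        input  [7:0] data,   // write data
        input        wrreq,  // write request (push)
        input        rdreq,  // read request (pop)
        output [7:0] q,      // read data
        output       full,
        output       empty
    );
        scfifo #(
            .lpm_width     (8),    // data width
            .lpm_numwords  (16),   // FIFO depth
            .lpm_widthu    (4),    // width of the (unused) usedw port
            .lpm_showahead ("OFF") // classic, non-show-ahead read timing
        ) fifo_core (
            .clock (clock),
            .data  (data),
            .wrreq (wrreq),
            .rdreq (rdreq),
            .q     (q),
            .full  (full),
            .empty (empty)
        );
    endmodule

Article: 156521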
On 14/04/14 13:38, alb wrote:
> Hi Tom,
> Tom Gardner <spamjunk@blueyonder.co.uk> wrote:
> []
>>> I'm confused. Are you saying that 'sharing, contributing and collaborating
>>> is possible and necessary'?
>>> Then what are you trying to say with your following points?
>>
>> Sorry! In the hardware world it is possible and necessary
>> to avoid collaborating and sharing w.r.t. the internals of
>> many devices, particularly with FPGAs
>
> I'm not advocating that you share your crown jewels; I'm only advocating
> standards which allow X and Y to build their FPGAs while W and Z build
> the tools freely. In order to produce perfectly suitable machine code I
> do not need to know how the clock is routed in your CPU, nor how you
> optimized the carry chain; I simply need to know which opcodes and
> registers it uses.
>
> If you want to make a very rough parallel with the FPGA world, I do not
> need to know more than the logic provided by a logic cell (which is
> documented in the datasheet) and the associated delays, which are also
> reported in the datasheet. [1]

You almost certainly do need to know more, unless you are just making a toy tool, or one vastly simplified so that it is best suited for educational purposes.

Start by considering wire delays, routing constraints, simultaneous switching transients, and many more non-ideal aspects of operation that need to be considered in modern FPGAs.

Look at a vendor's tool to see the type of constraints that have to be specified by the user in order to have a reproducible implementation. Use a vendor's tool to see the type of warnings that the tool emits. Have a look at the Xilinx documentation for just one of its families. Download their "Documentation Navigator-2013.4 Utilities" for their Vivado tool.

> A tool that is free as in *libre* can be a turning point in opening up
> our community to a much more effective exchange, with a set of tools
> which are constantly growing and increasing design efficiency.

Go ahead and do it. It would help if you have already completed several designs, since then you will know the pain points that designers would like removed. There's little point in inventing yet another way of doing something that is already easy to do.

>>>> 2) inside the hardware there is a lot of highly commercially
>>>> sensitive information that does not need to be (and is not)
>>>> visible to the customer.
>>>
>>> There's a lot of highly commercially sensitive information in your
>>> latest i7, or whatever processor you are mounting in your PC, but
>>> that does not prevent you from running a completely free tool to build
>>> software for it.
>>
>> They have an extremely comprehensive specification
>> of what you can rely on seen from the outside. You
>> can't modify anything on the inside, for many good
>> reasons.
>
> And what prevents us from having a similar /comprehensive
> specification/ for an FPGA?

See my point 2. If you don't understand that, then it indicates you haven't implemented a design for a modern FPGA. Browse the documents in Xilinx's "Documentation Navigator-2013.4 Utilities"; it might enlighten you.

>>>> 3) the internal structures and performance are imperfectly
>>>> understood and modelled. That's a significant problem for
>>>> the manufacturer, infeasible if you are trying to keep
>>>> third parties aligned.
>>>
>>> Sorry, but I fail to understand this point as well.
>>
>> That point is clear, provided that you have some
>> concept of semiconductor modelling.
>
> Uhm, I thought I was lacking some *basic* information about
> /semiconductor modelling/, therefore I googled it, and Wikipedia reports:
>
>> Semiconductor device modeling creates models for the behavior of the
>> electrical devices based on fundamental physics, such as the doping
>> profiles of the devices.
>
> Internal structures are difficult to model, so what? Do you think my
> synthesis tool needs to know the physics of my device?

It needs to model them in sufficient detail to be able to get good, reliable predictions and implementations.

> Or can it simply
> rely on 'inaccurate but good enough' models of logic elements and
> delays?

Not sufficient.

> What are the 'third parties' that need to be 'aligned' (and to what, then)?

People like you, to the secret internal non-idealities.

>>>> 4) the up-front cost-of-entry (non-recurring engineering)
>>>> charges are prohibitive. Starting small in a garage and
>>>> then having incremental improvements is no longer an option.
>>>
>>> I'm certainly not dreaming about that. But I believe there are enough
>>> resources and capabilities which, if joined together, can make a big
>>> change.
>>
>> You need detailed plans as well as dreams. The devil
>> is, as usual, in the details.
>
> You do not need a detailed plan; you need motivated and talented people.
> Often you either lack the motivation or the skills.

Such people are necessary but not sufficient. If you don't have a plan then, by definition, you don't know what you are going to do - let alone how to do it.

Article: 156522
On Monday, April 14, 2014 3:26:29 PM UTC+3, Theo Markettos wrote:
> alb <al.basili@gmail.com> wrote:
>
>> OK, I can even add Synplify Pro to the list of tools that run on a
>> single core as well.
>>
>> Is this common behaviour that I just didn't happen to know about? (I'm
>> starting to believe so.)
>
> Pretty much. Quartus has had the ability to use multiple cores for years,
> but until 13.x the average number of cores used was about 1.7. 13.x is a
> lot better, but still it's primarily the fitter that may make use of 4 or
> so cores (on a 16-physical/32-hyperthread machine).

I didn't measure exactly, but my impression was that the Quartus 13.1 fitter running on 4 cores gets the job done approximately 1.5x faster than when running on a single core. A nice speed-up, but hardly a game changer. Maybe that's because the machine in question is a rather old Nehalem-based i7-920; relative to newer Intel cores, Nehalem is severely constrained by L3 cache bandwidth.

> Once this happens, Amdahl's law starts to bite: the parallel stage happens
> quick(er), but the rest of the synthesis is now the bottleneck.
>
> Theo

I am especially annoyed by the slowness of the Altera assembler and the TimeQuest timing analyzer, because these comparatively simple tools (compared to the fitter) do not have to be slow.

Article: 156523
On 4/14/2014 5:38 AM, alb wrote:
> Hi everyone,
>
> I started adding false paths to my design (see this thread for
> context: <boojj9FgihhU1@mid.individual.net>), but in order to avoid funny
> names I started to add wildcards.
>
> My false path constraints look like this:
>
> set_false_path -from [get_cells execute0.r.ctrl_wrb.reg_d*] \
>                -to [get_cells execute0.fpu0.REGISTER_o*]
>
> And I have ~50 of them. Now the wildcards are expanded and I end up with
> ~60000 false paths, which is apparently too huge an amount for my P&R tool
> (Designer), which takes 40 minutes *just* to import the file and 7 hours to
> perform place and route.
>
> I feel there's something wrong here... Any suggestions? The sad part is
> that it also fails to meet timing constraints, even though the synthesis
> looked like it had sufficient slack margin to handle routing delays.
>
> Thanks in advance,
>
> Al

I think you need to do your false path declaration from clock domain to clock domain, not register to register.

BobH

Article: 156524
On Monday, April 14, 2014 8:48:15 PM UTC-4, BobH wrote:
>
> I think you need to do your false path declaration from clock domain to
> clock domain, not register to register.

Don't think so; the paths are between registers. Saying that every transfer between clock domains is false would be overly pessimistic (based on the OP's posting in the other thread that he referenced).

Just in general, the bad thing about marking clock domain to clock domain as a false path, rather than individual (or wildcarded) paths, is that there is then no check that you don't incorrectly insert such a crossing. If you declare all clock domain crossings to be false, you have nothing in the timing analyzer to check that you haven't overlooked something. If you do declare them by path, then at least you would have had to look at the path at some time and convince yourself that the path is not valid. It's not foolproof, but it's better than nothing... unless of course it means that now the design won't build.

Kevin Jennings
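To make the two styles concrete, a minimal SDC sketch (the clock and register names are hypothetical):

    # Style 1: blanket domain-to-domain false path -- concise, but it
    # silently waives every future crossing between these clocks too.
    set_false_path -from [get_clocks clk_a] -to [get_clocks clk_b]

    # Style 2: per-path false paths -- verbose (and, as the OP found,
    # slow when wildcards expand to thousands of paths), but each
    # waiver names a crossing somebody actually inspected.
    set_false_path -from [get_cells sync0.src_reg*] \
                   -to   [get_cells sync0.meta_reg*]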