Site Home Archive Home FAQ Home How to search the Archive How to Navigate the Archive
Compare FPGA features and resources
Threads starting:
Authors:A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
> Xilinx FPGA's are nice but all of them (after XC4K?) do not have any more
> access to the OnChipOscillator - it is not usually required also, but in
> some rare cases it may be useful to have some OnChip Clock available in case
> all external clock sources fail, or do have emergency Watchdog timer to
> monitor some events also in the case of external clock circuitry failures.
> For this purpose we are developing OnChip Oscillator IP Cores.
>
> http://gforge.openchip.org/frs/?group_id=32

Just looked at the sources - there are binary VHDL files. What does this mean?

Martin

Article: 87526
Just saw that Lattice announced a free open source 8 bit micro on their site. http://www.latticesemi.com/products/devtools/ip/refdesigns/mico8.cfm

Any code wizards out there who can write a C compiler for this mico8? This open core would sure make life easier than dealing with the lawyer documents needed for an IP core.

Teo

Article: 87527
Are there any constraints for the V2P to disallow routing in a certain area?

Article: 87528
And while you are at it,

Make a C compiler for pico-Blaze, too.

Austin

Article: 87529
Amr,

As Peter says, there is no "wear out" mechanism at work, so we are talking about a latent defect that finally results in a junction failure, or a gate rupture, or a mechanical failure of the connections (solder bump cracking, package via opening, printed circuit board wiring breaking).

The HTOL data is available from our Reliability group through our FAEs to users who wish to examine it (under NDA). Basically here you will see the equations, the tests at elevated temperatures, and any failures that have occurred, and what and where they were (and what changes were made to improve the product as a result).

For example, I have ~ 6,000 devices running 24x7 for the Rosetta experiments, and I have not had a single failure in ~ 300 gigabit-years (take the number of bits in a device, multiply by years, and divide by 1E9 to get Gb-yrs). So normal operation will never predict a FIT rate (it just takes too long to break).

Another way of looking at this is that those 6,000 devices have run for an average of two years. That makes it 12,000 device-years. Or if one failed right now, that would be 9.5 FIT. So, under normal conditions, the failure rate is much less than 9.5 FIT.

To get a statistically significant number, the HTOL conditions place the device under conditions it should never see. This totally arbitrary test method is the industry standard, and doesn't predict the real failure rate well at all. It will indicate if a part is in trouble, however, as a part will fail the HTOL qualification miserably if there is a weakness.

Austin

Article: 87530
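Austin's 9.5 FIT figure can be checked with a quick back-of-the-envelope calculation. A FIT is one failure per 10^9 device-hours; the sketch below is my reconstruction of his arithmetic (the 8766 hours/year conversion and the rounding are my assumptions, not from the post):

```python
# Back-of-the-envelope FIT arithmetic from Austin's numbers.
# FIT = failures per 1e9 device-hours.

HOURS_PER_YEAR = 24 * 365.25  # 8766 h/yr (my assumption for the conversion)

def fit_rate(failures, device_years):
    """Failure rate in FITs given a failure count and total device-years."""
    device_hours = device_years * HOURS_PER_YEAR
    return failures / device_hours * 1e9

# ~6,000 devices running for an average of two years = 12,000 device-years.
device_years = 6000 * 2

# "If one failed right now" works out to roughly 9.5 FIT, matching the post.
print(round(fit_rate(1, device_years), 1))  # 9.5
```

With zero observed failures the estimate is only an upper bound, which is why Austin says the real rate is "much less than 9.5 FIT" and why HTOL stress testing is needed to get a statistically significant number in reasonable time.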
"Martin Schoeberl" <mschoebe@mail.tuwien.ac.at> schrieb im Newsbeitrag news:42e540a2$0$8024$3b214f66@tunews.univie.ac.at...
> > > Xilinx FPGA's are nice but all of them (after XC4K?) do not have any more
> > > access to the OnChipOscillator - it is not usually required also, but in
> > > some rare cases it may be useful to have some OnChip Clock available in case
> > > all external clock sources fail, or do have emergency Watchdog timer to
> > > monitor some events also in the case of external clock circuitry failures.
> > > For this purpose we are developing OnChip Oscillator IP Cores.
> > >
> > > http://gforge.openchip.org/frs/?group_id=32
> >
> > Just looked at the sources - There are binary VHDL files. What does
> > this mean?
> >
> > Martin
>
This means they can only be used by ISE tools. There should be no problems using those files with ISE 6.2 to 7.1, just add to your project and synthesise as normal. If there are any problems let me know.

Antti

Article: 87531
austin wrote:
> And while you are at it,
>
> Make a c compiler for pico-Blaze, too.
>
> Austin

It's been done...

By the way I was surprised that Lattice is really offering "open" source and not a free license to use on Lattice parts only.

Article: 87532
"Jerry Avins" <jya@ieee.org> wrote in message news:Moqdnb_Afo1SZXnfRVn-2Q@rcn.net...
> Fred Marshall wrote:
>> "Jerry Avins" <jya@ieee.org> wrote in message
>> news:2qqdnZsKM7QXUn7fRVn-tg@rcn.net...
>
> "Freeze" is a bad word in the sewerage business. Requirements change as
> effluent limits are changed by government and communities grow.
>
> Jerry
> --

Well, it would be in a lot of businesses. The context of freezing, in my mind, is short time frame vs. medium or longer time frame. You adopt the concept of freezing in order to keep things moving in the short term. You adapt as you must. If you must adapt in the short term then so be it. However, that is "requirements pull" vs. "technology push". Requirements pull probably always trumps technology push.

I well understand your comment: We built a new wastewater plant (2 batch reactors) for our City. It was a 100% replacement. It was designed to have adequate capacity margin. It was deemed "topped out" by the Dept. of Ecology (DOE) when it came on line - with virtually no new hookups. Why? Because the engineers designed it under the assumption that inflow and infiltration (I&I) would be drastically reduced as part of a parallel project. The City is over 100 years old and some of the pipes are quite old and there is undoubtedly stormwater intentionally (but undocumented / unknown) piped into the sanitary sewer from long ago. Nobody in their right mind would predict a large % decrease in I&I by virtue of a single fixit project in a City of this type. But they did.

So, we built a 3rd reactor recently. As we improve I&I, the DOE allows more capacity, but the rate of improvement is slow, so we had to do this.

Did the requirements change? No; not during construction, because the "requirements" were those assumed by the engineers. And yes; for the better, but not as fast as expected and planned for.
And no; because the engineers' requirements didn't line up with the performance used by the DOH. So, the requirements were faulty and a change in a moderate time frame was necessary.

Actually, other changes were done on the first two batch reactors to make them more productive - but, again, it was post-construction because that's when the idea struck. Actually, I think the latter change was a matter of requirements pull and realization that there was better technology available.

You freeze in order to keep out the inadvisable changes that would come if a variety of personalities had free rein. So rather than diminishing the value of the idea outright, it's best to consider that there might be good purpose and application of the idea.

Here's a context for it: You have a bunch of folks designing something, and some of them are really bright but aren't very "common sense". They keep changing things. The project stretches because of the spinoff effects of the changes. Little is gained by the stretch - in fact, it costs money. So, you create some hurdles for the changes to cross; you set some (perhaps arbitrary) time points where there will be a "freeze" - which is a hurdle.

First, changes are easy to make. Then they are made a little harder to make. Then they are made pretty hard to make. Much can have to do with how "integrated" the thing is that's being changed, or how far into production it is, or how many already exist in the field that have to be maintained, etc. etc.

Fred

Article: 87533
Alright, my question is not about choosing Xilinx or Altera. From what I have read online, it seems that the choice mostly depends on personal experiences with the respective companies.

My question deals with soft-cores. If I write a soft-core in VHDL, is it possible to implement it on Xilinx, Altera, Actel or other FPGA brands using their respective synthesizer software? To me VHDL is VHDL, so I do not understand why soft-cores written in VHDL, like PicoBlaze or MicroBlaze, would be FPGA company specific. What are the basic architectural differences of FPGAs which would prevent implementation of a soft-core (maybe like MicroBlaze) on Altera FPGAs? Is comparing Xilinx FPGAs and Altera FPGAs really like comparing apples and oranges?

And then what about the newer FPGAs from Atmel (FPSLIC) or Actel etc. Are there any publications which benchmark the different FPGAs and their technologies? Is there a way to design soft-cores in VHDL so that they can be implemented on any reconfigurable platform, like Xilinx FPGAs, Atmel FPGAs (FPSLIC) or Altera FPGAs?

At this stage I am just experimenting with different soft-cores for my thesis project, and I am just curious about all the excitement caused by the marketing blitz around newer FPGA products like FPSLIC, or Actel's FPGAs etc. My perspective on this topic is educational and research based, so I would appreciate any help in that context. Thanks in advance for all your suggestions.

-Yaju N
Electrical and Computer Engineering
Brigham Young University, Provo UT
y a j u a t b y u . edu

Article: 87534
"Gabor" <gabor@alacron.com> schrieb im Newsbeitrag news:1122324344.127535.218330@g44g2000cwa.googlegroups.com...
> austin wrote:
>> And while you are at it,
>>
>> Make a c compiler for pico-Blaze, too.
>>
>> Austin
>
> It's been done...
>
> By the way I was surprised that Lattice is really offering
> "open" source and not a free license to use on Lattice
> parts only.

This was VERY clever of them. You understand it if you think about it. As they are releasing the sources they cannot actually prevent the use of it, so it makes more business sense for them to allow the use in non-Lattice silicon.

Antti

Article: 87535
"scottfrye" <scottf3095@aol.com> wrote in message news:1122308450.491025.165850@g14g2000cwa.googlegroups.com...
>> ... For some reason, managers and customers seem to think that software can be
>> changed much more easily than mechanical systems... so they do.
>
> Do you think that software is as hard to change as mechanical systems?
>
> Or is it possible that, even though software is easier to change than a
> mechanical system, it still requires some work, and many
> managers/customers assume easier change = free change?

It depends. It's pretty easy to change the diameter of a hole in AutoCAD. So, if that's the context of a change to a mechanical system, then it's just as easy as changing a line of code. In fact, it's equivalent to changing the value of a constant, and that's how it's done. Now, if the next step is to reprogram an NC machine, then that's something additional. But isn't that similar to linking the new compiled code...? Maybe.

Fred

Article: 87536
I know the device is not recommended for new designs; I use it for prototyping. The final product will be based on an ASIC with an ARM core inside. I don't know of any other prototyping solution that can offer a 200MHz ARM9 processor + FPGA.

Arie

Article: 87537
Yaju Nagaonkar wrote:
> My question deals with soft-cores. If I write a soft-core in VHDL, is
> it possible to implement it on a Xilinx, Altera, Actel or other FPGA
> brands using their respective synthesizer software?
> To me VHDL is VHDL, so I do not understand why soft-cores written in
> VHDL like Picoblaze,Microblaze would be FPGA company specific?

A vendor core is a netlist of vendor-specific primitives. It is synthesis output. The VHDL/Verilog source code is the synthesis input. Source code costs extra.

If you write synchronous processes and use the standard block-ram templates, your source code is portable. If you instantiate vendor-specific primitives, your source code will only work with one vendor.

> I am just curious about all the excitement caused
> by the marketing blitz caused by newer FPGA products like FPSLIC, or
> Actel's FPGA etc.

I guess I missed the excitement.

-- Mike Treseler

Article: 87538
Hi Martin,

I figured out the problem. The driver that Xilinx releases isn't the best, and it is very fragile. It compiled today with no issues, but yesterday it didn't. The windrvr.o didn't want to build due to an error with the file permissions. I just chmod 777 the whole directory and re-made it, and it works.

Article: 87539
"Mike Treseler" <mike_treseler@comcast.net> schrieb im Newsbeitrag news:3kl54rFubnhkU1@individual.net...
> Yaju Nagaonkar wrote:
>
>> My question deals with soft-cores. If I write a soft-core in VHDL, is
>> it possible to implement it on a Xilinx, Altera, Actel or other FPGA
>> brands using their respective synthesizer software?
>> To me VHDL is VHDL, so I do not understand why soft-cores written in
>> VHDL like Picoblaze,Microblaze would be FPGA company specific?
>
> A vendor core is a netlist of vendor-specific primitives.
> It is synthesis output.
> The VHDL/Verilog source code is the synthesis input.
> Source code costs extra.
>
> If you write synchronous processes and use the standard
> block-ram templates your source code is portable.
> If you instance vendor specific-primitives, your
> source code will only work with one vendor.
>
>> I am just curious about all the excitement caused
>> by the marketing blitz caused by newer FPGA products like FPSLIC, or
>> Actel's FPGA etc.
>
> I guess I missed the excitement.
>
> -- Mike Treseler

Newer FPGA, as FPSLIC???? The FPGA in FPSLIC is the VERY OLD AT40K - nothing new about it. (Well, the FPSLIC-II is coming, but I guess there is nothing new in it either.)

Antti

Article: 87540
"arie" <aries@wisair.com> schrieb im Newsbeitrag news:1122326731.202651.187530@g44g2000cwa.googlegroups.com...
> I know the device is not recomended for new designs, i use it for
> prototyping, final product will be based on ASIC with ARM core inside.
> i dont know of any other protoyping solution that can offer 200MHZ
> ARM9 processor + fpga.
>
> Arie

Try the STW22000 from ST and forget Altera: ARM926 300MHz + FPGA.

Antti

Article: 87541
I guess I have a lot to catch up on in the FPGA technology area. I just learnt about FPSLIC, as Atmel has been advertising it as a better FPGA choice in recent magazines.

Although I have not used FPSLIC, I am curious whether it is easy to port VHDL for Xilinx (used at my univ.) to FPSLIC?

Also, in terms of soft-core processors, are there FPGA-portable open source soft-core processors to download to FPGAs? If there are, wouldn't those be a good choice for benchmark studies with different FPGAs?

Are most of the open cores (example: opencores.org) designed specifically for an FPGA brand (Xilinx/Altera)?

Thank you for your answers and helping me learn more about FPGA technologies.

-Yaju

Article: 87542
"Yaju Nagaonkar" <yaj_n@hotmail.com> schrieb im Newsbeitrag news:1122330050.620218.117110@g47g2000cwa.googlegroups.com...
> I guess I have lot to catch up on in the FPGA technolgy area. I just
> learnt about FPSLIC, as Atmel has been advertising it as a better FPGA
> choice in recent magazines.
>
> Although I have not used FPSLIC, I am curious whether, it is easy to
> port VHDL for Xilinx (used at my univ.) to FPSLIC?
>
> Also in term of soft-core processors, are there FPGA-portable open
> source soft-core processors to download to FPGAs? If there are, wouldnt
> those be a good choice for benchmark studies with different FPGAs?
>
> Are most of the open-cores (example: opencores.org) designed
> specifically for fpga brand (xilnx/alterra)?
>
> Thank you for your answers and helping me learn more about FPGA
> technolgies.
>
> -Yaju

Most open cores are not FPGA dependent, or can be modified easily to work with different FPGAs.

As for the Atmel FPSLIC - this is almost the only FPGA where the FPGA vendor does not provide free software. And the software doesn't work either; it is almost impossible to use.

Antti

Article: 87543
Has anyone seen any timing specs for the V4 local/regional clocks? I don't see anything in the data sheets. I'm interested in the clock to Q output delay from a local clock pad to an output pad, obviously without using a DCM. I'll probably be using the output SERDES capabilities of the IOB as well.

Thanks!
John Providenza

Article: 87544
"austin" <austin@xilinx.com> schrieb im Newsbeitrag news:1122322992.158655.308900@g43g2000cwa.googlegroups.com...
> And while you are at it,
>
> Make a c compiler for pico-Blaze, too.
>
> Austin

Well, I made a Xilinx version of it ;) Project files are at OpenForge: http://gforge.openchip.org/projects/mico8/

The synthesis report is at the end of the message. 0% utilization is a nice feature to see!!! Does it mean I can use an infinite number of Mico8 in an S3-1500? OK, joking ;)

Antti

Report for Mico8 (Xilinx version):

Device utilization summary:
---------------------------
Selected Device : 3s1500fg676-5

 Number of Slices:             130 out of 13312   0%
 Number of Slice Flip Flops:    71 out of 26624   0%
 Number of 4 input LUTs:       199 out of 26624   0%
 Number of bonded IOBs:         30 out of   487   6%
 Number of BRAMs:                1 out of    32   3%
 Number of GCLKs:                1 out of     8  12%

=========================================================================
TIMING REPORT

Clock Information:
------------------
-----------------------------------+------------------------+-------+
Clock Signal                       | Clock buffer(FF name)  | Load  |
-----------------------------------+------------------------+-------+
clk                                | BUFGP                  | 84    |
-----------------------------------+------------------------+-------+

Timing Summary:
---------------
Speed Grade: -5

   Minimum period: 7.568ns (Maximum Frequency: 132.130MHz)
   Minimum input arrival time before clock: 2.571ns
   Maximum output required time after clock: 6.306ns
   Maximum combinational path delay: No path found

Article: 87545
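As an aside, the "0%" rows in the utilization summary are just integer truncation: 130 slices out of 13312 is under 1%, so it prints as 0%. A small sketch reproduces the report's percentage column (that the tool truncates rather than rounds is my reading of the non-zero rows, not something the report states):

```python
# Reproduce the utilization percentages from the Mico8 synthesis report.
# Integer (truncating) division gives exactly the figures printed.

rows = [
    ("Slices",           130, 13312),
    ("Slice Flip Flops",  71, 26624),
    ("4 input LUTs",     199, 26624),
    ("Bonded IOBs",       30,   487),
    ("BRAMs",              1,    32),
    ("GCLKs",              1,     8),
]

for name, used, total in rows:
    pct = used * 100 // total  # truncated whole-number percentage
    print(f"{name}: {used} / {total} = {pct}%")
```

The truncated percentages match every non-zero row of the report (6%, 3%, 12%), so "0%" here really means "under 1%" - roughly 1% of the S3-1500's slices per Mico8, i.e. about a hundred copies by slice count alone, not an infinite number.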
"Peter K." <p.kootsookos@iolfree.ie> writes:

> Randy Yates wrote:
>
>> 6. Be patient.
>>
>> Waiting for a system to be properly designed will almost always be
>> more efficient (time-wise) than "waterfall development," i.e., the
>> type of development for impatient managers that want to see something
>> almost immediately.
>
> I disagree. A better model (than either big up-front design or
> waterfall), in software development, is the spiral / agile model of
> development. I've found this model to be excellent, especially when
> the requirements change or are not fully elicited --- as they usually
> are in the sorts of software I've developed (greenfield ones).

I'm not familiar with the spiral/agile model, but you'll have to do some pretty fancy talking to convince me to agree that something other than a strict development cycle flow is going to produce a better system in the long run.

The problem I've seen is that, in the process of stumbling over one's feet to get _something_ out that works, that the managers can touchie-feelie-see, one ends up skimping or dumbing down the high level design. Then, when that happens, the remainder of the project's development is crippled, with the end result either that it took MORE time than it would have taken had a proper high level design been performed, or that the performance/maintainability/extendability of the system is greatly compromised.

> Big up-front design can be a killer when requirements change. For some
> reason, managers and customers seem to think that software can be
> changed much more easily than mechanical systems... so they do. The
> trick is to figure out when they're really expressing a new
> requirement, or just jumping on the latest Big Thing.

You can't have your cake and eat it too. Either take a hit sticking to your requirements, or dumb-down your system.
--
%  Randy Yates                  % "I met someone who looks alot like you,
%% Fuquay-Varina, NC            %  she does the things you do,
%%% 919-577-9882                %  but she is an IBM."
%%%% <yates@ieee.org>           % 'Yours Truly, 2095', *Time*, ELO
http://home.earthlink.net/~yatescr

Article: 87546
Antti Lukats wrote:
> "Martin Schoeberl" <mschoebe@mail.tuwien.ac.at> schrieb im Newsbeitrag
> news:42e540a2$0$8024$3b214f66@tunews.univie.ac.at...
>
>>> Xilinx FPGA's are nice but all of them (after XC4K?) do not have any more
>>> access to the OnChipOscillator - it is not usually required also, but in
>>> some rare cases it may be useful to have some OnChip Clock available in case
>>> all external clock sources fail, or do have emergency Watchdog timer to
>>> monitor some events also in the case of external clock circuitry failures.
>>> For this purpose we are developing OnChip Oscillator IP Cores.
>>>
>>> http://gforge.openchip.org/frs/?group_id=32

Interesting - any data on Vcc and temp variations, and on other frequencies? Power consumption?

Seems to me, it would be better to create a clock as low as possible, before driving the high-load clock buffers - thus a cell that is both OSC and Divider could be better?

An advantage of an on-chip Osc is that they self-margin, so they track Vcc and Temp. If I've understood your results, they show quite close correlation across the die. This would also be a good way to see if faster speed grades REALLY are faster, or just stamped to match the market :)

>> Just looked at the sources - There are binary VHDL files. What does
>> this mean?
>>
>> Martin
>
> This means they can only be used by ISE tools. There should be no problems
> using those files with ISE 6.2 to 7.1, just add to your project and
> synthesise as normal. If there are any problems let me know.

?! - do you mean you cannot save as ASCII source code, or use any other editor? Surely this nonsense can be disabled?

-jg

Article: 87547
Hi Austin,

If I understand you correctly, the FIT numbers given in reliability reports are not representative of anything actually useful. They are just provided to comply with the industry standard.

One more thing: according to the Arrhenius relationship and the way the thermal acceleration factor is calculated, there is only one stress junction temperature for testing the device. In other words, the devices are exposed to constant stress loading as opposed to step or random stress. Is that right?

One last issue: is the voltage acceleration factor used in calculating the FIT numbers given in the Xilinx reliability report? Or is it just the thermal acceleration factor? I'm not sure you are allowed to answer this last question :)

Thanks,
Amr

Article: 87548
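For reference, the thermal acceleration factor Amr mentions is conventionally computed from the Arrhenius relationship: AF = exp((Ea/k) * (1/T_use - 1/T_stress)), with junction temperatures in kelvin. The sketch below uses an illustrative 0.7 eV activation energy and example temperatures - these are assumptions for illustration, not values from any Xilinx reliability report:

```python
import math

# Conventional Arrhenius thermal acceleration factor (textbook formula).
# The 0.7 eV activation energy and the junction temperatures below are
# illustrative assumptions only.

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def thermal_af(ea_ev, t_use_c, t_stress_c):
    """AF = exp((Ea/k) * (1/T_use - 1/T_stress)), temperatures in Celsius."""
    t_use = t_use_c + 273.15
    t_stress = t_stress_c + 273.15
    return math.exp((ea_ev / K_BOLTZMANN_EV) * (1.0 / t_use - 1.0 / t_stress))

# e.g. HTOL at a 125 C junction vs. a 55 C use condition: each stress hour
# counts as many use hours (~78x for these assumed values).
print(thermal_af(0.7, 55, 125))
```

Note that a single fixed stress temperature gives a single constant AF, which is exactly the "constant stress loading, as opposed to step or random stress" that Amr describes.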
Fred Marshall wrote:
> "Jerry Avins" <jya@ieee.org> wrote in message
> news:Moqdnb_Afo1SZXnfRVn-2Q@rcn.net...
>
>> Fred Marshall wrote:
>>
>>> "Jerry Avins" <jya@ieee.org> wrote in message
>>> news:2qqdnZsKM7QXUn7fRVn-tg@rcn.net...
>>
>> "Freeze" is a bad word in the sewerage business. Requirements change as
>> effluent limits are changed by government and communities grow.
>>
>> Jerry
>> --
>
> Well, it would be in a lot of businesses. The context of freezing, in my
> mind, is short time frame vs. medium or longer time frame. You adopt the
> concept of freezing in order to keep things moving in the short term. You
> adapt as you must. If you must adapt in the short term then so be it.
> However, that is "requirements pull" vs. "technology push". Requirements
> pull probably always trumps technology push.
>
> I well understand your comment:
> We built a new wastewater plant (2 batch reactors) for our City. It was a
> 100% replacement. It was designed to have adequate capacity margin. It was
> deemed "topped out" by the Dept. of Ecology (DOE) when it came on line -
> with virtually no new hookups. Why? Because the engineers designed it
> under the assumption that inflow and infiltration (I&I) would be drastically
> reduced as part of a parallel project. The City is over 100 years old and
> some of the pipes are quite old and there is undoubtedly stormwater
> intentionally (but undocumented / unknown) piped into the sanitary sewer
> from long ago. Nobody in their right mind would predict a large % decrease
> in I&I by virtue of a single fixit project in a City of this type. But they
> did.
>
> So, we built a 3rd reactor recently. As we improve I&I, the DOE allows more
> capacity but the rate of improvement is slow so we had to do this.
>
> Did the requirements change? No; not during construction because the
> "requirements" were those assumed by the engineers. And yes; for the better
> but not as fast as expected and planned for.
> And no; because the engineers
> requirements didn't line up with the performance used by the DOH. So, the
> requirements were faulty and a change in a moderate time frame was
> necessary.
> Actually, other changes were done on the first two batch reactors to make
> them more productive - but, again, it was post-construction because that's
> when the idea struck. Actually, I think the latter change was a matter of
> requirements pull and realization that there was better technology
> available.
>
> You freeze in order to keep out the inadvisable changes that would come if a
> variety of personalities had free rein. So rather than diminishing the
> value of the idea outright, it's best to consider that there might be good
> purpose and application of the idea.
>
> Here's a context for it:
> You have a bunch of folks designing something and some of them are really
> bright but aren't very "common sense". They keep changing things. The
> project stretches because of the spinoff affects of the changes. Little is
> gained by the stretch - in fact, it costs money. So, you create some
> hurdles for the changes to cross; you set some (perhaps arbitrary) time
> points where there will be a "freeze" - which is a hurdle.
> First changes are easy to make.
> Then they are made a little harder to make.
> Then they are made pretty hard to make.
> Much can have to do with how "integrated" the thing is that's being changed
> or how far into production it is or how many already exist in the field that
> have to be maintained, etc. etc.

I&I is always a problem. The sanitary lines under approximately 1500 slab-floor houses, including mine, were surreptitiously cracked just before the slabs were poured to provide drainage to avoid their getting damp from ground water. When it rains, it pours in.
Since each community pays for its portion of operating expenses _and_sunk_capital_expenses_ in proportion to its flow, there is strong incentive for each participant to reduce its flow to the minimum. Expenditures for I&I reduction are sound investment. We could swap stories all week, but I think it's getting pretty remote for now.

Jerry
--
Engineering is the art of making what you want from things you can get.
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯

Article: 87549
Randy Yates wrote:
> I'm not familiar with the spiral/agile model, but you'll have to
> do some pretty fancy talking to convince me to agree that something
> other than a strict development cycle flow is going to produce a better
> system in the long run.

Oh, it's still a strict development cycle... if anything, it's more strict than the big design up front. It just panders a little better to customers and, in my experience, handles changing requirements much better.

> The problem I've seen is that, in the process of stumbling over one's
> feet to get _something_ out that works, that the managers can
> touchie-feelie-see, one ends up skimping or dumbing down the high
> level design.

Not at all. You don't dumb down the design, you just instantiate those parts of it that are clearest first.

> Then, when that happens, the remainder of the project's
> development is crippled, with the end result either that it took MORE
> time than it would have taken had a proper high level design been
> performed, or that the performance/maintainability/extendability of
> the system is greatly compromised.

It depends on what you mean by "proper high level design" --- don't get me wrong, there is still abstract, forward-leaning thought involved. It's just that most designs I've seen try to get too stuck into details too early.

>> Big up-front design can be a killer when requirements change. For some
>> reason, managers and customers seem to think that software can be
>> changed much more easily than mechanical systems... so they do. The
>> trick is to figure out when they're really expressing a new
>> requirement, or just jumping on the latest Big Thing.
>
> You can't have your cake and eat it too. Either take a hit sticking
> to your requirements, or dumb-down your system.

Actually, you can. That's part of the point of the spiral / agile methods: allow some flux in requirements (it will always be there), but don't make dumb decisions about it.

Ciao,

Peter K.