I'm looking for some information on implementing the D Algorithm for automatic test program generation (ATPG). Code samples, data structures, and anything else relating to the algorithm would be greatly appreciated. Sean CundiffArticle: 6901
Stephane BRETTE wrote:
> Hi,
>
> I'm looking for a VHDL synthesis tool for a PC environment
> under NT 4. Is there a best choice ????
>
> Stephane

Stephane,

Our APS-X84 kits can run under NT, and a VHDL synthesis and router package with a VHDL tutorial can go for as little as $1200.00; it comes with an FPGA test board with an on-board socketed FPGA. We have also just started selling our APS-SynthALL package, which includes synthesis for many vendors, including:

Actel       ACT1, ACT2, ACT3, ACT-32         EDIF
Altera      All devices                      EDIF
AMD/Vantis  MACH devices                     DSL
Lattice     PLSI                             EDIF
Lucent      ORCA                             EDIF
Quicklogic  pASIC                            EDIF
Xilinx      3K, 4K, 4Ke, 4Kex, 5K, 7K, 9K    XNF, EDIF

These kits are priced higher but include all the synthesis tools in one package. They do not include all the router (placement) tools, but we can package a single router -- for instance Xilinx XACT/M1 -- for very good prices. We also now have a true VHDL simulator package which can be purchased separately or included in the kits. Our website is at: http://www.associatedpro.com/aps
--
----------------------------------------------------------------
Richard Schwarz, President
Associated Professional Systems (APS)
EDA and Communications Tools
http://www.associatedpro.com
richard@associatedpro.com
410.569.5897  fx: 410.661.2760

Article: 6902
In article <MPG.e24e9c7dd0c4326989831@nntp.aracnet.com>, bob elkind <eteam.nospam@aracnet.com> wrote:
>Just wanted to clarify/correct/highlight a few points...
>Bill Sloman said...
>> I don't know of any timing chips which would really do the whole job
>> that you are asking for. The Analog Device AD9500 and AD9501 are
>> effectively digitally programmable monostables which might serve as a
>> beginning, and the Motorola MC100E195/6 are digitally programmable
>> delay lines with a range of 2nsec.

>Be warned up front that timing jitter directly translates to
>loss of effective bits on the A/D front end. This is not a
>game for the casual do-it-yourselfer.

This is not necessarily the case for every application. In particular, if you are trying to measure the amplitude of a square pulse, and you know where the pulse is going to be, and the bandwidth of the front end is large enough that the pulse still looks like a pulse, then a little jitter isn't going to cause any harm, especially when oversampling.

>The implementation techniques for 'scope front ends are either
>closely guarded trade secrets, or heavily patented "assets", as well
>they should be. The investment in this technology is huge.

Who says I'm making a scope? I might not even need a front end: a 50 ohm coax connected to a 6pF A/D converter has almost a 400MHz bandwidth.

>Suffice it to say that you would have a very difficult time
>developing a competitive "solution" from off-the-shelf technology.
>The performance has to be so utterly consistent from die to die,
>lot to lot, across temp/voltage. This is anathema to semi vendors
>who depend upon "reasonable" tolerances to maintain saleable yield.

A fun source for older scope design techniques using off-the-shelf technology is The Art and Science of Analog Circuit Design, a collection of articles edited by Jim Williams (there are two volumes that I know of; the first is ISBN 0-7506-9505-6). It includes some articles on T-coil peaking, transmission line amplifiers and CRTs, and front end attenuators and impedance converters. The 1GHz impedance converter described used a FET follower, bootstrapped to achieve temperature stability. The bootstrap circuit is AC coupled, so DC is handled separately with an op-amp.

The transmission line techniques were really cool. For example, to make a fast high power amplifier you might put a bunch of low power amplifiers in parallel -- the problem is that the output capacitances will all end up in parallel too. To fix it, put inductors between the amplifiers. Each amplifier will now only see its own output capacitance, and those capacitances and the inductors make a transmission line (so you have to terminate it and feed the amplifiers with a matched transmission line so the delays are matched). You can also lower the node capacitance of deflection plates by using a whole bunch of them in parallel but separated by inductors and terminated. T-coils can reduce the rise time of conventional amplifiers driving capacitive loads by almost 60%.

None of this will compete with GaAs multi-chip modules, of course, but it is still impressive how far you can go with conventional discretes.
--
/* jhallen@world.std.com (192.74.137.5) */ /* Joseph H. Allen */
int a[1817];main(z,p,q,r){for(p=80;q+p-80;p-=2*a[p])for(z=9;z--;)q=3&(r=time(0)
+r*57)/7,q=q?q-1?q-2?1-p%79?-1:0:p%79-77?1:0:p<1659?79:0:p>158?-79:0,q?!a[p+q*2
]?a[p+=a[p+=q]=q]=q:0:0;for(;q++-1817;)printf(q%79?"%c":"%c\n"," #"[!a[q-1]]);}

Article: 6903
In article <33B36C99.615E1C64@acte.no>, "Rune Bæverrud" writes:
> This is how it works, you might have to dig into your old trigonometry
> school books :)
>
> 1) SUPPOSE you had a cosine waveform.
> 2) Integrate it (an adder) - What do you get? A sine!
> 3) Integrate the sine (another adder) - What do you get? A cosine!
> 4) What happens if you feed 3) into 1)? You get an oscillator producing
> both the sine and cosine at the same time!

While digging through the books, one could find

  sin(A+B) = sin(A) * cos(B) + cos(A) * sin(B)
  cos(A+B) = cos(A) * cos(B) - sin(A) * sin(B)

So if you can tolerate 4 MULs and 2 ADDs per iteration, you can hardcode sin(B) and cos(B) for the desired frequency, and start off at sin(A)=0 and cos(A)=1. Each iteration of the formula (4xMUL, 2xADD) gives the next sin/cos pair for your frequency. They run indefinitely.

There's another technique, a sinusoidal oscillator, which requires only one multiplication and an ADD per iteration.

  Init: y(-1) = 0
        y(-2) = -A * sin W

  with  A = amplitude of the sine
        W = 2*pi * Frequency / Samplerate

Each sine output is calculated as y(n) = 2*cos W * y(n-1) - y(n-2)

I have not made a detailed comparison yet, but from first tests it seems that the first method is more exact when using 16 bit fixed point arithmetic.

Article: 6904
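For anyone who wants to check the numbers before putting either recurrence above into hardware, here is a minimal C sketch of both methods side by side. The 1 kHz/48 kHz frequencies, the sample count, and the use of double precision are illustrative choices only, not anything from the original post.

#include <math.h>
#include <stdio.h>

int main(void)
{
    const double PI = 3.14159265358979323846;
    const double W = 2.0 * PI * 1000.0 / 48000.0;   /* example: 1 kHz at a 48 kHz sample rate */
    const double sinB = sin(W), cosB = cos(W);      /* hardcoded per-step constants */

    /* Method 1: rotation by angle addition, 4 MULs + 2 ADDs per sample */
    double sinA = 0.0, cosA = 1.0;

    /* Method 2: y(n) = 2*cos(W)*y(n-1) - y(n-2), 1 MUL + 1 ADD per sample */
    double y1 = 0.0;                 /* y(n-1) */
    double y2 = -sin(W);             /* y(n-2) = -A*sin(W), amplitude A = 1 */
    const double k = 2.0 * cos(W);
    int n;

    for (n = 0; n < 10; n++) {
        /* method 1: next sin/cos pair */
        double s = sinA * cosB + cosA * sinB;
        double c = cosA * cosB - sinA * sinB;
        sinA = s;
        cosA = c;

        /* method 2: next sine sample */
        double y = k * y1 - y2;
        y2 = y1;
        y1 = y;

        printf("%2d  rot: sin=%+.6f cos=%+.6f   1-mul: %+.6f   ideal: %+.6f\n",
               n + 1, sinA, cosA, y, sin((n + 1) * W));
    }
    return 0;
}

Both outputs track sin((n+1)*W); in fixed point the interesting part is how quickly the two drift apart, which is what Marc's 16-bit comparison is about.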
Wesley Webb <Wesley.Webb@dreo.dnd.ca> wrote in article <33C0EFE3.525D@dreo.dnd.ca>...
> Hello,
>
> Does anyone know of a VHDL to EDIF translator which would work with the
> Actel Designer 3.1? The Actel version is very poorly done and can't
> create a decent netlist. Has anyone taken VHDL and been able to
> program FPGAs using Actel Designer? Any lead would be greatly
> appreciated.
>
> Thanks in advance.
>
> Wesley Webb
> Summer Student
> Defense Research Establishment - Ottawa (DREO), Canada
> Wesley.Webb@dreo.dnd.ca

hi,

i've used vhdl with actels and made chips that worked fine (a32200dx and a1460a's). no problems. i took the data, generated .wir files, and did the gate level simulations in viewlogic. after that, i imported that back into designer, did my place and route and static timing analysis, and then made the chips. i haven't seen any problems yet but would be interested if there are any bugs in the system (so i can stay *far* away).

good luck, rk

Article: 6905
Marc 'Nepomuk' Heuler <marc@aargh.mayn.de> wrote in article > While digging the books, one could find > > sin(A+B) = sin(A) * cos(B) + cos(A) * sin(B) > cos(A+B) = cos(A) * cos(B) - sin(A) * sin(B) > > So if you can tolerate 4 MULs and 2 ADDs per iteration, you can hardcode > sin(B) and cos(B) for the desired frequency, and start off at sin(A)=0 and > sin(B)=1. Each iteration of the formula (4xMUL 2xADD) gives the next > sin/cos pair for your frequency. If you take a Taylor's series expansion, you can eventually work these two equations into something that only requires shifts and adds to get to the next sin/cos pair. This is a commonly used technique that people writing circle drawing routines use. Since the Taylor's series is an estimate, you don't get dead-on estimates, but unless you're trying to design a function generator or something (in which case you'll probably use DDS these days? -- and lots of filtering...), it'll generally be good enough for government work. ---Joel KolstadArticle: 6906
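A minimal sketch of the first-order shift-and-add rotation Joel is describing (sometimes called Minsky's circle algorithm): approximate cos(eps) by 1 and sin(eps) by eps, with eps a power of two, so each step needs only a shift and an add. The amplitude, shift amount, and iteration count below are arbitrary illustrative values; using the freshly updated x in the y update is what keeps the orbit from spiraling outward.

#include <stdio.h>

int main(void)
{
    const int k = 6;          /* step angle ~ 2^-6 radians, so tan(eps) is a power of two */
    long x = 1L << 14;        /* start at (amplitude, 0) in fixed point */
    long y = 0;
    int i;

    for (i = 0; i < 8; i++) {
        x -= y >> k;          /* x(n+1) = x(n) - eps*y(n)                       */
        y += x >> k;          /* y(n+1) = y(n) + eps*x(n+1)  (note: updated x)  */
        printf("cos~%6ld  sin~%6ld\n", x, y);
        /* assumes arithmetic right shift for negative values, as on most platforms */
    }
    return 0;
}

The approximation error shows up as a slightly elliptical, slightly off-frequency orbit rather than as growth or decay, which is why it is "good enough for government work" in circle drawing but needs care in a signal generator.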
Marc 'Nepomuk' Heuler wrote: > While digging the books, one could find > > sin(A+B) = sin(A) * cos(B) + cos(A) * sin(B) > cos(A+B) = cos(A) * cos(B) - sin(A) * sin(B) > > So if you can tolerate 4 MULs and 2 ADDs per iteration, you can hardcode > sin(B) and cos(B) for the desired frequency, and start off at sin(A)=0 and > sin(B)=1. Each iteration of the formula (4xMUL 2xADD) gives the next > sin/cos pair for your frequency. > > They run indefinitely. > > There's another technique, a sinusodial oscillator, which requires only one > multiplication and an ADD per iteration. > > Init: y(-1)= 0 > y(-2)= -A * sin W > > with A= Amplitude of Sinus > W= 2*pi* Frequency / Samplerate > > Each sinus output is calculated as y(n) = 2* cos W * y(n-1) - y(n-2) > > I have not made a detailed comparison yet, but from first tests it seems > that the first method is more exact, when using 16 bit fixed point > arithmetic. Do you have any idea of the stability and accuracy of these algorithms? An oscillator will usually have a loop gain >1, resulting in the oscillator 'taking off' and use all the bandwidth of the adder/integrator registers. It will usually limit itself with nasty clipping (chopping of the tops) when the registers no longer are large enough to hold the accumulated values. The original example I provided here actually does work, but I would love to find out if the loop gain could be set to exactly 1. If it could - this sin/cos generator could wipe out any lookup-table based generator easily, because of the much higher resolution. For instance, you could easily generate a new sin/cos pair at every clock cycle, 50MHz clock is no big deal, and you could have the resolution you want, like a 32 bit result. And you could easily do this in the smallest FPGAs available. Now - that would be something... Regards, Rune BaeverrudArticle: 6907
I have got a SERIOUS problem. I have got a PCB (already built up) with the following devices in a slave serial chain:

4005H  pq240
4020E  hq240
4003H  pq208
4003H  pq208
4005H  pq240
4036EX hq304

I have got BIT files for all the devices. The problem is that (according to Xilinx) there is NO way that I can create a single PROM file for this serial chain. The XACT 6.x software can NOT load a 4036EX bit file, and the M1 (NT 4.0) software can NOT load a 4000H or 4000 bit file. Does anybody know how to merge these files externally? From what I understand, there are some extra preambles in the PROM file for a serial chain.

PS. How could Xilinx NOT have foreseen this problem scenario???

Thanks
Piet
______________________
E-mail: pdtoit@csir.co.za

Article: 6908
Hello net world, are you a beer drinker or maybe a home brewer? Then this web page is for you! My web page is dedicated to home brewing and beer on the net! If this interests you then go to Jake's Home (brew) Page, it is located at http://www.net-link.net/~jtsnake/ I am looking forward to hearing from you soon!!!

Kris Jacobs
Jake's Home (brew) Page
http://www.net-link.net/~jtsnake/
E-Mail To: jtsnake@net-link.net jtsnake@serv01.net-link.net mpinc@SERV01.NET-LINK.NET

Article: 6909
Are the 4000E series FPGAs PCI compatible if the outputs are configured for TTL levels? Or must they be configured for CMOS levels? It looks like they're compatible either way, but I thought I'd check. -- /* jhallen@world.std.com (192.74.137.5) */ /* Joseph H. Allen */ int a[1817];main(z,p,q,r){for(p=80;q+p-80;p-=2*a[p])for(z=9;z--;)q=3&(r=time(0) +r*57)/7,q=q?q-1?q-2?1-p%79?-1:0:p%79-77?1:0:p<1659?79:0:p>158?-79:0,q?!a[p+q*2 ]?a[p+=a[p+=q]=q]=q:0:0;for(;q++-1817;)printf(q%79?"%c":"%c\n"," #"[!a[q-1]]);}Article: 6910
Does anybody know where I can find an on-line SPICE tutorial? I haven't had any success using the standard search engines. Please e-mail me any URLs you may know of; I don't have time to search newsgroups. Thanks!

Sandip Dasgupta
dipster@mail.utexas.edu

Article: 6911
Rune Bĉverrud wrote: > > Marc 'Nepomuk' Heuler wrote: > > > While digging the books, one could find > > > > sin(A+B) = sin(A) * cos(B) + cos(A) * sin(B) > > cos(A+B) = cos(A) * cos(B) - sin(A) * sin(B) > > > > So if you can tolerate 4 MULs and 2 ADDs per iteration, you can hardcode > > sin(B) and cos(B) for the desired frequency, and start off at sin(A)=0 and > > sin(B)=1. Each iteration of the formula (4xMUL 2xADD) gives the next > > sin/cos pair for your frequency. > > > > They run indefinitely. > > > > There's another technique, a sinusodial oscillator, which requires only one > > multiplication and an ADD per iteration. > > > > Init: y(-1)= 0 > > y(-2)= -A * sin W > > > > with A= Amplitude of Sinus > > W= 2*pi* Frequency / Samplerate > > > > Each sinus output is calculated as y(n) = 2* cos W * y(n-1) - y(n-2) > > > > I have not made a detailed comparison yet, but from first tests it seems > > that the first method is more exact, when using 16 bit fixed point > > arithmetic. > > Do you have any idea of the stability and accuracy of these algorithms? > An oscillator will usually have a loop gain >1, resulting in the > oscillator 'taking off' and use all the bandwidth of the > adder/integrator registers. It will usually limit itself with nasty > clipping (chopping of the tops) when the registers no longer are large > enough to hold the accumulated values. > > The original example I provided here actually does work, but I would > love to find out if the loop gain could be set to exactly 1. If it could > - this sin/cos generator could wipe out any lookup-table based generator > easily, because of the much higher resolution. For instance, you could > easily generate a new sin/cos pair at every clock cycle, 50MHz clock is > no big deal, and you could have the resolution you want, like a 32 bit > result. And you could easily do this in the smallest FPGAs available. > Now - that would be something... > > Regards, > Rune Baeverrud No one has heard of the CORDIC algorithm?Article: 6912
> Not true at all - VeriBest offers a complete design environment
> including schematic and graphical (state tables/flowcharts etc.)
> capture. Code generated is Synopsys compatible and can also include
> appropriate compiler directives if desired. This is a great advantage for those
> FPGA designers who have yet to make a full transition to HDL, and improves
> the efficiency and documentation/de-bugging of experienced HDL
> designers. A tightly integrated environment that includes project
> management, design capture, hdl simulation (behavioural and gate level),
> graphical testbench generation and tight integration with vendor place
> and route tools has many advantages to a designer, when compared to
> using Synopsys's FPGA-Express as a stand alone product. Furthermore
> (particularly in the case of fixed pin designs), pcb layout, board
> level simulation, and FPGA design can all occur concurrently, allowing a
> very efficient design cycle to catch the ever decreasing market window.

[M.Vorbach] Nice statement, and I guess every professional user will take Synopsys FPGA-Express. It is a great tool, but not in combination with any VeriBest software. My advice is: USE ViewLogic VCS! The tool works, the user interface is great and looks great, support is very good, and professional users can run VCS from the command line. I tried that with the VeriBest tool -> no way and no support. The VeriBest user interface looks cheap and is not NT 4.0-like. We have in-house programmers who write C/C++ and Perl - and believe me, these guys are good - but they wondered many times about the quality of VeriBest (in the negative sense). I'm sure it is the wrong way to collect different tools from a lot of companies and put them together without enough knowledge; this leads to the collection of bugs I wrote about. A lot of VeriBest users are very angry about the support and quality of these tools, and I know that VeriBest lost a lot of (sometimes big) companies in Germany, because lots of hard errors were reported but VeriBest never solved these problems. Since the last release of the schematics and PCB tools a lot of problems seem to have been solved, that is true (we still work with these tools, so I think I know what I'm saying); but I cannot believe in the VeriBest FPGA tools any longer. Furthermore, ViewLogic's lead is so big that I do not believe VeriBest will get a chance soon. The next question is why anybody should use the VeriBest simulator. VCS is meanwhile a standard for ASIC designs! And it runs fast and stable. Another thing is that VeriBest claims (and promised personally to me) a lot of things which were definitively not true! So I cannot trust them any longer.

> As our company is a reseller of VeriBest software in Australia, I have
> had the opportunity to evaluate their FPGA Express software, having
> completed several designs from concept -> gate level simulation. I have
> been extremely pleased with the solid integration, ease of use of the
> software, and of course the excellent architectural specific synthesis.

[M.Vorbach] Fine, Philip is talking about Synopsys FPGA-Express. This is really not an achievement of VeriBest.

> Yes, I am involved with selling VeriBest software, but I am not an
> employee of VeriBest. I am a professional, who would not put their name
> behind a product unless I believed it was truly good. Most of your
> comments (Robert/Martin) are outdated and in no way represent VeriBest's
> current products.

[M.Vorbach] I do not think that I am outdated; my experience with the VeriBest tools and (this is important) the bad support is just 8 weeks old, and - as I said - we still use some of the tools (the not-as-buggy ones). I do not earn money by talking about my experience and giving some advice. And I'm sure that we (Robert and me) represent the mood of most VeriBest customers (too many changed to other EDA tools and lost a lot of money by doing so). Also, I'm sure that Philip's opinion must be pro-VeriBest, because if VeriBest loses customers, or customers in spe, Philip will earn no or less money. I do not earn more if customers do so, but they will protect their money and not waste expensive development time!

[M.Vorbach] Best regards
Martin
m.a.vorbach@ieee.org
Fon +49 721 97243 35
Fax +49 721 97243 28

Article: 6913
I would like to apologise to this newsgroup and everyone who reads this newsgroup!!! I promise never to post or send spam to this or any other newsgroup that does not pertain to my posting!!! Please accept my humble apology and again I will never post spam here again!!! Thank You!!! Andrew Schero yank714@kalnet.netArticle: 6914
> -----Original Message-----
> From: P Nibbs [SMTP:pnibbs@icd.com.au]
> Posted At: Monday, July 07, 1997 2:22 AM
> Posted To: fpga
> Conversation: Verilog Simulation and Synthesis for FPGA Devices
> Subject: Re: Verilog Simulation and Synthesis for FPGA Devices

[Robert M. Münch] ... a lot of marketing stuff deleted ...

> As our company is a reseller of VeriBest software in Australia, I have
> had the opportunity to evaluate their FPGA Express software, having

[Robert M. Münch] Hey, it's not THEIR FPGA software, it's just a licensed 3rd-party tool, so no VeriBest developer is involved in it (that's why it does what it's supposed to do).

> completed several designs from concept -> gate level simulation. I have
> been extremely pleased with the solid integration, ease of use of the
> software, and of course the excellent architectural specific synthesis.

[Robert M. Münch] Hm... perhaps you should have a look at some other tools (I hope you did before becoming a VeriBest reseller) to see solid integration and ease of use. Did you do a 100K gate design with it yet?

> Yes, I am involved with selling VeriBest software, but I am not an
> employee of VeriBest. I am a professional, who would not put their name
> behind a product unless I believed it was truly good. Most of your
> comments (Robert/Martin) are outdated and in no way represent VeriBest's
> current products.

[Robert M. Münch] Maybe, but they have lost, at least for our project.

> In-Circuit Design Pty Ltd     Ph: +61 3 9205 9595
> VeriBest Solutions Centre     Fax: +61 3 9205 9410

[Robert M. Münch] Solutions Centre, hey man, you must be really good at teaching workarounds... we didn't find a lot to get the stuff to work ;-))

Nevertheless, people should take what they like and believe is best for them. As we don't earn our money with this kind of stuff, these are only our experiences having tried VeriBest on a big project....

Robert M. Muench
SCRAP EDV-Anlagen GmbH, Karlsruhe, Germany
==> Private mail: r.m.muench@ieee.org <==
==> ask for PGP public-key <==

Article: 6915
Hi,

I'm looking for a VHDL synthesis tool for a PC environment under NT 4. Is there a best choice ????

Stephane

Article: 6916
R> > Marc 'Nepomuk' Heuler wrote:
[stuff deleted]
> > > There's another technique, a sinusodial oscillator, which requires only one
> > > multiplication and an ADD per iteration.
> > >
> > > Init: y(-1)= 0
> > >       y(-2)= -A * sin W
> > >
> > > with A= Amplitude of Sinus
> > >      W= 2*pi* Frequency / Samplerate
> > >
> > > Each sinus output is calculated as y(n) = 2* cos W * y(n-1) - y(n-2)
> > >
> > > I have not made a detailed comparison yet, but from first tests it seems
> > > that the first method is more exact, when using 16 bit fixed point
> > > arithmetic.
[stuff deleted]
> > The original example I provided here actually does work, but I would
> > love to find out if the loop gain could be set to exactly 1. If it could
> > - this sin/cos generator could wipe out any lookup-table based generator
> > easily, because of the much higher resolution. For instance, you could
> > easily generate a new sin/cos pair at every clock cycle, 50MHz clock is
> > no big deal, and you could have the resolution you want, like a 32 bit
> > result. And you could easily do this in the smallest FPGAs available.
> > Now - that would be something...

I'd love to see your 32 bit multiplier design that fits in the "smallest FPGAs available" and for which a 50MHz data clock is "no big deal". Now THAT would be something! Of course, if you used a bit-serial multiplier, it would be small and the bit clock could easily be run at 50MHz, but 32 bit inputs is going to give you a data rate of something less than 1 MHz.

Alternatively, if you use the CORDIC algorithm you don't need any multipliers, only adders and shifters. CORDIC is an algorithm that incrementally performs rotations in either a circular, hyperbolic or linear space using only shifts and adds. The secret is that the rotation angle at each iteration is chosen so that its tangent is a negative power of two. At each iteration a decision is made on which direction to perform the rotation (rather than whether or not to rotate), so the cosine terms drop out as constant gain factors.

I've done numerous CORDIC designs in FPGAs, including a bit-serial one that only occupies 21 CLBs of an XC4000 series part and produces a 16 bit result in less than 2 us. Another design is a 14 bit wide unrolled pipeline CORDIC processor used for a quadrature NCO. That design provides simultaneous A*sin(w) and A*cos(w) with a phase resolution of better than six bits and accuracy to 12 bits in less than half of an XC4013. It works at better than 50M complex data pairs/sec.

A numerically controlled oscillator can be constructed using a simple accumulator to integrate the desired delta-phase, which is then passed to a CORDIC sin/cos processor's phase angle input. The phase accumulator maximum value is defined as 2 pi, so overflow becomes a non-issue.

I've got more info on CORDIC processors in a paper about high performance bit serial design. That paper can be found on my website. I'm also in the process of trying to publish a paper surveying CORDIC designs for FPGAs.

-Ray Andraka, P.E.
President, the Andraka Consulting Group, Inc.
401/884-7930   Fax 401/884-7950
email randraka@ids.net
http://www.ids.net/~randraka

Article: 6917
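This is not Ray's bit-serial or pipelined design, just a generic rotation-mode CORDIC in C to show the shift-and-add structure described above: the iteration count (16), the 2^16 fixed-point scaling, and the 0.7 rad test angle are arbitrary illustrative choices.

#include <stdio.h>
#include <math.h>

#define ITER  16
#define SCALE 65536.0     /* fixed-point scaling: values are multiplied by 2^16 */

int main(void)
{
    long atan_tab[ITER];
    long x, y, z;
    double K = 1.0;
    double angle = 0.7;   /* radians; rotation mode converges for |angle| <= ~1.74 */
    int i;

    /* Build the arctangent table and the aggregate CORDIC gain compensation once. */
    for (i = 0; i < ITER; i++) {
        atan_tab[i] = (long)(atan(pow(2.0, -i)) * SCALE + 0.5);
        K *= 1.0 / sqrt(1.0 + pow(2.0, -2 * i));
    }

    /* Rotate the vector (K, 0) by 'angle'; x converges to cos(angle), y to sin(angle). */
    x = (long)(K * SCALE);
    y = 0;
    z = (long)(angle * SCALE);

    for (i = 0; i < ITER; i++) {
        long xs = x >> i, ys = y >> i;            /* shift = multiply by 2^-i */
        if (z >= 0) { x -= ys; y += xs; z -= atan_tab[i]; }
        else        { x += ys; y -= xs; z += atan_tab[i]; }
    }

    printf("cordic: cos=%f sin=%f\n", x / SCALE, y / SCALE);
    printf("libm:   cos=%f sin=%f\n", cos(angle), sin(angle));
    return 0;
}

Each iteration is one add/subtract per coordinate plus a table lookup, which is why the hardware versions Ray describes need no multipliers at all.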
Ray Andraka wrote:
> I'd love to see your 32 bit multiplier design that fits in the "smallest
> FPGA's available" and for which a 50MHz data clock is "no big deal".

There are absolutely no multipliers there - only two adders - that's all! Divide operations are performed by looking at the most significant bits. This approach limits the number of possible frequencies you can get with a given system clock. This is not CORDIC - it's even simpler - and it's fast - you get a new sin/cos pair at every system clock. Generating a pair of 8-bit sin/cos values could consume as little as 20 logic cells, and have an output word rate of 100MHz. I'm not saying that this approach is better or worse than anything else, but I bet there would be some uses for it. I would love to see this principle applied in a low distortion oscillator.

> passed to a CORDIC sin/cos processor's phase angle input. The phase
> accumulator maximum value is defined as 2 pi, so overflow becomes a
> non-issue.

This is all very interesting. There is not too much material available on the subject on the Internet. I would love to see some sample projects and implementation details, but I have not found anything so far.

> I've got more info on CORDIC processors in a paper about high
> performance bit serial design. That paper can be found on my website.

I've already collected that paper from your web site, and I look very much forward to reading it this upcoming weekend!

> I'm also in the process of trying to publish a paper surveying CORDIC
> designs for FPGAs.

I'm waiting! When is it due? Please hurry and finish it! :)

Regards,
Rune Baeverrud

Article: 6918
Rune Bæverrud wrote:
<omitted>
> Do you have any idea of the stability and accuracy of these algorithms?
> An oscillator will usually have a loop gain >1, resulting in the
> oscillator 'taking off' and use all the bandwidth of the
> adder/integrator registers. It will usually limit itself with nasty
> clipping (chopping of the tops) when the registers no longer are large
> enough to hold the accumulated values.

Preparing two sets of parameters, one with a loop gain slightly greater than 1 and the other with a loop gain slightly less than 1, and switching between them while watching the absolute power of the signal ( sin*sin + cos*cos, or y(n)*y(n) + k*(y(n)-y(n-1))*(y(n)-y(n-1)) ) will give acceptable quality.

> The original example I provided here actually does work, but I would
> love to find out if the loop gain could be set to exactly 1. If it could
> - this sin/cos generator could wipe out any lookup-table based generator
> easily, because of the much higher resolution. For instance, you could
> easily generate a new sin/cos pair at every clock cycle, 50MHz clock is
> no big deal, and you could have the resolution you want, like a 32 bit
> result. And you could easily do this in the smallest FPGAs available.
> Now - that would be something...
>
> Regards,
> Rune Baeverrud

Article: 6919
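A rough software model of the switching idea above, using the sin/cos rotation recurrence from earlier in the thread rather than a particular hardware design: two coefficient sets whose loop gain sits just above and just below unity, with the set chosen each sample from the measured power sin^2 + cos^2. The gain offsets, step angle, and sample count are made-up illustrative values.

#include <stdio.h>
#include <math.h>

int main(void)
{
    const double W = 2.0 * 3.14159265358979 * 0.01;   /* step angle per sample */
    const double cw = cos(W), sw = sin(W);
    const double g_hi = 1.0 + 1e-4;                   /* loop gain slightly > 1 */
    const double g_lo = 1.0 - 1e-4;                   /* loop gain slightly < 1 */
    const double target = 1.0;                        /* desired sin^2 + cos^2  */
    double s = 0.0, c = 1.0;
    int n;

    for (n = 0; n < 100000; n++) {
        double power = s * s + c * c;
        double g = (power < target) ? g_hi : g_lo;    /* pick the coefficient set */
        double s2 = g * (s * cw + c * sw);
        double c2 = g * (c * cw - s * sw);
        s = s2;
        c = c2;
    }
    printf("after %d samples: power = %.9f\n", n, s * s + c * c);
    return 0;
}

The amplitude then dithers within a band set by the two gain offsets instead of growing until the registers clip, which is the behaviour being described.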
I guess I don't understand where your problem is... any tool which creates solid Actel-based EDIF can be used as a front end to Actel's Designer 3.1 place and route tool. I have used a number of tools to compile my VHDL into EDIF; all lack something..... ACTmap creates EDIF under the Actel Designer 3.1 banner. I have used it successfully. I have also used Simplicity to do the same - great tight output design, quick compilation, but it comes at a price ($$$$$). I have also used SYNARIO, ORCAD, etc., etc.

Wesley Webb <Wesley.Webb@dreo.dnd.ca> wrote:
>Hello,
>
>Does anyone know of a VHDL to EDIF translator which would work with the
>Actel Designer 3.1? The Actel version is very poorly done and can't
>create a decent netlist. Has anyone taken VHDL and been able to
>program FPGAs using Actel Designer? Any lead would be greatly
>appreciated.
>
>Thanks in advance.
>
>Wesley Webb
>Summer Student
>Defense Research Establishment - Ottawa (DREO), Canada
>Wesley.Webb@dreo.dnd.ca

Article: 6920
> Probably the most stable method (avoids accumulated offsets from
> rounding errors) is to accumulate phase in a device that adds a
> user selected delta phase to a total at fixed intervals (clock) and
> applies the count to a sine/cos lookup prom. You can achieve arbitrary
> precision this way. I haven't seen the initial post, so I am not sure of
> desired frequencies.
> --
>
> It is better to keep one's mouth closed and be thought a fool,
> than to open it and remove all doubt. Abraham Lincoln
> I really have to start listening to Abe. ;>)
>
> Hank McCall

Just a thought, but this might be overkill. Since a square wave is made up of a fundamental sine wave plus an infinite series of odd harmonics, you could filter a square wave to pass only the fundamental frequency, producing the sine wave. (I think). A purely digital approach would need a digital filter (which in turn needs a simple analog anti-aliasing filter). Therefore this may be of no use to you. I thought the concept was neat though.

Steve

Article: 6921
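A minimal C model of the phase-accumulator/lookup-PROM approach Hank describes above: the accumulator wraps naturally at 2*pi, and its top bits address a sine table. The 32-bit accumulator width, 1024-entry table, and 1 MHz output from a 50 MHz clock are arbitrary illustrative choices (and the sketch assumes a 32-bit unsigned int).

#include <stdio.h>
#include <math.h>

#define TABLE_BITS 10
#define TABLE_SIZE (1u << TABLE_BITS)

static short sine_table[TABLE_SIZE];   /* plays the role of the lookup PROM */

int main(void)
{
    unsigned int phase = 0;            /* assumed 32 bits; wraps at 2^32 == 2*pi */
    /* delta_phase = (Fout / Fclk) * 2^32, e.g. 1 MHz output from a 50 MHz clock */
    unsigned int delta_phase = (unsigned int)((1.0e6 / 50.0e6) * 4294967296.0);
    unsigned int i;

    for (i = 0; i < TABLE_SIZE; i++)
        sine_table[i] = (short)(32767.0 * sin(2.0 * 3.14159265358979 * i / TABLE_SIZE));

    for (i = 0; i < 16; i++) {
        short s = sine_table[phase >> (32 - TABLE_BITS)];   /* top bits address the PROM */
        printf("sample %2u: %6d\n", i, s);
        phase += delta_phase;          /* unsigned overflow is the 2*pi wrap */
    }
    return 0;
}

Frequency resolution is Fclk / 2^32, so arbitrary precision just means widening the accumulator; the table (or a CORDIC stage, as in Ray's NCO) only has to be as wide as the required amplitude accuracy.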
Hi All,

I've been asked to provide some references for the ideas that have been presented on this subject. The sin/cos-by-integration ideas are collected from a paper by Ken Chapman, Xilinx Ltd, UK. The paper has the title "Performance and Resolution of Distributed Arithmetic Techniques Unlocks Potential of Digital Integration", and was presented at DSP'97 Scandinavia, where I attended the technical conferences. DSP'97 Scandinavia was arranged by Badger Events, Ltd., from whom you would probably be able to obtain a copy of the seminar handout. http://www.dsp-europe.co.uk/scandinavia/index.htm Tel: +44 181 547 3947.

I apologize for not providing further references for this idea in my previous postings here. It was not my intention to step on anyone's toes, and my personal apology goes to Ken Chapman for this. The credit for this genius idea should go to Ken Chapman, who claims to have a 100% working IP core using the described technique.

Regards,
Rune Baeverrud
FreeCore Library
http://193.215.128.3/freecore

Article: 6922
A William Sloman wrote:
>
> Joseph H Allen wrote:
>
> > In article <MPG.e24e9c7dd0c4326989831@nntp.aracnet.com>,
> > bob elkind <eteam.nospam@aracnet.com> wrote:
> >
> <snip>
> > The transmission line techniques were really cool. For example, to make a
> > fast high power amplifier you might put a bunch of low power amplifiers in
> > parallel- the problem is that the output capacitances will all end up in
> > parallel too. To fix it, put inductors between the amplifiers. Each
> > amplifier will now only see its own output capacitance, and those
> > capacitances and the inductors make a transmission line (so you have to
> > terminate it and feed the amplifiers with a matched transmission line so
> > the delays are matched).
>
> This begins to sound like Percival's distributed amplifier, where the
> inputs to the amplifiers, and the outputs from the amplifiers, go to taps
> on an input and an output transmission line respectively.
>
> Cherry's textbook liked it a lot - how to get unlimited gain from
> amplifiers of finite gain-bandwidth product. I met Percival when I worked
> for EMI from 1976 to 1979, but never got to talk to him about that
> particular invention.
>
> > You can also lower the node capacitance of deflection plates by using a
> > whole bunch of them in parallel but separated by inductors and terminated.
> >
> > T-coils can reduce the rise time of conventional amplifiers driving
> > capacitive loads by almost 60%.
>
> > None of this will compete with GaAs multi-chip modules, of course, but it
> > is still impressive how far you can go with conventional discretes.
>
> My project at Cambridge Instruments used a bunch of 5GHz npn and pnp
> transistors in spots where the Gigabit Logic GaAs wouldn't hack it.
>
> Bill Sloman, Nijmegen

The most amazing feature of distributed amplifiers was that you could use devices that actually had a gain of less than unity at the desired frequency. When we had low freq. cutoffs in vacuum tubes, we still had hi-freq. amplifiers (1940's and 1950's). A subtle but interesting (to some of us) point is that to get the greatest gain with the fewest gain elements (tubes or transistors), you designed each block for a gain of e (2.71828...) and then cascaded those blocks.
--
It is better to keep one's mouth closed and be thought a fool,
than to open it and remove all doubt. Abraham Lincoln
I really have to start listening to Abe. ;>)

Hank McCall

Article: 6923
Rune Bæverrud wrote:
>
> Hi All,
>
> This is probably the simplest way possible you could make a Sine/Cosine
> generator, and it is extremely appealing to a digital logic
> implementation, because it only requires two adders!
>
> This is how it works, you might have to dig into your old trigonometry
> school books :)
>
> 1) SUPPOSE you had a cosine waveform.
> 2) Integrate it (an adder) - What do you get? A sine!
> 3) Integrate the sine (another adder) - What do you get? A cosine!
> 4) What happens if you feed 3) into 1)? You get an oscillator producing
> both the sine and cosine at the same time!
>
> This is also one way you could implement an oscillator in the analog
> world - by cascading two integrators and feeding the output from the
> second integrator into the input of the first one.
>
> In the analog world, the oscillator would start because of some noise or
> drifting in the op-amps used. The loop gain would have to be larger than
> 1 for the oscillator to reach full amplitude, with some
> clipping/distortion as a result.
>
> In the digital world, there would be no signal noise, so the oscillator
> would have to be started by preloading the integrators with a fixed
> value.
>
> NOTE: If the loop gain could be made to be exactly 1 - then there would
> be no clipping/signal distortion!
>
> Of course there are some coefficients to consider when integrating, but
> the divide operations could be performed by looking only at the most
> significant bits - a divide by 2^N requires ABSOLUTELY NO LOGIC!
>
> Have a look at the pseudo code below for the implementation of this
> algorithm, assuming both constants A and B are integers of value 2^N:
>
>   var SinReg, CosReg, tmp: Longint;
>       SinOut, CosOut: Output;
>   Constant A, B: Integer;
>
>   while (1) do begin
>     tmp := SinReg;
>     SinReg := SinReg + (CosReg div A);
>     CosReg := CosReg + (tmp div -(A));
>
>     SinOut := SinReg div B;
>     CosOut := CosReg div B;
>   end;
>
> This actually works, but the loop gain is >1 so it will start clipping
> after oscillating for a while.
>
> Now - I don't have the mathematical skills to produce a theory on this
> principle, or choosing the right parameters for a really low distortion
> oscillator. I was hoping that some of you out there would grab this
> thing and improve the algorithm! This could be the perfect thing for
> implementation in an FPGA/CPLD!
>
> I have written a small program (an .EXE file) which you could download
> to check out the algorithm. Source is included. This can be downloaded
> from http://193.215.128.3/freecore
>
> Regards,
> Rune Baeverrud

Probably the most stable method (avoids accumulated offsets from rounding errors) is to accumulate phase in a device that adds a user selected delta phase to a total at fixed intervals (clock) and applies the count to a sine/cos lookup prom. You can achieve arbitrary precision this way. I haven't seen the initial post, so I am not sure of the desired frequencies.
--
It is better to keep one's mouth closed and be thought a fool,
than to open it and remove all doubt. Abraham Lincoln
I really have to start listening to Abe. ;>)

Hank McCall

Article: 6924
hi, are there any predefined Verilog modules for asynchronous FIFOs in Xilinx 4xxx??

thanks ...juerg