Site Home Archive Home FAQ Home How to search the Archive How to Navigate the Archive
Compare FPGA features and resources
Threads starting:
Authors:A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
>I was just wondering if also other peoples in this group get private mails
>from Aman Mediratta, asking for technical support with EDK, etc. For me it
>looks like he is mass-mailing his support-questions. Also a way to get
>support... I would call it "support-spam". (I tried to shortly explain him
>that there are newsgroups for this yesterday, just to get another mail
>today).

I got 2 on Jul-29 and another on Aug-11th. The last was 6 megabytes. I didn't have time to be polite like Austin or Phillip.

I'm generally happy to receive email if it's related to something I've posted. The stuff I got from Aman looked like he wanted help with his homework. I have no idea why he picked me. I don't know anything about the topic or "forum for virtex 2 pro" that he harvested my address from. My guess is that he harvested my address from a "forum" that was gatewayed from this newsgroup.

There was no hint in his mail that he had read anything I had posted. (See sig below.)

I get especially pissed off when the subject says "urgent". (It's not urgent for me.) Doubly so when it comes from a free mail site, so I can't contact his boss/sysadmin to give him a clue.

His email said "student of Electronics and Electrical Engg. Dept at Indian Institute Of Technology kharagpur". Anybody got a contact there who can sort this out? I suspect an instructor or TA gave bogus advice.

--
The suespammers.org mail server is located in California. So are all my other mailboxes. Please do not send unsolicited bulk e-mail or unsolicited commercial e-mail to my suespammers.org address or any of my other addresses. These are my opinions, not necessarily my employer's. I hate spam.

Article: 88801
Me too, I've had two or three in the last couple of weeks.

Ben

"Hal Murray" <hmurray@suespammers.org> wrote in message news:YLednWqP286nMY_eRVn-tQ@megapath.net...
> >I was just wondering if also other peoples in this group get private
> >mails
>>from Aman Mediratta, asking for technical support with EDK, etc. For me it
>>looks like he is mass-mailing his support-questions. Also a way to get
>>support... I would call it "support-spam". (I tried to shortly explain him
>>that there are newsgroups for this yesterday, just to get another mail
>>today).
>
> I got 2 on Jul-29 and another on Aug-11th. The last was 6 megabytes.
>
> I didn't have time to be polite like Austin or Phillip.
>
> I'm generally happy to receive email if it's related to
> something I've posted. The stuff I got from Aman looked like
> he wanted help with his homework. I have no idea why he picked me.
> I don't know anything about the topic or "forum for virtex 2 pro"
> that he harvested my address from. My guess it that he
> harvested my address from a "forum" that was gateway from
> this newsgroup.
>
> There was no hint in his mail that he had read anything I had posted.
> (See sig below.)
>
> I get especially pissed off when the subject says "urgent".
> (It's not urgent for me.) Doubley so when it comes from a free
> mail site so I can't contact his boss/sysadmin to give him a clue.
>
> His email said "student of Electronics and Electrical Engg. Dept
> at Indian Institute Of Technology kharagpur". Anybody got a contact
> there who can sort this out? I suspect an instructor or TA
> gave bogus advice.
>
> --
> The suespammers.org mail server is located in California. So are all my
> other mailboxes. Please do not send unsolicited bulk e-mail or
> unsolicited
> commercial e-mail to my suespammers.org address or any of my other
> addresses.
> These are my opinions, not necessarily my employer's. I hate spam.

Article: 88802
I am having trouble programming an XCF08P on my Avnet V4LX-EVL-25 evaluation board. The Platform Flash is an ES (Engineering Sample) version that does not allow programming speeds below 3 MHz. The cable setup for my PC4 is 5 MHz in Xilinx iMPACT, but my scope tells me the output of my cable is just 2.5 MHz. Any ideas to speed things up?

Article: 88803
Hi folks,

my name is Jens and I am a student at the Technical University Berlin. Through my course of study in microelectronics, VHDL design is becoming my favorite hobby. My other interests are signal processing and compression in general. Lately I purchased an FPGA evaluation board second-hand (guess where?) and I am now starting my first "private" implementations. Just to give you a short intro... ;-)

I am interested in implementing compression algorithms using VHDL on an FPGA. I want to build a data transmission system that compresses portions of the incoming data (not the whole data but "frames" of like 800 bytes) on-the-fly. In my search for a fast (i.e. real-time capable at a "desired" data rate of - let's say - 300 MHz?) "universal" compression scheme I came across the following stepping stones:

- is there any free example code for compression algorithms available in VHDL to get an overview and a first impression of implementation complexity?
- what would you think are the most promising algorithms for my purpose (i.e. when statistics and semantics of the input data are unknown)? First of all I thought of delta encoding, sorted RLE, LZ, ...
- as the input data is unknown, the algorithm must be lossless, reducing redundancy (if possible), not irrelevancy. What are the theoretical limits of "universal" compression, not emphasizing one particular statistical property (like similar values in succession)?
- what is meant by the keywords "systolic implementations" and "pipelining" in that particular context? I came across them very often lately.
- what if my code gains different compression ratios for consecutive data portions? Surely a FIFO can decouple input and output rate, but eventually the FIFO will underflow?

Thanks for your help + support in advance; any comments, hints and help are appreciated!

Bye
Jens

P.S.: I'm looking for the standard works "Sayood, Khalid: Introduction to Data Compression, Academic Press, 199x or 200x" and/or "Salomon: Data Compression, Springer-Verlag, New York, 200x". Are there any sources of an electronic copy (ps, pdf, etc.) or transcriptions?

Article: 88804
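[Editorial note: before committing to VHDL, the ratio behaviour Jens asks about is easy to prototype in software. A minimal, purely illustrative sketch (Python; the function name and test frames are made up for this example) of run-length encoding over 800-byte frames, showing that the achieved ratio depends entirely on the input statistics:]

```python
def rle_encode(frame: bytes) -> bytes:
    """Naive run-length encoding: (count, value) pairs, count <= 255."""
    out = bytearray()
    i = 0
    while i < len(frame):
        run = 1
        while i + run < len(frame) and frame[i + run] == frame[i] and run < 255:
            run += 1
        out += bytes((run, frame[i]))
        i += run
    return bytes(out)

smooth = bytes([0] * 400 + [255] * 400)    # highly redundant 800-byte frame
noisy = bytes(range(256)) * 3 + bytes(32)  # 800 bytes with little repetition

print(len(rle_encode(smooth)))  # 8 bytes -> 100:1 ratio
print(len(rle_encode(noisy)))   # expands to roughly 2x the input
```

The second frame is exactly the failure case from the last bullet: a lossless coder can expand as well as shrink, so the output-rate FIFO question is real.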
> - as the input data is unknown the álgorithm must be lossless, reducing
> redundancy (if possible), not irrelevancy. what are the theoretical limits
> of "universal" compression, not emphazizing one particular statistical
> property (like similar by values in succession)?

Hi Jens,

I am not an expert in this field, but I think there is no guaranteed compression rate at all for truly random data, so the answer to your question would be 0%. Otherwise you could compress something, then compress it again and again...

However, I think in real-life data there is often some kind of redundancy.

Regards,
Thomas

Article: 88805
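[Editorial note: Thomas's point is easy to demonstrate empirically. A hedged sketch using zlib (any general-purpose lossless coder behaves the same way; the frame contents are arbitrary examples):]

```python
import os
import zlib

random_frame = os.urandom(800)             # maximum-entropy input
text_frame = b"the quick brown fox " * 40  # redundant input, 800 bytes

# A general-purpose coder cannot shrink high-entropy data;
# container overhead actually makes it slightly bigger...
print(len(zlib.compress(random_frame, 9)) >= len(random_frame))
# ...but real-life redundant data compresses very well.
print(len(zlib.compress(text_frame, 9)) < len(text_frame) // 4)
```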
Well, it's an interesting follow-on... the more cores, the better the floating point... I think that's the aim here... but I'm surprised no one has mentioned the Cell... it has 9 processors on board.

Simon

"Tommy Thorn" <foobar@nowhere.void> wrote in message news:AvxQe.10949$p%3.42685@typhoon.sonic.net...
> mk wrote:
> > the problem with highly threaded cpus is that they are not very good
> > at running wordprocessors, spreadsheets and fpga p&r tools and that's
> > where most cpus are used
>
> Actually, that's rather to generous to us fpga p&r users. If instead
> you do take a minute to look at the kind of benchmarks that are always
> put forward to test the latest iteration of x86, you'd find
> predominately parallel friendly workloads (eg. anything "multimedia" and
> to a moderately degree games). Spreadsheets are inherently not parallel
> hostile and performance of wordprocessors are not really an issue.
>
> I believe John's point was that the x86, RISC, and all the rest of the
> inheriently sequential architectures will never scale and we should
> abandon them. It's an old war cry that we've been hearing on and off
> for decades, but recent Hotchips 17, not least David Kirk's keynote,
> really felt like the wind of change. Power efficiency has become a
> mainstream concept and it will slowly but surely require us to change
> our ways.
>
> Back in the real world, John, people have sizable investments in
> existing (x86) software which aren't going away anytime soon, but the
> change is coming: on-chip multi cores, Cell, xbox-360, Niagria.
>
> Eh, isn't this the "Not enough parallelism in programming" on comp.arch?
> What has this got to do with "Best FPGA for floating point
> performance" :-)
>
> Tommy

Article: 88806
Thanks to all for the suggestions. The ground pins were connected by threading wires through the holes and soldering them to the ground plane before fitting the PLCC socket.

Personally, I now think the monolithic regulator supplying U4 is the main noise source. Thanks, Daniel. I've been poring over Rohde and Egan, trying to get my head round spectral density stuff. I'd appreciate it if someone could check my math.

I've calculated the equivalent noise voltage that would have to be injected at the loop filter input (U4 output) to produce -95 dBc/Hz at a 1 KHz offset on the VCO. The following script was executed in SCILAB:

// Closed-loop gain from PFD output to VCO output
pd_2_vco = h/kpd/kn;

// -95 dBc/Hz
theta_rms = sqrt(10^(-95/10) * 2);

// Equivalent noise injection at PFD output (1 KHz offset)
v_rms = theta_rms / horner(pd_2_vco, 2*%pi*1000)

This gives 20nV/sqrt(Hz). Since the U4 output is a 50% duty cycle square wave (XOR PFD), presumably I would still only need 40nV/sqrt(Hz) on the regulator output?

Is 40nV/sqrt(Hz) at 1 KHz credible for a 78L05 with 100n + 10n hanging off its output?

Script notes:
h = Closed loop gain
kpd = PFD gain
kn = Divider gain
horner() returns magnitude of transfer function at specified freq

Article: 88807
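[Editorial note: the dBc/Hz conversion step above translates directly for anyone without SCILAB. An illustrative Python sketch; the transfer-function step via horner() is omitted since h, kpd and kn are defined elsewhere in Andrew's project:]

```python
import math

dbc_hz = -95.0
# SSB phase noise L(f) in dBc/Hz -> rms phase deviation density,
# using the small-angle relation S_phi(f) = 2 * L(f).
theta_rms = math.sqrt(10 ** (dbc_hz / 10) * 2)  # rad/sqrt(Hz)
print(f"{theta_rms:.3e} rad/sqrt(Hz)")          # ~2.515e-05
```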
Andrew Holme wrote:
> regulator output? Is 40nV/sqrt(Hz) at 1 KHz credible for a 78L05
> with 100n + 10n hanging off its output?

The 78L05 datasheet quotes an output noise voltage of 40uV for 10Hz <= f <= 100 KHz with a minimum recommended load capacitance of 10n. I presume this means 40uV peak-to-peak? I'm not sure how to convert this to RMS nV/sqrt(Hz), but 40e-6/sqrt(100e3) = 1.26e-7, which is only 3 times my figure.

Article: 88808
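[Editorial note: a quick check of that arithmetic. Hedged: it assumes the noise is white across the quoted 10 Hz to 100 KHz band, which, as noted downthread, it probably isn't.]

```python
import math

v_total = 40e-6          # datasheet figure, volts over 10 Hz..100 kHz
bandwidth = 100e3 - 10   # Hz

density = v_total / math.sqrt(bandwidth)  # V/sqrt(Hz) if white
print(f"{density * 1e9:.0f} nV/sqrt(Hz)")  # ~126 nV/sqrt(Hz), matching the post

# If the 40 uV were peak-to-peak Gaussian noise, rms would be roughly
# 40 uV / 6 (a common +/-3-sigma rule of thumb), lowering the density:
print(f"{density / 6 * 1e9:.0f} nV/sqrt(Hz)")  # ~21 nV/sqrt(Hz)
```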
Andrew Holme wrote:
> // Closed-loop gain from PFD output to VCO output
> pd_2_vco = h/kpd/kn;

Sorry, I think that was wrong. That h was the closed-loop gain from ref input to VCO output. A safer way to calculate it, using the SCILAB /. operator, is:

t_pd_vco = (f * kvco/s) /. (kpd * kn);

where f = loop filter transfer function. Now I get v_rms = 793 nV/sqrt(Hz)

Hmm....

Article: 88809
Andrew Holme wrote...
> Andrew Holme wrote:
>> regulator output? Is 40nV/sqrt(Hz) at 1 KHz credible for a 78L05
>> with 100n + 10n hanging off its output?
>
> The 78L05 datasheet quotes an output noise voltage of 40uV for
> 10Hz <= f <= 100 KHz with a minimum recommended load capacitance
> of 10n. I presume this means 40uV peak-to-peak?

Hah, it's likely rms, because that leads to a much smaller number. Also, the NEC and Linfinity datasheets explicitly say rms.

The NSC datasheet note says, "minimum load capacitance of 0.01µF to limit high frequency noise," which means the output noise is not white, which means you can't perform the usual simple sqrt-BW calculations to obtain the noise density.

--
Thanks,
- Win

Article: 88810
Hello guys,

I am integrating an IP which is available as an EDF netlist. I am facing some problems with this IP, so in order to debug it I have to monitor certain internal signals of the IP. I am synthesizing my top module, which contains the IP as a black box plus other supporting logic. I then insert the ChipScope module into this netlist (top module netlist), but in ChipScope I am not able to see the internal IP signals; this may be due to the IP core being a black box. Can I preserve the signals which I monitor through ChipScope? I don't want to change the EDF file, as that is a tedious job. I am using Synplify for synthesis.

Waiting for your reply.

Thanks and regards
Williams

Article: 88811
Hi All,

I have implemented an AHB-PCI IP in a Virtex-II Pro FPGA. While accessing through the ARM JTAG, the AMBA side is okay, but PCI is not working. I basically want to know what the basic steps are to check the PCI interface. Is there any specific procedure or checklist?

Waiting for your reply.

Thanks and regards
Praveen

Article: 88812
Tommy Thorn wrote:
> mk wrote:
> > the problem with highly threaded cpus is that they are not very good
> > at running wordprocessors, spreadsheets and fpga p&r tools and that's
> > where most cpus are used
>
> Actually, that's rather to generous to us fpga p&r users. If instead
> you do take a minute to look at the kind of benchmarks that are always
> put forward to test the latest iteration of x86, you'd find
> predominately parallel friendly workloads (eg. anything "multimedia" and
> to a moderately degree games). Spreadsheets are inherently not parallel
> hostile and performance of wordprocessors are not really an issue.

dittos

> I believe John's point was that the x86, RISC, and all the rest of the
> inheriently sequential architectures will never scale and we should
> abandon them. It's an old war cry that we've been hearing on and off
> for decades, but recent Hotchips 17, not least David Kirk's keynote,
> really felt like the wind of change. Power efficiency has become a
> mainstream concept and it will slowly but surely require us to change
> our ways.

Yes, I was around the 1st time (Inmos) in the early 80s, too early to cry wolf. Yes, SRAM at mini DRAM sizes is a really stupid idea, esp if it's only 2 or 3 times faster (than RLDRAM, that is); interleaving across banks is the way to go forward, but even more is needed. SRAM power is many orders higher than DRAM cell leakage, but both can have the same effective throughputs (see QDR SRAM v RLDRAM) if threading is used. SRAM's real value lies in having lots of teeny weeny blocks (BlockRams); it's the total parallel bandwidth that FPGAs have that shines. Right now the cpu vendors are looking ever more stupid by serializing all their SRAM accesses through one MMU pipeline, even if it runs at 1 or 2GHz to L1 (1ns?) and less for L2 (4ns?).

> Back in the real world, John, people have sizable investments in
> existing (x86) software which aren't going away anytime soon, but the
> change is coming: on-chip multi cores, Cell, xbox-360, Niagria.
>
> Eh, isn't this the "Not enough parallelism in programming" on comp.arch?
> What has this got to do with "Best FPGA for floating point
> performance" :-)
>
> Tommy

Yes, precisely, precisely, dittos... I will have to look up the Kirk keynote.

Back in the real world, of course x86 is not going anywhere different for some time. It doesn't matter one iota that an MTA architecture isn't any good at office SW, although I would actually argue that's not even true. I have written EDA tools and text editors and can see plenty of concurrency possibilities there too, not practical in C though, but perfectly practical in Occam with C. Serializing software has always been about being lazy and not having the right language. If people were familiar with BeOS/Haiku they would know what a multithreaded OS looks like too; it wallops any MS or Linux lookalike in responsiveness, but it never had many apps. It used C++, but that's not the right language for threaded apps at a fine grain, only coarse grain.

Next generation MS Vista is demanding a 3GHz Pentium in order to be useable, yet all it's going to deliver is chrome and shiny effects (both of which are highly parallelizable, hehe). Most of the simple user improvements in W2K I want to see will never come. I can see that shininess in PC GUIs is just another form of multimedia graphics.

The reason the post is on this side is that FPGAs are the only practical way to demonstrate other comp arch ideas without access to a $100M VLSI/ASIC model. The upside of FPGAs is that a larger FPGA can demonstrate lots of smaller cpus, but only up to the 300MHz clock wall and only for simpler RISC designs.

After demonstration, there is no reason why an FPGA processor can't be ASICed at about 3x the speed, the limit being in my case a dual port BlockRam equivalent std cell, which Samsung has at 1GHz. FPGAs can freely use any available memory technology, esp these RLDRAMs with high issue rates that are better than L2 SRAM access times.

Most all DSP and networking people fully understand why hardware is threaded at all levels; it starts with EE101, but in CS101 it's 90% sequential, and the comp arch + user world people don't quite get it yet. The downside of FPGA for the HPC crowd is the FPU limit; it makes it impossible for any FPGA processor to match some of the spectacular FP chips out there, but then I never much used FP in software either. Now if an 18x18 mul had a freeish FPU serial controller around it, it might be very useable for a somewhat serial MTA processor in spades.

johnjakson at usa ... transputer2 at yahoo..

Article: 88813
Jens Mander wrote:
> I am interested in implementing compression algorithms using VHDL on an
> FPGA. I want to build a data transmission system that compresses portions of
> the incoming data (not the whole data but "frames" of like 800 bytes)
> on-the-fly. In my search for a fast (i.e. real-time capable at a "desired"
> data rate of - let's say - 300 MHZ?) "universal" compression scheme I came
> across the following stepping stones:

Wow, 300 MHz! What kind of FPGA are you targeting?

> - is there any free example code for compression algorithms available in
> VHDL to get an overview and a first impression of implementation complexity?

Not sure; try open-cores.net or maybe an application note from Xilinx or Altera.

> - as the input data is unknown the álgorithm must be lossless, reducing
> redundancy (if possible), not irrelevancy. what are the theoretical limits
> of "universal" compression, not emphazizing one particular statistical
> property (like similar by values in succession)?

There is no limit... If the input stream is "true random", with maximum entropy, then you just can't compress it at all. But real-life data does not have maximum entropy, and some kind of redundancy can usually be found. But there is no guarantee.

> - what if my code gains different compression ratios for consecutive data
> portions? surely a FIFO can decouple input and output rate but eventually
> the FIFO will underflow?

There must be some kind of "flow control" implemented on your input, your output, or both, so that your core can for example say: "No more input data for the moment" or "Sorry, can't output more data for now, wait till I say otherwise". It can be just a single line like "in_rdy" to say you're ready for new input data, or "out_valid". Ideally you would have handshake lines on both ports and a FIFO, so that your core can be inserted in any comm line.

(just my 2 cents)
Sylvain

Article: 88814
Simon Peacock wrote:
> well its an interesting follow on... the more cores.. the better the
> floating point.. I think that's the aim here... but I'm surprised no one has
> mentioned the cell... it has 9 processors on board.
>
> Simon

Well, I will see the Cell architects at CPA2005. They will describe the Cell communications architecture to a group of people most interested in parallelism, and some might be esp interested in FP performance too.

My own view is the Cell is one way forward but not the way I would prefer. It's really one magnificent core surrounded by 8 lesser slaves (as much as I know). The gaming world may have a much harder time exploiting those slaves than IBM might wish, but then that would be true for almost any multiprocessor. I still remember driving (game) cars into and out of walls; their physics isn't all that good.

In the end, I think the memory model comes 1st; the FPU and the rest follow on.

John

Article: 88815
mk wrote:
> On 28 Aug 2005 16:49:15 -0700, "JJ" <johnjakson@yahoo.com> wrote:
>
> the problem with highly threaded cpus is that they are not very good
> at running wordprocessors, spreadsheets and fpga p&r tools and that's
> where most cpus are used so the two cpu developers put most of their

As long as almost all favorite software languages don't support concurrency, or even worse have a broken early-1960s lock-based model (Java, C#), then it's a darn good job most apps are not multithreaded; they would all be broken, as Java Swing was shown to be. But with languages like CSP/Occam, which can be added to C++ or Java, it's much more practical to do so, though there hasn't been any compelling reason to do this SO FAR. I hear that C++ may finally be getting some official concurrency added in the coming round. Ada, though, has been on solid ground for 20yrs and coincidentally shares some of Occam's view of cooperating processes with its rendezvous.

For office processing, it really doesn't take more than a few Mips to do the raw grunt work; it's the GUIs and bloatware that are killing most SW performance today, as well as cpu memory systems that don't like anything but extreme locality of reference. Most applications like Word and OpenOffice are really databases with complex data structures that cannot fit into any cache. They are not so different in principle from EDA databases: lots of hash tables, trees, lists, graphs, and all of these have poor locality of reference. Hash tables are the worst; they have completely random behaviour and are my favourite data structure (being associative). On paper a 20-instruction (10ns) hash entry actually takes 1000 cycles every time (if the table is > cache).

I think that folks who use SW but don't write it are living under the false illusion, spread by Intel/AMD marketing, that all their opcodes run at 2 or 3GHz and that with the wonders of superscalar and out-of-order, they run several opcodes per cycle. In practice they hit the memory wall all the time. When locality is present, they are amazing though, and multimedia codecs are esp good at this. Programs that manage large databases, esp such as those for VLSI and also FPGA P&R, have no locality at all. If the database can't fit into L2 cache, the cpu is broken. I know of no suitable CS data structures that are extremely cache friendly that can substitute for those above and described by Knuth. Most CS data structures were imagined (in the 60s or earlier) when memory cycles were similar to processor cycles.

The memory model I propose (at CPA2005) not only has a fairly flat access time across its entire DRAM space, it issues about as fast as your typical L2 SRAM, but has some latency and multiple threads as its price. Would you rather have a 1-thread cpu that lives in a cache prison of 1ns at 16K, plus a 4ns 512K backyard, plus a >100ns nGB wasteland, with always too few TLBs, missing more often than not? A 1% miss means 1ns + N*100ns/100 avg accesses. More than 1% misses means several ns avg L1, L2, DRAM accesses. A 3% miss rate probably means 250Mips of loads and stores, plus a few times more for all the free opcodes in between that don't hit memory. Or would you rather have 4N threads, all of which see something like a 2-4 instruction DRAM access, even if threads are each much slower? In a 90nm ASIC process, the FPGA design I describe could run each thread at peak 200-250Mips. If 1 thread isn't enough to run OpenOffice, then it's time to find another package.

The real limit to computing is memory throughput, period. Single-threaded cpus make it worse by mostly serializing accesses through a single memory management unit. Multithreaded cpus with the same memory model make it even worse because the locality is divided further. But threaded processing with threaded memory can achieve far higher sustained memory bandwidth, and that's really all that matters. After that it's pretty easy to attach Mips to memory issues (but there's more to it than that).

> money into developing slightly threaded architectures with full
> multi-cores instead of smt. if you noticed the new multi-core i86
> implementations don't support ht anymore. another reason this idea is
> not very easy to implement is that regardless what's happening with
> power on 90nm and lower processes, speed still counts and embedding
> dram on a high speed logic process is a big problem. if and when a new
> memory structure comes out which can be embedded in logic process and
> as inexpensive as 1t+1c dram, i am sure isa architects will look at
> highly threaded cpus again but probably not before then. also keep in
> mind that developing cpus is a very expensive endevour and anyone who
> is not developing an x86 compatible one seems to be giving up

Well, I don't actually suggest any DRAM of any sort should be on die with the cpu. What I do suggest is an RLDRAM interface to off-the-shelf parts. Further, I suggest smaller and larger models of the same threaded memory architecture: the smaller one in interleaved SRAM inside the cpu at full cpu speed (i.e. an L1 cache again) and the larger one in much slower DDR DRAM for lower cost. In effect a 3-level memory, all threaded, to give a high level of associativity at different speed/size points. No paging, no TLBs. Page size is only 32 bytes though.

> including intel. i expect sun will drop sparc pretty soon and intel
> will drop itanium too; moving their itanium developers to xeon
> projects doesn't bode too well which is for the better as we won't
> have to deal with a completely proprietary, fully moated with patents
> isa.

I don't think Sun will be dropping Sparc at all, except the older models; Sparc in its Niagara form will serve their needs better than any x86, and Opterons for their other customers. Niagara and RMI MIPS are very similar to what I propose, but they don't have the memory model I suggest. Itanium, I guess 50-50, don't really care.

The world of comp arch is not as sterile as an Intel-only desktop world would have us believe. The embedded space and the much smaller HPC (thank heavens) are entirely more appreciative of engineering. I for one don't believe x86 is as important as most believe; when you have been around 30yrs in the business, everything is old and tired, and Windows is looking very tired. 99% of x86s get used solely for surfing and light office work, and almost all of these are idle 98% of the time. If you take a WinCE toy and add a bit more RISC grunt and video output to it, what you still have is the familiar Windows but not on x86. Eventually people will tire of 100W heaters. The workstation model died because it tried to hitch a free ride on x86 coat tails.

If you want to use a PC to do FPGA P&R that is barely good enough to surf the web or run bloatware, that's something we did to ourselves. I do believe that ASIC & FPGA EDA could benefit enormously from threading; it's been done in ASIC EDA for years in some products.

end of rant

johnjakson at usa ... transputer2 at yahoo ...

Article: 88816
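[Editorial note: John's cache arithmetic is the standard average-memory-access-time model. A hedged sketch; the hit time, penalty and miss rates are his illustrative figures, not measurements:]

```python
def amat(hit_ns: float, miss_rate: float, miss_penalty_ns: float) -> float:
    """Average memory access time: hit time + miss rate * miss penalty."""
    return hit_ns + miss_rate * miss_penalty_ns

# 1 ns L1 hit, 100 ns DRAM penalty, varying locality:
for miss in (0.0, 0.01, 0.03):
    print(f"{miss:4.0%} miss rate -> {amat(1.0, miss, 100.0):.1f} ns average")
# A 3% miss rate already quadruples the effective access time over a
# perfect cache, which is the point about hash tables and other
# poor-locality structures.
```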
> does any one know where to find some example projects for spartan 3
> starter board for digilent.

CMOS, e.g. a Java Processor: http://www.jopdesign.com/

Martin

Article: 88817
In article <deuq9102apv@drn.newsguy.com>, Winfield Hill <Winfield_member@newsguy.com> wrote:
[...]
> The NSC datasheet note says, "minimum load capacitance of 0.01µF
> to limit high frequency noise," which means the output noise is
> not white, which means you can't perform the usual simple sqrt-BW
> calculations to obtain the noise density.

Most closed-loop things have noise that rises to a peak at the gain crossover point. This would apply to LM7805-like devices. In the OP's case this band of noise is almost certain to alias to exactly where it causes the most trouble. At some output frequency, the peak near the crossover will alias near zero.

When I had a situation very like the OP's, I argued like this: the noise is confined to a band with a 3dB width of about 1/4 the gain crossover, and thus we can find the nV/sqrt(Hz) for the noise. (This was true for the noise spectrum I was dealing with.) At the frequency where the noise makes the most trouble, the noise spectrum is shifted near zero by aliasing. In this situation, the noise spectrum now appears flat. The gain of the aliasing process can be measured by injecting a signal near the frequency that is being aliased.

The frequency noise in the PLL output is mostly above the gain crossover point of the PLL. The hardware after the PLL adds other bandwidth-limiting effects and also may further alias this to make the final results.

--
-- kensmith@rahul.net forging knowledge

Article: 88818
Paul Hartke <phartke@Stanford.EDU> wrote:
> You are not using any Service Packs. I'd upgrade to both the latest EDK
> and ISE service packs as a first step.

According to <http://www.minet.uni-jena.de/~adi/xflow2.log> (with service packs), there isn't enough space to place everything.

--
mail: adi@thur.de http://adi.thur.de PGP: v2-key via keyserver
Do not rage at the day before the evening has come

Article: 88819
Yes, the ISE7 in-depth tutorial from xilinx.com has a complete project based on the Spartan-3 board.

Article: 88820
Hi group,

I've built an IO board [1] for an FPGA module to be used in a DSP lab at the university. The idea is to use USB for powering the board, transferring data between the PC and the FPGA, and configuring the FPGA. I'm using the FTDI FT2232C [2] USB chip, a really plug-and-play solution. Data transfer between the PC and a processor on the board (in my case JOP [3]) is 800KB+.

The only thing left to do is the JTAG configuration of the Altera Cyclone and the MAX PLD with the FTDI chip. I know it should be easy. However, if someone has done this already, I don't have to reinvent the wheel.

[1] http://www.soc.tuwien.ac.at/courses/projects/dspio
[2] http://www.ftdichip.com/FTProducts.htm
[3] http://www.jopdesign.com/

Martin

Article: 88821
Hello Andrew,

>> regulator output? Is 40nV/sqrt(Hz) at 1 KHz credible for a 78L05
>> with 100n + 10n hanging off its output?
>
> The 78L05 datasheet quotes an output noise voltage of 40uV for 10Hz <= f <=
> 100 KHz with a minimum recommended load capacitance of 10n. I presume this
> means 40uV peak-to-peak? I'm not sure how to convert this to RMS
> nV/sqrt(Hz), but 40e-6/sqrt(100e3) = 1.26e-7 which is only 3 times my
> figure.

You could always build your own regulator or, as some say, filter the dickens out of it.

Regards, Joerg

http://www.analogconsultants.com

Article: 88822
What are you using to measure the phase noise? I'd hate for some troubles to be found there.

The results you consider to be "poor" are with integer-N values at low comparison frequencies? Have you tried integer-N values with higher comparison frequencies?

I've wanted to put together a nice fractional-N synth for a while but haven't gotten around to it. One thing on my list is to have the phase comparator external (high precision analog component) and have the final stage of the divided frequencies go through isolated single-gate registers clocked off the VCO and oscillator, so the divided clocks have no induced noise from my programmable device. If you insist on having the PFD in the CPLD, you will have edge noise injected by signals elsewhere on the FPGA, notably I/O that are nearby but also general functionality. You may be able to arrange the system so any package noise is minimized (Xilinx has had some recent discussions about cross-coupled noise in promoting their Virtex-4 parts). If the CPLD has different I/O banks, you might improve the situation by removing unrelated I/O from the I/O bank that contains the PFD.

I hope you can demonstrate some nifty results.

"Andrew Holme" <andrew@nospam.com> wrote in message news:desmbt$i07$1$8302bc10@news.demon.co.uk...
> The dividers and the phase detector of my experimental frequency synthesizer
> are implemented in a 15ns Altera MAX7000S CPLD. I've tried different
> multiplication factors (kN) to see how the close-in phase noise varies. At
> a 1 KHz offset, I get:
>
> -82 dBc/Hz for N=198 (VCO=19.8 MHz, comparison freq = 100 KHz)
> -95 dBc/Hz for N=39 (VCO=19.5 MHz, comparison freq = 500 KHz)
>
> Calculating the equivalent phase noise at the PFD:
>
> -82-20*log10(198) = -128 dBc/Hz
> -95-20*log10(39) = -127 dBc/Hz
>
> Given the 5:1 ratio of comparison frequencies, at a guess, I'd expect these
> to differ by 13 dB if the noise was mainly due to a fixed amount of time
> jitter at the PFD.
>
> I'm using a 10 MHz canned crystal oscillator, which I'm dividing down
> (inside the CPLD) to obtain the reference frequencies. I've read these are
> good for at least -130 dBc/Hz (before dividing down) so I'm a bit
> dissappointed with my noise levels. Maybe it got a bit too hot when I
> soldered it to the ground plane! I must try another....
>
> Googling for "altera cpld jitter" doesn't turn-up much, and they don't
> mention jitter in the datasheet. Does anyone know what sort of performance
> can be expected from a CPLD in this regard? I don't know if the CPLD, or my
> circuit lash-up is the root cause.
>
> A full write-up of the project can be found at
> http://www.holmea.demon.co.uk/Frac2/Main.htm It has a fractional-N
> capability, but noise-levels are the same in integer-N mode with the
> external RAM disabled.
>
> Thanks,
> Andrew.

Article: 88823
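[Editorial note: the divider-noise normalisation quoted above is easy to reproduce. A quick sketch with the values from Andrew's post; note that a fixed time jitter at the PFD scales as 20*log10 of the comparison-frequency ratio, i.e. about 14 dB for 5:1:]

```python
import math

def normalise(dbc_hz: float, n: int) -> float:
    """Refer VCO phase noise back to the PFD: subtract 20*log10(N)."""
    return dbc_hz - 20 * math.log10(n)

print(f"{normalise(-82, 198):.1f} dBc/Hz")  # ~ -127.9, rounded to -128 in the post
print(f"{normalise(-95, 39):.1f} dBc/Hz")   # ~ -126.8, rounded to -127 in the post

# Expected spread for fixed PFD time jitter at a 5:1 comparison ratio:
print(f"{20 * math.log10(5):.1f} dB")       # ~ 14.0
```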
Hi. I need to digitize an array of signals (24) with minimum 8-bit resolution, with < 2ms conversion time. The signals are single-ended, 0 to 5V. I am trying to keep costs low, therefore I am trying to avoid multiple A/Ds and/or complex multiplexing situations. I know of the "slope" A/D technique of charging a capacitor, and the sigma-delta technique of using a PWM DAC and a comparator to form an A/D. Would it be possible to get the speed I want using either of those techniques with an FPGA?

Thank you.
H.

Article: 88824
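[Editorial note: the timing budget for the slope approach is worth checking first. A hedged back-of-the-envelope sketch; it assumes one shared ramp/counter with 24 parallel comparator channels each latching the count, which is one common way to parallelise a slope ADC in an FPGA, not necessarily what H. will build:]

```python
# Single-slope ADC: one ramp + counter can serve all channels at once,
# each channel just latches the counter value when its comparator trips.
levels = 2 ** 8   # 8-bit resolution -> 256 counter steps per conversion
t_conv = 2e-3     # required conversion time, seconds

min_clock_hz = levels / t_conv
print(f"{min_clock_hz / 1e3:.0f} kHz")  # 128 kHz counter clock is enough

# Even a conservative 25 MHz FPGA counter clock leaves a large margin:
margin = 25e6 * t_conv / levels
print(f"{margin:.0f}x margin")          # ~195x
```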
John_H wrote:
> What are you using to measure the phase noise? I'd hate for some
> troubles to be found there.

I'm using a Marconi 2382 spectrum analyzer. At current levels, I can see it no problem without any special test setup. If I get another 10 or 20 dB improvement, then I'd need a more sophisticated method.

> The results you consider to be "poor" are with integer-N values to low
> comparison frequencies?
> Have you tried integer-N values with higher comparison frequencies?

I've tried 100 KHz and 500 KHz comparison frequencies. Yesterday, I thought the phase noise might be due to the reference or the PFD, and when I estimated the dBc/Hz at the PFD, I thought the numbers were poor. Now, I'm not so sure the CPLD is the biggest problem. I think I may just need better power supply filtering after the monolithic regulators.

> I've wanted to put together a nice Fractional-N synth for a while but
> haven't gotten around to it. One thing on my list is to have the
> phase compator external (high precision analog component) and have
> the final stage of the divided frequencies go through isolated
> single-gate registers clocked off the VCO and Oscillator so the
> divided clocks have no induced noise from my programmable device. If
> you insist on having the PFD in the CPLD, you will have edge noise
> injected by signals elsewhere on the FPGA, notable I/O that are
> nearby but also general functionality. You may be able to arrange
> the system so any package noise is minimized (Xilinx has had some
> recent discussions about cross-coupled noise in promoting their
> Virtex-4 parts). If the CPLD has different I/O banks, you might
> improve the situation by removing unrelated I/O from the I/O bank
> that contains the PFD.

I see what you're saying about implementing the PFD externally and re-synchronising the divider outputs to the VCO and REF clocks. No doubt that would deliver the ultimate in performance. What sort of PFD did you have in mind?