Hi, sorry, I hit the wrong button.

regards,
Benjamin

Article: 83751
hello all,

Could someone give me some advice on the best FPGA(s) that meets the following requirements:

Application: PCI board to be connected to an automotive bus (CAN, or LIN, or FlexRay, etc.); an IP core will be integrated for each of those buses (only 1 bus type per PCI board, several channels).

The application must include:
- a 5V PCI target core (32 bit, 33 MHz); how much does this cost?
- the PCI core comes with a Win98 (option: Linux) driver and an API to map the dual-port RAM (i.e. the application gets a pointer to the dual-port RAM and uses it as memory, without any function to be called when accessing PCI hardware); how much does this cost?
- a dual-port RAM core (maybe with an anti-collision feature or boundary areas), with a size of at least 512 kbit, or a large FIFO; how much does this cost? What is the RAM size range that it is possible to build in an FPGA?
- the automotive bus IP core (I will code it)
- a non-BGA package, because of higher production cost and hardware debug difficulties, and a low pin count (<150)
- low price (very, very small quantities), below $40 in single quantity (for the one or 2 FPGAs that will do the job), and low power consumption (OK, everyone asks for that!)
- Xilinx or Altera, because the FPGA community (like this newsgroup) knows their products well (i.e. good support)
- a free 16 or 32 bit processor core (or an already integrated core like ARM or PowerPC, or maybe LEON) with well-documented free development tools (gcc...) or one used by many open source projects; the firmware code will be stored in an external parallel Flash memory.

I know that I should first code and then choose the best FPGA that fits the application, but I must make a preselection in order to get a better idea of the board architecture, power consumption and price...

Should I assume that everything fits in a 300 kgate FPGA?

Could someone tell me an approximate number of kgates required for each block mentioned above?

For more flexibility in the future (e.g. an available IC for the processor or automotive bus), I've thought about using 2 FPGAs on the board:
- one for the PCI interface, a very large dual-port RAM and a small FIFO (both with the same connection to the external local bus); this will be the same for every new board
- one for the automotive bus controller and the processor (and its RAM), which will have an external connection to the RAM inside the first FPGA; this board can be changed to use a fast microcontroller instead of a processor core and/or to use an already existing CAN/LIN controller.

Which FPGA would you recommend for each?

Other question:
Does someone know if there is an equivalent product table to http://www.xilinx.com/products/tables/fpga.htm but for Altera, Actel or Lattice?

Article: 83752
Hi folks,

this is starting to drive me nuts: I use a Xilinx Parallel Cable IV to program some big FPGAs (big as in V2P70 and the like). Now iMPACT and ChipScope always open this cable in "Compatibility Mode", meaning it is used as a Parallel Cable III and takes FOREVER to download a bitstream. No matter what I do, no matter what settings for the parallel port I use, it keeps getting detected as a Parallel Cable III. This is a problem we've been having constantly for years now... on some machines it works fine, on some it doesn't, and on some others it sometimes works but sometimes doesn't. The only certain thing is that it never works on the machine I am currently working on when I have a lot of testing to do and big bitfiles to download.

Now, I've googled my eyes out for this, read all the answer records, tried everything mentioned there and everything I could think of:

- I tried all possible BIOS settings for the parallel port: ECP, EPP, bidirectional, different DMA channels, different IRQs; you name it, I've tried it.

- I tried uninstalling the cable drivers from Xilinx and reinstalling new ones. I tried uninstalling the entire ISE, making sure there was nothing left in the Windows drivers directory, and reinstalling ISE. I've tried every ISE version from 4.2 to 7.1 without success. I've tried doing a fresh install on a freshly installed Windows - nothing.

Now we've bought one of the new Platform Cables for USB, which works much better, but those are expensive and we have probably half a dozen Parallel Cable IVs in use.

Any pointers, any hints?

cu,
Sean

Article: 83753
I've recently assisted in implementing a PCI slave in a Lattice EC device. That was 3.3V PCI. The reference design on Lattice's website takes ~1k LUTs.

As far as the embedded RAM goes, you're not going to get 500 kbit in a non-BGA package from anyone. You need to either use two devices or use a BGA.

A CAN controller should be less than 2k LUTs.

I don't think Lattice has a free micro, but you can get 8051s and such from OpenCores. Here is the product table for the EC family:
http://www.latticesemi.com/products/fpga/ecp/index.cfm

Mouarf wrote:
> hello all,
>
> Could someone give me some advice on the best FPGA(s) that meets the
> following requirements:
> [rest of original post snipped]

Article: 83754
"Subroto Datta" <sdatta@altera.com> writes: Hi Subroto, > Chapter 9 of the MAX II handbook explains how to use the ALTUFM > megafunction to add UFM data with mif or hex file. The user has to > recompile if they want to change the hex file data, as this is how you > convert it to POF. > > This can be found at > http://www.altera.com/literature/hb/max2/max2_mii51010.pdf > (page 9-34 thru 9-38.) I've read the databook some time ago. I tried to simulate the UFM using $QUARTUS/eda/sim_lib/maxii_atoms.v. But this model does not seem to use the MIF file at all. The verilog model is using defparams to initialize the UFM contents during simulation and Quartus is using MIF to initialize the UFM during implementation. I was somewhat confused by this fact. Anyway, what I would like to do is to program the UFM during production. The MAX II is supposed to replace some older PLD's, a couple I2C PROM's and some other logic. The I2C PROM's are programmed during production and I was hoping I could to the same for the MAX II UFM. Is there a way I can generate a SVF file for the UFM only? If not I'll have to install and run Quartus for each card at the prodcution site, pregenerate POF files for all serial numbers and product variations, or make the UFM control signals available externally so I can write some software to program the UFM at the production site. Petter -- A: Because it messes up the order in which people normally read text. Q: Why is top-posting such a bad thing? A: Top-posting. Q: What is the most annoying thing on usenet and in e-mail?Article: 83755
Teo wrote:
> I've recently assisted in implementing a PCI slave in a Lattice EC
> device. That was 3.3V PCI. The reference design on Lattice's website
> takes ~1k LUTs.
> As far as the embedded RAM goes, you're not going to get 500 kbit in a
> non-BGA package from anyone. You need to either use two devices or use
> a BGA.
> A CAN controller should be less than 2k LUTs.
> I don't think Lattice has a free micro, but you can get 8051s and such
> from OpenCores. Here is the product table for the EC family:
> http://www.latticesemi.com/products/fpga/ecp/index.cfm

OK, thanks for your feedback.

Do you know how the PCI driver was implemented? Are you talking about this PCI controller: http://www.latticesemi.com/products/devtools/ip/pcitarget32/index.cfm ?

My problem with OpenCores uC cores is that I have not found any that come with powerful development tools (compiler and debugger)...

Article: 83756
Here is the free reference design for a PCI 32-bit slave:
http://www.latticesemi.com/products/devtools/ip/refdesigns/pcitarget.cfm

Not sure exactly what features you want in a compiler & debugger, but there are some out there, like http://www.pjrc.com/tech/8051/

Article: 83757
Mouarf,

If you need a 512 kbit FIFO, check out the Stratix I and II parts from Altera. The M-RAMs are 576 kbit. However, I don't know if $40 is going to be possible, especially when you add in the cost of a programming PROM. The same is true for the Xilinx V2P or V4 parts.

As for PCI cores, both Xilinx and Altera have cores that require an annual fee. The price depends on whether it is a slave or master core. As far as I know, Altera and Xilinx only support 3.3V PCI. Why do you need 5V?

Finally, you're not going to get a free 16-bit or 32-bit microprocessor core unless you use an open source solution. That said, both Xilinx and Altera have IP you can purchase. I would expect the fees to be similar to a PCI core's.

John

Article: 83758
John M wrote:
> Mouarf,
>
> If you need a 512 kbit FIFO, check out the Stratix I and II parts from
> Altera. The M-RAMs are 576 kbit. However, I don't know if $40 is going
> to be possible, especially when you add in the cost of a programming
> PROM. The same is true for the Xilinx V2P or V4 parts. As for PCI
> cores, both Xilinx and Altera have cores that require an annual fee.
> The price depends on whether it is a slave or master core. As far as I
> know, Altera and Xilinx only support 3.3V PCI. Why do you need 5V?
> Finally, you're not going to get a free 16-bit or 32-bit microprocessor
> core unless you use an open source solution. That said, both Xilinx
> and Altera have IP you can purchase. I would expect the fees to be
> similar to a PCI core's.
>
> John

OK, a 3.3V PCI would fit, but do you have an idea of the prices for the IP cores (PCI target, processor, CAN, etc.)?

Article: 83759
Benjamin Menküc wrote:
> Hi,
>
>> When you are cascading DCMs you should bring the second DCM out of
>> reset after some delay from the time the first DCM is locked. There
>> was an appnote which used a 16-bit shift register to delay the
>> 'locked' signal from the first DCM. That could be one of the problems.
>
> I have to do that now, because simulation doesn't work if I connect the
> inverted locked signal directly to the 2nd DCM.
>
> http://www.xilinx.com/xlnx/xil_ans_display.jsp?iCountryID=1&iLanguageID=1&getPagePath=19005&BV_SessionID=@@@@0453148829.1115333172@@@@&BV_EngineID=ccccaddeifmhfficflgcefldfhndfmo.0
>
> Here Xilinx says the same, however it seems that this is only necessary
> for simulation? Or is it any good in hardware too?

I don't know whether it's needed in hardware or not; in all probability it is needed in hardware too. But once you have it, why bother removing it? It just uses a single LUT.

> regards,
> Benjamin

Article: 83760
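[For reference, a minimal VHDL sketch of the shift-register delay described above. Signal and entity names are invented for illustration; the appnote's exact code may differ.]

library ieee;
use ieee.std_logic_1164.all;

entity dcm2_reset_delay is
  port (
    clk0_dcm1   : in  std_logic;  -- CLK0 output of the first DCM
    locked_dcm1 : in  std_logic;  -- LOCKED output of the first DCM
    rst_dcm2    : out std_logic   -- drives the RST pin of the second DCM
  );
end entity;

architecture rtl of dcm2_reset_delay is
  -- 16-stage shift register; written without a reset so that synthesis
  -- typically maps it to a single SRL16 (hence "just uses a single LUT")
  signal dly : std_logic_vector(15 downto 0) := (others => '0');
begin
  process (clk0_dcm1)
  begin
    if rising_edge(clk0_dcm1) then
      dly <= dly(14 downto 0) & locked_dcm1;
    end if;
  end process;

  -- hold the second DCM in reset until LOCKED has been high for 16 cycles
  rst_dcm2 <= not dly(15);
end architecture;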
Hi Brijesh,

yes, thanks. I'll leave it in there and not bother about it :)

regards,
Benjamin

Article: 83761
OK, I have received three votes for the spreadsheet.

That isn't going to convince anyone! I am sure there are more of you out there who would like the spreadsheet, but perhaps you are just not inclined to email me?

Please don't clutter up the newsgroup; mail me directly at austin@xilinx.com.

Austin

Ray Andraka wrote:
> Austin Lesea wrote:
>
>> How many folks out there want to have the local spreadsheet version
>> for estimating?
>
> I vote for a spreadsheet. Using the web thing to present power numbers
> to a customer is a real PITA.

Article: 83762
I know that we can vary the drive strength of the Virtex outputs and control the rise time. This question is just for my own understanding.

Using a capacitor to ground to slow the rise time feels like a very wrong thing to do, but I am not able to convince others why it is bad practice. The reasons I can come up with are:

1) It increases the total current and hence can contribute to ground bounce and crosstalk.
2) Total power dissipation in the device is increased.
3) It takes up board space, requires more components, etc. (for now let's stick to the electrical aspects and ignore this issue).

I am unable to convince them that this practice is bad and a real issue. Is there something I am missing?

Brijesh

Article: 83763
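[For reference, points 1) and 2) follow from the usual first-order RC relations; the numbers below are illustrative only, not from the thread.]

% Extra transient current demanded by an added load capacitor C:
I = C \frac{dV}{dt}
% e.g. 50 pF swung through 3.3 V in 1 ns demands roughly 165 mA.

% 10%-90% rise time of a simple RC (driver output resistance R_{out}, load C):
t_r \approx 2.2 \, R_{out} C

% Extra dynamic power burned charging and discharging C at toggle frequency f:
P = C V^2 f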
Hi,

I have looked a little bit into the standard. DVI uses TMDS, a differential scheme (similar to LVDS) with a 50 ohm pullup to 3.3V on each line at the receiver. The minimum peak-to-peak differential voltage is 150 mV and the maximum is 1560 mV.

Is there an I/O standard in the Virtex-II Pro that supports this TMDS link when I use external pullups? The RocketIOs accept a 175-2000 mV differential input voltage; might they be the right choice for DVI?

regards,
Benjamin

Article: 83764
Brijesh,

A cap directly at the output pin slows down the rise time, and is not all that bad. I say that only because it sometimes has to be done. It is a lot like a one-shot: you should be able to design everything without having to use one. Or, like a "GO TO" statement in a program, it is something to avoid, as it is considered bad practice.

Placing the cap at the receiver is really bad from a signal integrity standpoint: it makes for a huge reflection that may lead to discontinuities in the rising and falling edges.

(Much) preferable to a cap load is to set the SLOW attribute, or use a weaker drive output.

Austin

Brijesh wrote:
> I know that we can vary the drive strength of the Virtex outputs and
> control the rise time.
> [rest snipped]

Article: 83765
The differences between LVDS and TMDS are discussed here:
http://www.national.com/nationaledge/may01/lvds.html

regards,
Benjamin

Article: 83766
John Adair wrote:
> The Libraries Guide is always a good place to start for component
> instantiation. There are also some component templates available in
> ISE under the little lightbulb-style button.
>
> If you want a simple shift then doing it in VHDL or Verilog is also
> easy. If you write your code in the following style (VHDL shown),
> i.e. without a reset, most synthesisers will turn it into SRL16-based
> logic.
>
> process(clk)
> begin
>   if clk'event and clk='1' then
>     shift1 <= input;
>     shift2 <= shift1;
>     ...
>     shift(n) <= shift(n-1);
>   end if;
> end process;

Yikes! I guess I don't know why you would write a shift register that way.

signal shift : std_logic_vector(n downto 0);

process(clk)
begin
  if rising_edge(clk) then
    shift <= shift(n-1 downto 0) & input;
  end if;
end process;

Article: 83767
"Brijesh" <brijesh_xyz@cfrsi_xyz.com> wrote in message news:d5g004$hui$1@solaris.cc.vt.edu... > > I know that we can vary the drive strength of the Virtex outputs and > control the rise time. This just for my understanding of things. > > Using a capacitor to ground to slow the rise time feels like a very > wrong thing to do. But I am not able to convince others why its a bad > practise? > > The reasons I can come up with are > 1) It increases the total current and hence can contribute to ground > bounce and cross talk. > 2) Total power dissipation in the device is increased. > > 3) It takes up board space, requires more components etc.(for now lets > stick to the electrical aspect of things and ignore this issue) > > I am unable convince that this practise is bad and a real issue. > > Is there something I am missing? > > Brijesh I haven't "gotten to a good place" yet with respect to slowing rise times but I agree that adding capacitors to the output isn't an ideal solution. What might work better (even more board space) is adding series resistors at the drivers and capacitors on the other side of the resistor from the driver. The wave shape is less ideal, perhaps, but the high edge rate transients traveling down the wire are significantly reduced without forcing the high near-rail currents at the start of the transition. For local point-to-point signals with a good ground reference, a series resistor is still probably your best bet because you don't have many mechanisms to be affected by high risetimes - EMI is still decent as long as you don't switch routing planes far from a return path or cross plane splits.Article: 83768
Antony wrote:
> Hi, and thanks for the new files.
> I worked on it again yesterday and it still doesn't run properly, but
> at least now when I write a 16-bit value the system stalls (before, it
> couldn't write anything from 32 bit down to 8 bit: I always got back
> 0), so it's clear that something has changed (although I don't know if
> for the better :) ). I'll see if I can finally make it work!

Well, I'll admit to only using it with 32-bit values. There may very well be problems with other data widths, though I expect a fix would not be terribly difficult. That is definitely something that would be a lot easier to check in simulation rather than in hardware.

Article: 83769
Sean Durkin wrote:
> Hi folks,
>
> this is starting to drive me nuts: I use a Xilinx Parallel Cable IV to
> program some big FPGAs (big as in V2P70 and the like). Now iMPACT and
> ChipScope always open this cable in "Compatibility Mode"...
> [rest snipped]

Once we had problems with "Wigglers" for Motorola processors. It turned out that the problem was related to cabling. Using a good printer cable and keeping the cable between the Wiggler and the development board short solved the problem. I bet you did not change the cables while trying.

Regards,
Thomas

Article: 83770
On Fri, 06 May 2005 10:50:54 -0400, Brijesh <brijesh_xyz@cfrsi_xyz.com> wrote:
> I know that we can vary the drive strength of the Virtex outputs and
> control the rise time.
> [rest snipped]

I've avoided using capacitors on digital lines for the last 30 years. Actually, that's not quite true. About 10 years ago, another designer I was working with convinced me that we had a special situation in which a capacitor was called for. So now my policy is: every 20 years or so, go nuts.

Now that I think of it, I've also suggested using RCs on I2C drivers that are too fast. But that's about it. And some folks might say that it's acceptable to filter non-speed-critical digital signals that enter or leave a system, for EMI purposes. I haven't found it necessary, but I won't argue the point.

I avoid capacitors on digital lines because (1) the resulting rise time is usually poorly controlled, (2) the resulting additional signal delay is usually poorly controlled, (3) the capacitor is often used to cover up an underlying design problem, e.g., a glitchy decoder driving a synchronous input, and (4) I've always been able to find a better solution.

I don't know how many people on this group read Joel Spolsky's "Joel on Software" blog. Spolsky has an interesting idea called the Joel Test, a list of 12 questions with yes/no answers that you can use to evaluate the quality of a software team. You can find it here:

http://www.joelonsoftware.com/articles/fog0000000043.html

I think it would be interesting to come up with a similar list for digital hardware design. And now Brijesh has provided me with the inspiration to start such a list:

1) Do you use capacitors on digital lines?
2) Do you use analog one-shots in your designs?

A "yes" answer to either question should be accompanied by a whole lot of 'splaining.

One other thought: CPLD and FPGA vendors did the digital design community a great and lasting service by not including an "add a capacitor to this net" feature.

Bob Perlman
Cambrian Design Works

Article: 83771
Make sure that your board has proper termination on the JTAG lines. Scope the voltages on the JTAG plug and make sure they are stable. Use the mouse PS/2 port, not the keyboard one, for the cable power. Make sure you have updated chipset drivers. The odds are that your parallel port controller is made by Intel.

I wouldn't run any version of ISE older than 6.2.3; I noticed some fixes in the 6.2 service packs. Make sure the Device Manager shows that the port is using the ECP driver, and set it to use an interrupt if one is available, which it should be.

If all of the above appear correct, then recognize the truth: there is no spoon. The fact is that the DMA mode for the Parallel Cable IV stuff has never worked worth a crap for me. The impression I got from support when I called about it was that they get an awful lot of calls on the topic. The fact that they charge $500 for their USB cable is BS. They do that because they have a monopoly on the item and use the frustration caused by their parallel cables to drive the market.

Article: 83772
Hi all,

I need to generate a 20 MHz clock from a 10 MHz clock on a V2Pro.

The plan is to use 2 DCMs:

1st DCM:
10 MHz into CLKIN.
Use the CLKFX output with the default of M = 4 and D = 1 to get 40 MHz.
I need to leave CLKFB unconnected because CLK0 and CLK2X will be below 24 MHz. Also, since I am using CLKFX, I can leave CLKFB open, since it is OK for the CLKFX clock to be out of phase with the input 10 MHz clock.

2nd DCM:
40 MHz into CLKIN.
20 MHz out of CLKDV.
CLKFB gets CLK0.

First question - will that work?

Next question:

The input 10 MHz clock can be varied by +/- 25 parts per million to give a frequency offset. So the input period is (1/10e6) +/- 2.5 ps. The +/- 2.5 ps seems to be way less than the cycle and period jitter spec of the DCM, so I am not worried about that.

The input clock will be held at each offset for 20 ms minimum. Also, there is 800 us available for the DCMs to adjust to the new input frequency. I see that the lock time of the DCM is 10 ms; I assume this applies just at configuration time?

So, am I right in thinking that my DCM cascade will track my frequency offset when it is applied? Is there a spec somewhere that states how long it will take to adjust to the new input frequency, or is it instantaneous since the variance is only +/- 2.5 ps?

Thanks for your time guys,

Ken

Article: 83773
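[For reference, the period-offset arithmetic in the post checks out:]

T = \frac{1}{10\,\mathrm{MHz}} = 100\,\mathrm{ns}, \qquad
\Delta T = 25 \times 10^{-6} \times 100\,\mathrm{ns} = 2.5\,\mathrm{ps}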
Hi Sean,

Is there also a dongle on the port?

Syms.

Article: 83774
hi Ken,

First question - no. I tried almost the same thing on a Virtex-II (it was locking and losing lock periodically). What you should do is multiply by a larger number using CLKFX, as now, and then use FFs to divide the clock down before feeding it to the next DCM stage. In this fashion, the output jitter on DCM1/CLK0 is divided as well, making it suitable for the DCM2/CLKIN jitter tolerance. I tried it; it works PERFECTO!

What happens is that the DCM multiplies whatever jitter you have at your clock source, plus adds some cycle-to-cycle jitter of its own, due to its tapped-delay-line nature. I think there is a web page on the Xilinx website, or in the tools, that calculates the output jitter for a given input jitter.

Regarding your 2nd question... I never saw anything specifying the lock time in this case. However, I have noticed that the resets of DCMs cascaded in this way have to be held for much more than just 3/4/5 CLKIN cycles; exactly how much time, I cannot tell, but 20 ms is definitely sufficient.

Hope this helps.

Vladislav

"Ken" <no@spam.com> wrote in message news:d5g6lp$lth$03$1@news.t-online.com...
> Hi all,
>
> I need to generate a 20 MHz clock from a 10 MHz clock on a V2Pro.
> [rest snipped]
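[To make the multiply-then-FF-divide suggestion concrete, here is a minimal, untested VHDL sketch. Entity and signal names are invented, and the generic values should be checked against the Virtex-II Pro libraries guide: DCM1 synthesizes 80 MHz from 10 MHz with CLKFX (staying above the CLKFX minimum output frequency), then flip-flops divide by 4 to 20 MHz. The divided clock could then feed a second DCM, released from reset with the delayed-LOCKED trick shown earlier, if phase alignment to the input is required.]

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;
library unisim;
use unisim.vcomponents.all;

entity clk_10_to_20 is
  port (
    clk10_in : in  std_logic;   -- 10 MHz input clock
    rst      : in  std_logic;
    clk20    : out std_logic;   -- 20 MHz output
    locked   : out std_logic
  );
end entity;

architecture rtl of clk_10_to_20 is
  signal clk80_fx, clk80_bufg  : std_logic;
  signal div_cnt               : std_logic_vector(1 downto 0) := "00";
  signal clk20_div, clk20_bufg : std_logic;
  signal dcm1_locked           : std_logic;
begin
  -- DCM1: 10 MHz x 8 = 80 MHz via CLKFX; no feedback needed for CLKFX-only use
  dcm1 : DCM
    generic map (
      CLKFX_MULTIPLY => 8,
      CLKFX_DIVIDE   => 1,
      CLKIN_PERIOD   => 100.0,   -- ns
      CLK_FEEDBACK   => "NONE"
    )
    port map (
      CLKIN    => clk10_in,
      CLKFB    => '0',
      RST      => rst,
      DSSEN    => '0',
      PSCLK    => '0',
      PSEN     => '0',
      PSINCDEC => '0',
      CLKFX    => clk80_fx,
      LOCKED   => dcm1_locked
    );

  bufg80 : BUFG port map (I => clk80_fx, O => clk80_bufg);

  -- Flip-flop divider: 80 MHz / 4 = 20 MHz. Dividing the clock also divides
  -- the CLKFX jitter seen by whatever consumes (or re-DCMs) the 20 MHz clock.
  process (clk80_bufg)
  begin
    if rising_edge(clk80_bufg) then
      div_cnt <= std_logic_vector(unsigned(div_cnt) + 1);
    end if;
  end process;
  clk20_div <= div_cnt(1);

  bufg20 : BUFG port map (I => clk20_div, O => clk20_bufg);

  clk20  <= clk20_bufg;
  locked <= dcm1_locked;
end architecture;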