lattice reply, proper setting for no config: CFG0=CFG1=INITN=0, PROGRAMN=1 (not clear whether it must be driven to 1 or whether leaving it open is enough?)

re JTAGID=0000000: we have some devices that WORK but read ID 0 and can not be reprogrammed. hmm, and some devices that do not work and can not be reprogrammed either. maybe some OTP bit got flashed. there is nothing in the datasheet about how the JTAG ID reads if OTP is set, or whether a flash erase would fail or not if OTP is set.

Article: 152551
I was an FPGA engineer before, and I thought high-performance computing based on FPGAs would have a bright future. However, through my recent projects I have found that a GPU is more appropriate when there is an acceleration need.

In embedded systems:
  FPGA co-processing plan: Intel E6x5C
  GPU co-processing plan: AMD APU (with OpenCL support)
and in desktop systems:
  FPGA co-processing plan: full custom design, mostly based on a PCIe fabric
  GPU co-processing plan: nVidia CUDA (with OpenCV support)

If I choose FPGA co-processing, the algorithm has to be specifically optimized and the R&D time is considerable. If I choose the GPU plan, algorithm migration costs little time (even when the original is Matlab code), and the acceleration performance is also quite good.

In conclusion, FPGA acceleration only suits certain fixed applications. In the real world, however, many projects and algorithms are uncertain and arbitrary. At the same power consumption, the GPU plan may give better results. For a concrete project, I will consider a GPU or DSP first, and an FPGA last. Does everybody agree?

Article: 152552
vcar <hitsx@163.com> wrote:

(snip)
> In conclusion, FPGA acceleration only suits certain fixed
> applications. In the real world, however, many projects and
> algorithms are uncertain and arbitrary. At the same power
> consumption, the GPU plan may give better results. For a concrete
> project, I will consider a GPU or DSP first, and an FPGA last.

There have been companies over the years selling FPGA-based accelerator hardware, but none have done very well. GPU acceleration takes advantage of the economy of scale of graphical uses. FPGAs have not been very good for floating point, especially for floating point addition. Some fixed point algorithms, such as dynamic programming and convolution of very large data sets, can possibly take advantage of FPGA technology. One problem that I know of requires 5e19 fixed point adds per day, or about 6e14 per second.

> Does everybody agree?

I agree.

-- glen

Article: 152553
If what you need is a computation off-load engine for a standard CPU, with that CPU handling all the I/O tasks, then using a GPU would probably be the most appropriate implementation methodology. However, the phrase "horses for courses" always applies.

---------------------------------------
Posted through http://www.FPGARelated.com

Article: 152554
Hi,

I have a DDR input which every now and then gives me a nonfunctional implementation due to unlucky input data clocking. I thought I would try to use the variable delay input elements with the S control input, but I only got an error message from map that I did not connect to the IO as expected.

I have instantiated the IOB_DLY_ADJ between the top level pin name recorded in the ucf file and the input pin of the IDDR2 buffer in VHDL.

D >-- (I)IOB_DLY_ADJ(O) --> (D)IDDR2(Q0/Q1) => internal DDR

Without the IOB_DLY_ADJ the design feeds data, but sometimes I am unlucky with my timing.

In the ucf file I have:

NET "D" LOC = "AB2" |IOSTANDARD = "LVTTL" |IOBDELAY = "IFD";

According to the spartan3-hdl.pdf the delay buffer is supposed to be instantiated, but maybe for some reason the original IBUF is still used. I have not been able to find much more info on this in the Xilinx docs.

--
Svenn
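For reference, here is the wiring described above as a minimal VHDL sketch. It is illustrative only: the entity and signal names are made up, and the IOB_DLY_ADJ component declaration is written from the port usage given in the post (verify the exact primitive name and port list against the Spartan-3 generation libraries guide); only IDDR2 is taken from unisim as-is.

    library ieee;
    use ieee.std_logic_1164.all;
    library unisim;
    use unisim.vcomponents.all;

    entity ddr_in is
      port (
        d      : in  std_logic;  -- pad; LOC'd to AB2 in the ucf
        clk0   : in  std_logic;  -- DDR clock
        clk180 : in  std_logic;  -- inverted DDR clock
        dly_s  : in  std_logic;  -- delay control input
        q0     : out std_logic;  -- data captured on the clk0 edge
        q1     : out std_logic   -- data captured on the clk180 edge
      );
    end entity ddr_in;

    architecture rtl of ddr_in is
      -- Declared locally, with the ports as used in the post (I/S/O).
      component IOB_DLY_ADJ
        port (O : out std_logic; I : in std_logic; S : in std_logic);
      end component;
      signal d_dly : std_logic;
    begin
      -- Delay element between the pad and the DDR input flip-flops.
      u_dly : IOB_DLY_ADJ
        port map (I => d, S => dly_s, O => d_dly);

      -- Double-data-rate input register pair (unisim primitive).
      u_ddr : IDDR2
        generic map (DDR_ALIGNMENT => "NONE", SRTYPE => "SYNC")
        port map (
          D  => d_dly,
          C0 => clk0,
          C1 => clk180,
          CE => '1',
          R  => '0',
          S  => '0',
          Q0 => q0,
          Q1 => q1
        );
    end architecture rtl;

With this structure, the IOBDELAY = "IFD" constraint from the ucf should apply to the path into the IDDR2 registers; if map still complains, checking the map report for which input buffer was actually inferred on pad D is a reasonable first step.

Article: 152555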
Fellow Architects,

At every computer conference I attend, I see numerous papers that show how to incrementally increase the capabilities of present products, plus a paper or two about some aspect of distant future processors. There is a sort of consistency among these papers that, taken together, creates an image of the manifest destiny of processors that are VERY different from present-day processors and networks. I am interested in that image, and I suspect that others here may also be interested.

Here is the sort of image that I see emerging. Perhaps you have your own very different vision?

1. Processors would be able to automatically reconfigure around their defects with such great facility that reject components will be nearly eliminated. This would make it possible to build processors without any practical limits to complexity. Several papers have been presented explaining how this could be done with Genetic Algorithm (GA) approaches. Initial reconfiguring would be done at manufacture, but power-on reconfiguring would adapt to on-shelf and in-service failures. Processors with large numbers of defects would be sold as lesser performing processors.

2. An operating system would distribute the work as tasks, with each task having input and output vectors. Any task that fails to successfully complete would be re-executed on other sections of the processor while diagnostics identify the problem in the failed section, which would then be reconfigured around the new defect. This would allow systems to keep running and continue producing correct results, despite run-time failures.

3. Memory would be integral to the CPU, and would be in the form of thousands (or millions) of small memory banks that would eliminate the memory bus bottleneck. Switched memory buses could quickly move blocks of data around.

4. The processor would be organized as a small (2-4) number of CPUs, each having a large number of sub-processors capable of dynamic reconfiguration to specialize in the computation at hand. That reconfiguration would be capable of the extensive data-chaining needed to execute complex loops as single instructions, and do so in just a few machine cycles, after suitable setup. Sub-processors would probably be reconfigurable for either SIMD or MIMD operation.

5. The system would probably use asynchronous logic extensively, not only for its asynchronous capabilities, but also for its inherent ability to automatically recognize its own malfunctions and trigger reconfiguration.

6. A new language with APL-like semantics would allow programmers to state their wishes at a high enough level for compilers to determine the low-level method of execution that best matches the particular hardware that is available to execute it.

7. There are other items on this list, but they aren't as easy to explain, and they may not be essential to achieve the manifest destiny of processors.

Note that the Billions of dollars now spent on developing GPU-based and large network-based processors, along with the software to run on them, will have been WASTED as soon as Manifest Destiny processors become available. Further, the personnel who fail to quickly make the transition to Manifest Destiny processors will probably become permanently unemployed, as has happened at various past points of major architectural inflection.

Apparently the only conference around with a sufficiently broad interest and attendance to host discussions at this level is WORLDCOMP. This would provide a peer reviewed avenue of legitimation for Manifest Destiny research. I have talked with Hamid, the General Chairman, about hosting these discussions, and he is OK with it, providing that I can drum up enough interest. So, I need to determine the level of interest out there in a more distant future of computing that lies beyond just the next product.

Conferences aside, please email me or post your level of interest, and please pass this on to any others you know who might be interested.

1. I am interested as a lurker.
2. I am interested in participating in on-line discussions.
3. I am interested in attending a conference.
4. I am interested in presenting at a conference.
5. I am interested in chairing and/or helping any way I can.
6. I am a major player with some money to help advance this cause.

Thanks for your help.

Steve dot Richfield at gmail dot com

Article: 152556
In comp.arch.fpga Steve Richfield <steve.richfield.spam@gmail.com> wrote:

> At every computer conference I attend, I see numerous papers that show
> how to incrementally increase the capabilities of present products,
> plus a paper or two about some aspect of distant future processors.
> There is a sort of consistency among these papers that, taken
> together, creates an image of the manifest destiny of processors that
> are VERY different from present-day processors and networks. I am
> interested in that image, and I suspect that others here may also be
> interested.

I am reading in comp.arch.fpga, but comp.arch readers may have different ideas.

> Here is the sort of image that I see emerging. Perhaps you have your
> own very different vision?

> 1. Processors would be able to automatically reconfigure around their
> defects with such great facility that reject components will be nearly
> eliminated. This would make it possible to build processors without
> any practical limits to complexity. Several papers have been presented
> explaining how this could be done with Genetic Algorithm (GA)
> approaches. Initial reconfiguring would be done at manufacture, but
> power-on reconfiguring would adapt to on-shelf and in-service
> failures. Processors with large numbers of defects would be sold as
> lesser performing processors.

Reminds me of stories about Russian processors that came with a 'bad instruction' list the way disk drives (used to) come with a bad blocks list.

If you follow such conferences, you necessarily get far-out ideas. But if you look at the actual processors in use today, they are not so different from 40 years ago. Bigger and faster, yes, but otherwise not that different.

> 2. An operating system would distribute the work as tasks, with each
> task having input and output vectors. Any task that fails to
> successfully complete would be re-executed on other sections of the
> processor while diagnostics identify the problem in the failed
> section, which would then be reconfigured around the new defect. This
> would allow systems to keep running and continue producing correct
> results, despite run-time failures.

I suppose there are some problems that could work that way. A web browser updating multiple windows on a page could farm out each to a different task. But many computational problems don't divide up that way.

> 3. Memory would be integral to the CPU, and would be in the form of
> thousands (or millions) of small memory banks that would eliminate the
> memory bus bottleneck. Switched memory buses could quickly move blocks
> of data around.

> 4. The processor would be organized as a small (2-4) number of CPUs,
> each having a large number of sub-processors capable of dynamic
> reconfiguration to specialize in the computation at hand. That
> reconfiguration would be capable of the extensive data-chaining needed
> to execute complex loops as single instructions, and do so in just a
> few machine cycles, after suitable setup. Sub-processors would
> probably be reconfigurable for either SIMD or MIMD operation.

Very few problems divide up that way. For those that do, static reconfiguration is usually the best choice. Dynamic reconfiguration is fun, but most often doesn't seem to work well with real problems.

> 5. The system would probably use asynchronous logic extensively, not
> only for its asynchronous capabilities, but also for its inherent
> ability to automatically recognize its own malfunctions and trigger
> reconfiguration.

> 6. A new language with APL-like semantics would allow programmers to
> state their wishes at a high enough level for compilers to determine
> the low-level method of execution that best matches the particular
> hardware that is available to execute it.

APL hasn't been popular over the years, and it could have done most of this for a long time. On the other hand, you might look at the ZPL language. Not as high-level, but maybe more practical.

> 7. There are other items on this list, but they aren't as easy to
> explain, and they may not be essential to achieve the manifest destiny
> of processors.

> Note that the Billions of dollars now spent on developing GPU-based
> and large network-based processors, along with the software to run on
> them will have been WASTED as soon as Manifest Destiny processors
> become available. Further, the personnel who fail to quickly make the
> transition to Manifest Destiny processors will probably become
> permanently unemployed, as has happened at various past points of
> major architectural inflection.

Consider that direct descendants of the 35 year old Z80 are still very popular, among others in many calculators and controllers. New developments might be used for certain problems, but the old problems can be handled just fine with older processors.

For many years now, the economy of scale of people buying faster processors to browse the web or run spreadsheets has supplied the computational sciences (computational physics, computational chemistry, and computational biology) with cheap, fast machines. Machines that wouldn't have had sufficient economy of scale without those other uses. The whole idea behind GPU processors is that the economy of scale of building graphics engines for gamers can also be used for computational science.

> Apparently the only conference around with a sufficiently broad
> interest and attendance to host discussions at this level is
> WORLDCOMP. This would provide a peer reviewed avenue of legitimation
> for Manifest Destiny research. I have talked with Hamid, the General
> Chairman, about hosting these discussions, and he is OK with it,
> providing that I can drum up enough interest. So, I need to determine
> the level of interest out there in a more distant future of computing
> that lies beyond just the next product.

Consider the latest deviation from traditional processor design, the VLIW Itanium. VLIW has been around for years, and never did very well. Some thought its time had come, but it is sinking just like the similarly named boat.

> Conferences aside, please email me or post your level of interest, and
> please pass this on to any others you know who might be interested.

-- glen

Article: 152557
I think I mentioned this problem a year or so ago, but have new data. We previously had problems with whiskers shorting adjacent pins on some boards that have a Xilinx XC9572-15TQG100C part. These whiskers were laying flat on the board, so their origin was not completely clear.

Now, I have some boards that were reflow soldered some months ago, and were only finished now. On inspection of the CPLD, clear evidence of tin whisker growth is obvious. I think EVERY chip has whisker growth on at least one pin! This is quite a concern, as this equipment may have a 20 year operating life.

There are 12 other fine-pitch parts on this board, and none of those show signs of the whiskers.

I reported the first occurrence to Xilinx at the time, including microphotographs, and they basically blew me off, saying it was obviously my process. We are using tin-lead solder paste on tin-lead plated boards.

Does anyone have any idea why we are experiencing this, or what can be done to prevent these chips from developing shorts over time?

Thanks,

Jon

Article: 152558
Jon Elson <jmelson@wustl.edu> wrote:
...
> I reported the first occurrence to Xilinx at the time, including
> microphotographs,

Please put the pictures on the web ...

--
Uwe Bonnes                bon@elektron.ikp.physik.tu-darmstadt.de
Institut fuer Kernphysik  Schlossgartenstrasse 9  64289 Darmstadt
--------- Tel. 06151 162516 -------- Fax. 06151 164321 ----------

Article: 152559
> If you follow such conferences, you necessarily get far-out ideas.
> But if you look at the actual processors in use today, they
> are not so different from 40 years ago. Bigger and faster, yes,
> but otherwise not that different.

Actually, if you look back at "far out research" from years ago, I think that even though the machines we use are "like the ones from back then" from a programming point of view, they are also in some ways "like the far-out ideas from back then". E.g. the "Processor In Memory" still hasn't happened, but current CPUs have a boat load of on-chip memory. So I think the way to predict the future is to take those far-out ideas and try to see "how will future engineers manage to use such techniques while still running x86/ARM code". After all, experience shows that the part that's harder to change is the software, despite its name.

        Stefan

Article: 152560
On 09/14/2011 04:54 PM, Uwe Bonnes wrote:
> Jon Elson<jmelson@wustl.edu> wrote:
> ...
>> I reported the first occurrence to Xilinx at the time, including
>> microphotographs,
>
> Please put the pictures on the web
> ...

They are really crummy, and show the "old" problem, some whisker-like strands that lay across the board. This new condition is different, and shows REALLY typical-looking tin whiskers that are growing out of the bends of the gull-wing leads on these Xilinx QFP parts.

The last time I tried photographing this, I got very mediocre results; the stereo zoom microscope setup we have is optimized for hand rework of parts, and the light level decreases as you increase magnification. So, although I can see what is going on quite clearly, I doubt the pictures would be very definitive. But, I have NO doubt, whatsoever, that what I am seeing NOW matches the published tin whisker photos that are ubiquitous on the web.

What has me worried is that these are essentially new boards, just going through testing before being sent out to researchers who will be using them for a number of years. If I saw this amount of whisker growth in the six months these boards have been in storage after reflow, it may indicate a LOT of problems in the future. It has definitely gotten me worried!

(As for posting this as a reply to another thread, my first post as a new thread was rejected by some news server, but I could not discern the reason for the rejection.)

Jon

Article: 152561
On Sep 14, 4:08 pm, Stefan Monnier <monn...@iro.umontreal.ca> wrote:
> E.g. the "Processor In Memory" still hasn't happened, but current CPUs
> have a boat load of on-chip memory. So I think the way to predict the
> future is to take those far-out ideas and try to see "how will future
> engineers manage to use such techniques while still running x86/ARM
> code". After all, experience shows that the part that's harder to
> change is the software, despite its name.

My main question concerning the original poster's projection of the logical future for processors is that reconfigurability comes with a large amount of overhead. So if leaving reconfigurability out improves speed by a factor of 3, say, it won't be popular.

However, that's only true if the reconfigurability is fine-grained, as on an FPGA. On something like IBM's recent PowerPC chip with 18 CPUs, where one of them can be marked as bad, so that it uses one for supervision and 16 for work, there is almost no overhead.

So, just as larger caches are the present-day form of memory on the chip, coarse-grained configurability will be the way to increase yields, if not the way to progress to that old idea of wafer-scale integration. (That was, of course, back in the days of three-inch wafers. Fitting an eight-inch wafer into a convenient consumer package, let alone dealing with its heat dissipation, hardly bears thinking about.)

John Savard

Article: 152562
Has anyone been able to get iMPACT or ChipScope working on SL6.1/CentOS6/RHEL6?

It failed with the xsetup GUI, but the log only gave a useless error message that it failed.

When I tried to run the install script in

LabTools/LabTools/bin/lin64/install_script/install_drivers

I got a bunch of compile errors; apparently it's incompatible with a 2.6.32 kernel.

I also couldn't find libusb-driver in 13.2; the most recent copy that I had was in 10.

Article: 152563
On Sep 15, 10:11 am, Jon Elson <jmel...@wustl.edu> wrote:
> This new condition is different, and
> shows REALLY typical-looking tin whiskers that are growing out of the
> bends of the gull-wing leads on these Xilinx QFP parts.

Interesting location, suggests stress helps?
Are these on both bends, & on the compression, or tension part of each?
Were these manually or reflow soldered?
At Lead-based, or Lead-free temperatures?
Post cleaned or not?

-jg

Article: 152564
On 09/15/2011 08:51 AM, General Schvantzkoph wrote:
> Has anyone been able to get iMPACT or ChipScope working on
> SL6.1/CentOS6/RHEL6?
>
> It failed with the xsetup GUI but it only gave a useless error message
> that it failed in the log.
>
> When I tried to run the install script in
>
> LabTools/LabTools/bin/lin64/install_script/install_drivers
>
> I got a bunch of compile errors, apparently it's incompatible with a
> 2.6.32 kernel.
>
> I also couldn't find libusb-driver in 13.2, the most recent copy that I
> had was in 10.

I had a similar problem getting my Xilinx USB-JTAG cable working on Fedora 13. I ended up using the open source Linux driver instead, works fine:

http://rmdir.de/~michael/xilinx/

Steve Ecob
Silicon On Inspiration
Sydney Australia

Article: 152565
On Thu, 15 Sep 2011 12:42:21 +1000, Steve wrote:

> On 09/15/2011 08:51 AM, General Schvantzkoph wrote:
>> Has anyone been able to get iMPACT or ChipScope working on
>> SL6.1/CentOS6/RHEL6?
>>
>> It failed with the xsetup GUI but it only gave a useless error message
>> that it failed in the log.
>>
>> When I tried to run the install script in
>>
>> LabTools/LabTools/bin/lin64/install_script/install_drivers
>>
>> I got a bunch of compile errors, apparently it's incompatible with a
>> 2.6.32 kernel.
>>
>> I also couldn't find libusb-driver in 13.2, the most recent copy that I
>> had was in 10.
>>
> I had a similar problem getting my Xilinx USB-JTAG cable working on
> Fedora 13. I ended up using the open source Linux driver instead, works
> fine:
>
> http://rmdir.de/~michael/xilinx/
>
> Steve Ecob
> Silicon On Inspiration
> Sydney Australia

I've already tried to build the libusb-driver but it won't build on SL6.1.

Article: 152566
Steve Richfield wrote:
>
> 1. Processors would be able to automatically reconfigure around their
> defects with such great facility that reject components will be nearly
> eliminated. This would make it possible to build processors without
> any practical limits to complexity. Several papers have been presented
> explaining how this could be done with Genetic Algorithm (GA)
> approaches. Initial reconfiguring would be done at manufacture, but
> power-on reconfiguring would adapt to on-shelf and in-service
> failures. Processors with large numbers of defects would be sold as
> lesser performing processors.

Defect density is hardly a limiting factor. Thermal and I/O are, both also being packaging and substrate issues. Also, it would introduce pain if different chips with the same part number, revision level, and date code had different performance. Probably no fun for the guys in the testing department, either.

I'm reminded of a friend of mine that worked on binary code rehosting tools for Clipper. He'd rant and rave about all the hardware bugs being hidden by the assembler. When I told him that I learned from this newsgroup that yield was being enhanced by zapping individual bad cache lines to make them permanently invalid, he just laughed.

Article: 152567
glen herrmannsfeldt wrote:
>
> In comp.arch.fpga Steve Richfield <steve.richfield.spam@gmail.com> wrote:
>
>> Note that the Billions of dollars now spent on developing GPU-based
>> and large network-based processors, along with the software to run on
>> them will have been WASTED as soon as Manifest Destiny processors
>> become available. Further, the personnel who fail to quickly make the
>> transition to Manifest Destiny processors will probably become
>> permanently unemployed, as has happened at various past points of
>> major architectural inflection.
>
> Consider that direct descendants of the 35 year old Z80 are still
> very popular, among others in many calculators and controllers.
> New developments might be used for certain problems, but the old
> problems can be handled just fine with older processors.

8051 and PIC architecture hardware and software engineers are still gainfully employed, perhaps more now than ever before. Maybe he was referring to the 6502?

Article: 152568
Quadibloc wrote:
>
> So, just as larger caches are the present-day form of memory on the
> chip, coarse-grained configurability will be the way to increase
> yields, if not the way to progress to that old idea of wafer-scale
> integration. (That was, of course, back in the days of three-inch
> wafers. Fitting an eight-inch wafer into a convenient consumer
> package, let alone dealing with its heat dissipation, hardly bears
> thinking about.)

Oh, sure it does. Just have four of them on the top of the box, put it in the kitchen, and call it a stove.

Article: 152569
In article <j4r508$rgl$1@speranza.aioe.org>, glen herrmannsfeldt <gah@ugcs.caltech.edu> wrote:
> In comp.arch.fpga Steve Richfield <steve.richfield.spam@gmail.com> wrote:
>
>> 5. The system would probably use asynchronous logic extensively, not
>> only for its asynchronous capabilities, but also for its inherent
>> ability to automatically recognize its own malfunctions and trigger
>> reconfiguration.

This is a meaning of the term "asynchronous" with which I was previously unfamiliar.

>> 6. A new language with APL-like semantics would allow programmers to
>> state their wishes at a high enough level for compilers to determine
>> the low-level method of execution that best matches the particular
>> hardware that is available to execute it.
>
> APL hasn't been popular over the years, and it could have done
> most of this for a long time. On the other hand, you might look
> at the ZPL language. Not as high-level, but maybe more practical.

Its status as the leading write-only language has been taken over by Perl; despite the claims of its proponents, it never was exceptionally useful for scientific calculations or anything else. Also, its model is a fair match to the computers that were being fantasised about in the 1980s, rather than the 2000s.

I am very much in favour of people doing serious future thinking, but it would have to be a lot better-informed and hard-headed, and preferably more radical, than this. For example, there are people starting to think about genuinely unreliable computation, of the sort where you just have to live with ALL paths being unreliable. After all, we all use such a computer every day ....

Regards,
Nick Maclaren.

Article: 152570
Is there any particular reason to compile your own libusb instead of using the distribution packages? To make the Xilinx JTAG cable work in RHEL/CentOS/SL 6.x, do the following steps. There is a detailed description on my website http://www.sensor-to-image.cz/doku.php?id=eda:xilinx but unfortunately it is in Czech language only. Sorry.

1. Install and "fix" libusb:

yum install libusb libusb1 fxload
cd /usr/lib64   (or /usr/lib if you are running a 32b system)
ln -s libusb-1.0.so.0.0.0 libusb.so

2. "Fix" the Xilinx cable setup script <xilinx_install_dir>/ISE_DS/ISE/bin/lin64/setup_pcusb (or the same path with lin instead of lin64), which does not detect udev correctly:

# Use udev always
#TP_USE_UDEV="0"
#TP_UDEV_ENABLED=`ps -e | grep -c udevd`
TP_USE_UDEV="1"
TP_UDEV_ENABLED="1"

3. Run the script from its directory:

cd <xilinx_install_dir>/ISE_DS/ISE/bin/lin64   (or lin instead of lin64)
./setup_pcusb

4. The generated udev rule uses wrong syntax. The rule for the current version of udev, /etc/udev/rules.d/xusbdfwu.rules, must look like this (long lines must be retained, see my website for proper formatting):

# version 0003
ATTR{idVendor}=="03fd", ATTR{idProduct}=="0008", MODE="666"
SUBSYSTEM=="usb", ACTION=="add", ATTR{idVendor}=="03fd", ATTR{idProduct}=="0007", RUN+="/sbin/fxload -v -t fx2 -I /usr/share/xusbdfwu.hex -D $tempnode"
SUBSYSTEM=="usb", ACTION=="add", ATTR{idVendor}=="03fd", ATTR{idProduct}=="0009", RUN+="/sbin/fxload -v -t fx2 -I /usr/share/xusb_xup.hex -D $tempnode"
SUBSYSTEM=="usb", ACTION=="add", ATTR{idVendor}=="03fd", ATTR{idProduct}=="000d", RUN+="/sbin/fxload -v -t fx2 -I /usr/share/xusb_emb.hex -D $tempnode"
SUBSYSTEM=="usb", ACTION=="add", ATTR{idVendor}=="03fd", ATTR{idProduct}=="000f", RUN+="/sbin/fxload -v -t fx2 -I /usr/share/xusb_xlp.hex -D $tempnode"
SUBSYSTEM=="usb", ACTION=="add", ATTR{idVendor}=="03fd", ATTR{idProduct}=="0013", RUN+="/sbin/fxload -v -t fx2 -I /usr/share/xusb_xp2.hex -D $tempnode"
SUBSYSTEM=="usb", ACTION=="add", ATTR{idVendor}=="03fd", ATTR{idProduct}=="0015", RUN+="/sbin/fxload -v -t fx2 -I /usr/share/xusb_xse.hex -D $tempnode"

5. Connect/reconnect your cable, check dmesg, test iMPACT/ChipScope.

Regards,
Jan

Sorry if the post appears twice. I had some problems posting the message.

On Thu, 2011-09-15 at 03:02 +0000, General Schvantzkoph wrote:
> On Thu, 15 Sep 2011 12:42:21 +1000, Steve wrote:
>
>> On 09/15/2011 08:51 AM, General Schvantzkoph wrote:
>>> Has anyone been able to get iMPACT or ChipScope working on
>>> SL6.1/CentOS6/RHEL6?
>>>
>>> It failed with the xsetup GUI but it only gave a useless error message
>>> that it failed in the log.
>>>
>>> When I tried to run the install script in
>>>
>>> LabTools/LabTools/bin/lin64/install_script/install_drivers
>>>
>>> I got a bunch of compile errors, apparently it's incompatible with a
>>> 2.6.32 kernel.
>>>
>>> I also couldn't find libusb-driver in 13.2, the most recent copy that I
>>> had was in 10.
>>>
>> I had a similar problem getting my Xilinx USB-JTAG cable working on
>> Fedora 13. I ended up using the open source Linux driver instead, works
>> fine:
>>
>> http://rmdir.de/~michael/xilinx/
>>
>> Steve Ecob
>> Silicon On Inspiration
>> Sydney Australia
>
> I've already tried to build the libusb-driver but it won't build on SL6.1.
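As a quick sanity check to go with step 5 above (an illustrative addition, not from the original steps; the 03fd vendor ID is taken from the udev rules in the post), the cable should show up in lsusb after replugging, and its product ID should change once fxload has loaded the firmware:

lsusb | grep 03fd

If nothing matches, re-check the rules file name and re-plug the cable before retrying iMPACT.

Article: 152571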
Sir,

When we run our VHDL program it synthesizes and is implemented without any error. The place and route report gives:

Generating Clock Report
**************************

+---------------------+--------------+------+------+------------+-------------+
| Clock Net           | Resource     |Locked|Fanout|Net Skew(ns)|Max Delay(ns)|
+---------------------+--------------+------+------+------------+-------------+
| clk_BUFGP           | BUFGMUX0P    |  No  | 1886 |   0.280    |    1.257    |
+---------------------+--------------+------+------+------------+-------------+
| s_clk               | BUFGMUX5S    |  No  |  174 |   0.205    |    1.238    |
+---------------------+--------------+------+------+------------+-------------+
| hyperdis/h_clk      | BUFGMUX2P    |  No  |   38 |   0.273    |    1.250    |
+---------------------+--------------+------+------+------------+-------------+
| s_clk1              | Local        |      |   62 |   0.213    |    2.535    |
+---------------------+--------------+------+------+------------+-------------+
| trpcnt_cmp_eq0000   | Local        |      |   18 |   0.000    |    1.360    |
+---------------------+--------------+------+------+------------+-------------+

* Net Skew is the difference between the minimum and maximum routing only delays for the net. Note this is different from Clock Skew which is reported in the TRCE timing report. Clock Skew is the difference between the minimum and maximum path delays which includes logic delays.

Timing Score: 232989

INFO:Timing:2761 - N/A entries in the Constraints list may indicate that the constraint does not cover any paths or that it has no requested value.

Asterisk (*) preceding a constraint indicates it was not met. This may be due to a setup or hold violation.

------------------------------------------------------------------------------------------------------
  Constraint                                | Check | Worst Case |  Best Case | Timing |   Timing
                                            |       |      Slack | Achievable | Errors |    Score
------------------------------------------------------------------------------------------------------
  Autotimespec constraint for clock net clk | SETUP |        N/A |   45.997ns |    N/A |        0
  _BUFGP                                    | HOLD  |    0.543ns |            |      0 |        0
------------------------------------------------------------------------------------------------------
  Autotimespec constraint for clock net hyp | SETUP |        N/A |   20.484ns |    N/A |        0
  erdis/h_clk                               | HOLD  |    0.658ns |            |      0 |        0
------------------------------------------------------------------------------------------------------
* Autotimespec constraint for clock net s_c | SETUP |        N/A |   11.313ns |    N/A |        0
  lk1                                       | HOLD  |   -2.837ns |            |    124 |   232989
------------------------------------------------------------------------------------------------------

1 constraint not met.

INFO:Timing:2761 - N/A entries in the Constraints list may indicate that the constraint does not cover any paths or that it has no requested value.

Generating Pad Report.

All signals are completely routed.

Sir, I want to know the meaning of "1 constraint not met" and how I can resolve it.

Thanks,
Varun

---------------------------------------
Posted through http://www.FPGARelated.com
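Reading the report above: the unmet constraint is the one flagged with the asterisk, the auto-generated ("Autotimespec") constraint on s_clk1, which fails its HOLD check with -2.837 ns of slack and 124 timing errors. "Autotimespec" indicates PAR invented the constraint itself because none was supplied in the ucf; note also that the clock report shows s_clk1 on "Local" routing rather than a global clock buffer, a common cause of skew-induced hold violations. A minimal sketch of an explicit user constraint, assuming s_clk1 is meant to run at 50 MHz (the net name comes from the report; the 20 ns period is purely illustrative):

NET "s_clk1" TNM_NET = "tnm_s_clk1";
TIMESPEC "TS_s_clk1" = PERIOD "tnm_s_clk1" 20 ns HIGH 50%;

A constraint alone will not remove a hold violation caused by local clock skew, though; routing s_clk1 through a BUFG (or deriving it from a DCM output) is the usual structural fix.

Article: 152572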
Jon Elson <jmelson@wustl.edu> wrote:
> I think I mentioned this problem a year or so ago, but have new data.
> We previously had problems with whiskers shorting adjacent pins on some
> boards that have a Xilinx XC9572-15TQG100C part. These whiskers were
> laying flat on the board, so their origin was not completely clear.
>
> Now, I have some boards that were reflow soldered some months ago, and
> were only finished now. On inspection of the CPLD, clear evidence of
> tin whisker growth is obvious. I think EVERY chip has whisker growth
> on at least one pin! This is quite a concern, as this equipment may
> have a 20 year operating life.
>
> There are 12 other fine-pitch parts on this board, and none of those
> show signs of the whiskers.
>
> I reported the first occurrence to Xilinx at the time, including
> microphotographs, and they basically blew me off, saying it was
> obviously my process. We are using tin-lead solder paste on tin-lead
> plated boards.
>
> Does anyone have any idea why we are experiencing this, or what can be
> done to prevent these chips from developing shorts over time?

My guess is that you'll need to look at the temperature profile of the soldering process. I'd get some lead-free soldering experts to look at the problem.

--
Failure does not prove something is impossible, failure simply indicates you are not using the right tools...
nico@nctdevpuntnl (punt=.)
--------------------------------------------------------------

Article: 152573
In comp.arch.fpga nmm1@cam.ac.uk wrote:

(snip)
>> In comp.arch.fpga Steve Richfield <steve.richfield.spam@gmail.com> wrote:
>>> 5. The system would probably use asynchronous logic extensively, not
>>> only for its asynchronous capabilities, but also for its inherent
>>> ability to automatically recognize its own malfunctions and trigger
>>> reconfiguration.
>
> This is a meaning of the term "asynchronous" with which I was
> previously unfamiliar.

It does seem a little unusual. Asynchronous logic, sometimes also known as self-timed logic, has been around for years. Some of it is described in:

http://en.wikipedia.org/wiki/Asynchronous_logic

I suppose I believe that some failure modes could be detected and a corrective action initiated.

-- glen

Article: 152574
The OP has a duplicate thread at:

http://forums.xilinx.com/t5/Timing-Analysis/Constraint-not-met-in-Place-and-Route-Report/m-p/177534

---------------------------------------
Posted through http://www.FPGARelated.com