On Mar 10, 2:16 am, Aleš Svetek <ales.sve...@gmDELail.com> wrote:
> On 10/03/2011 03:45, MBodnar wrote:
> > On Mar 8, 5:58 pm, DaMunky89 <shwankymu...@gmail.com> wrote:
> >> On Mar 3, 2:38 am, Ale Svetek <ales.sve...@gmDELail.com> wrote:
> >>> On 03/03/2011 01:01, DaMunky89 wrote:
> >>>> Alright, so I'm trying to compile the example projects from xapp1026,
> >>>> following the instructions included in xapp1026.pdf:
> >>>> www.xilinx.com/support/documentation/application.../xapp1026.pdf
> >>>>
> >>>> I'm using the Xilinx SDK, and successfully managed to import the
> >>>> Hardware Platform Specifications and four xapp1026 example projects.
> >>>> My current setup is as follows:
> >>>>
> >>>> OS: Windows 7 x64
> >>>> SDK Release Version: 12.3 Build SDK_MS3.70d
> >>>>
> >>>> When I try to "build all", I get the following error:
> >>>>
> >>>> **** Build of configuration Debug for project sock_apps ****
> >>>>
> >>>> make all
> >>>> Building file: ../dispatch.c
> >>>> Invoking: MicroBlaze gcc compiler
> >>>> mb-gcc -Wall -O0 -g3 -c -fmessage-length=0 -mxl-soft-mul -MMD -MP
> >>>> -MF"dispatch.d" -MT"dispatch.d" -o"dispatch.o" "../dispatch.c"
> >>>> ../dispatch.c:19:23: error: lwip/inet.h: No such file or directory
> >>>> ../dispatch.c:20:26: error: lwip/ip_addr.h: No such file or directory
> >>>> ../dispatch.c: In function print_headers:
> >>>> ../dispatch.c:27: warning: implicit declaration of function xil_printf
> >>>> ../dispatch.c:32: warning: implicit declaration of function print_echo_app_header
> >>>> ../dispatch.c:35: warning: implicit declaration of function print_rxperf_app_header
> >>>> ../dispatch.c:38: warning: implicit declaration of function print_txperf_app_header
> >>>> ../dispatch.c:41: warning: implicit declaration of function print_tftp_app_header
> >>>> ../dispatch.c:44: warning: implicit declaration of function print_web_app_header
> >>>> ../dispatch.c: In function launch_app_threads:
> >>>> ../dispatch.c:60: warning: implicit declaration of function sys_thread_new
> >>>> ../dispatch.c:62: error: DEFAULT_THREAD_PRIO undeclared (first use in this function)
> >>>> ../dispatch.c:62: error: (Each undeclared identifier is reported only once
> >>>> ../dispatch.c:62: error: for each function it appears in.)
> >>>> make: *** [dispatch.o] Error 1
> >>>>
> >>>> It looks like it's complaining that I haven't added lwip to the
> >>>> include path. The xapp1026 instructions don't mention anything about
> >>>> this, so I thought some kind of Xilinx distribution of lwip was
> >>>> already included, but I guess not. Therefore, my question overall is,
> >>>> how do I go about adding lwip to my Xilinx SDK in such a way that I
> >>>> can get these projects to build? If that isn't the solution to these
> >>>> errors, what is?
> >>>
> >>> It sounds like you didn't include the lwip library in your BSP. It is
> >>> not included by default.
> >>>
> >>> You can add it by expanding your BSP in Project Explorer and
> >>> double-clicking the *.mss file. After that click "Modify this BSP's
> >>> Settings" and then include lwip130. You should also modify lwip's
> >>> settings to suit your needs.
> >>>
> >>> ~Ale
> >>
> >> Isn't the BSP configured as part of the hardware specification in the
> >> EDK? Xilinx provides a pre-written hardware specification exclusively
> >> for use with the xapp1026 exercises. I don't see why they would have
> >> neglected to include the completely necessary lwip packages in that..
> >> is there perhaps somewhere else I have to add these, as well?
> >
> > I agree that the flow is kind of confusing, since "BSP" means
> > different things to HW & SW. Xilinx has been trying to decouple the
> > HW dev from the SW dev in more recent versions of EDK, reflected by
> > XPS and SDK. The BSP in the HW case refers to what is physically
> > available on the target platform.
> >
> > The BSP you configure in the SDK is the software setup for the
> > platform -- OS / stand-alone, file system, other libraries (including
> > lwIP), etc. All you need to do to get your application access to the
> > lwIP library is to click the appropriate check box in the SW BSP in
> > the SDK (as long as your hardware is configured correctly, i.e. a
> > timer for the TCP callbacks has been included).
> >
> > M
>
> Michael is completely correct regarding the BSP above.
>
> However, the compilation problem that you have been seeing is related to
> the imported application projects, which are not configured correctly if
> you just import the projects into SDK. Somehow the inferred options are
> not picked up by the import.
>
> What you should do to get the compilation working is to manually specify
> the include folder in the compiler settings, and also specify the
> libraries and their search path in the linker settings.
>
> More specifically, right click on the project, select "C/C++ build
> settings", then on the left side select "C/C++ build" -> "Settings".
> Under compiler settings select "Directories" and add the include path
> from the BSP (or platform, as it is called in this XAPP).
>
> Then go to the linker settings below and select "Libraries". In the
> Libraries (-l) field add "lwip4" and "xilkernel", and at Library search
> path point to the lib folder from the BSP (or platform, as it is called
> in this XAPP).
>
> I always configure my projects manually so I didn't stumble upon this
> problem before, but I just tried the import approach and managed to
> compile the project with the additional steps as described above.
>
> HTH,
> ~ Ales

Thanks man, that sounds like it should work. I'll give it a try tomorrow
when I'm doing more of this.

Article: 151251
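[A note on the fix above: the GUI settings Ales describes amount to extra
include and library flags on the mb-gcc command lines. A minimal sketch of
what the equivalent invocations might look like; the ../microblaze_0 BSP
path here is an assumption about the generated platform layout, so
substitute whatever directory your BSP actually produces.]

    # Compile: -I points at the BSP include directory holding lwip/*.h
    mb-gcc -Wall -O0 -g3 -c -I../microblaze_0/include -mxl-soft-mul \
          -o dispatch.o ../dispatch.c

    # Link: -L points at the BSP lib directory; -l pulls in the two
    # libraries named in the thread
    mb-gcc dispatch.o -L../microblaze_0/lib -llwip4 -lxilkernel \
          -o sock_apps.elf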
On Thursday, March 17, 2011 8:38:16 PM UTC-4, Aditi wrote:
> Hi all,
>
> I am working on a Spartan-6 FPGA.
>
> I am using a DCM (DCM_CLKGEN primitive) to generate some clock rates
> that I want. Here is how I declare the DCM:
>
> DCM_CLKGEN g723clk_dcm (
>   .CLKIN    ( fpgaclk ),
>   .RST      ( dcm_rst_g723 ),
>   .LOCKED   ( lock_val_g723 ),
>   .CLKFX    ( codec_286m_dcm ),
>   .CLKFX180 ( codec_286m_180p )
> );
>
> defparam g723clk_dcm.CLKIN_PERIOD   = 50;
> defparam g723clk_dcm.CLKFX_MULTIPLY = 143;
> defparam g723clk_dcm.CLKFX_DIVIDE   = 10;
>
> // synthesis attribute CLKIN_PERIOD of g723clk_dcm is 50;
> // synthesis attribute CLKFX_MULTIPLY of g723clk_dcm is 143;
> // synthesis attribute CLKFX_DIVIDE of g723clk_dcm is 10;
>
> I am getting a warning like this:
>
> "The DCM, g723clk_dcm, has the attribute DFS_OSCILLATOR_MODE not set to
> PHASE_FREQ_LOCK. No phase relationship exists between the input clock
> and the CLKFX or CLKFX180 outputs of this DCM. Data paths between these
> clock domains must be constrained using FROM/TO constraints."
>
> Can somebody let me know what could be the reason for this warning?
> Am I doing anything wrong in declaring the DCM?
>
> Thanks,
> Regards,
> Aditi.

In your case you can probably ignore the warning. The output clock should
be considered asynchronous to the input clock. Given the fact that you
asked for a frequency generator, this does not seem worthy of a warning.
If, however, you want the output clock to line up with the input clock
exactly every 143 cycles, then you should do something about the warning.

If you're new to Xilinx FPGAs and their Core Generator and other IP
offerings, you should get used to seeing warnings. Xilinx's approach is to
warn about anything remotely interesting and then offer you a report
viewer with a filter.

-- Gabor

Article: 151252
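[If paths really do cross between the two domains, the tool is asking for
UCF FROM/TO constraints. A minimal sketch, assuming the clock nets are
named fpgaclk and codec_286m_dcm as in the instantiation above; the group
names and the 20 ns budget are placeholders, so use whatever your
synchronizer logic actually tolerates.]

    # Put each clock domain into a timing group.
    NET "fpgaclk"        TNM_NET = "TG_CLKIN";
    NET "codec_286m_dcm" TNM_NET = "TG_CLKFX";

    # Constrain paths crossing between the two domains, in each direction.
    TIMESPEC "TS_CLKIN_TO_CLKFX" = FROM "TG_CLKIN" TO "TG_CLKFX" 20 ns DATAPATHONLY;
    TIMESPEC "TS_CLKFX_TO_CLKIN" = FROM "TG_CLKFX" TO "TG_CLKIN" 20 ns DATAPATHONLY;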
On Mar 17, 9:30 am, Kolja Sulimma <ksuli...@googlemail.com> wrote:
> On 16 Mrz., 16:34, rickman <gnu...@gmail.com> wrote:
> > On Mar 14, 8:46 pm, Kolja Sulimma <ksuli...@googlemail.com> wrote:
> > > On 13 Mrz., 01:46, rickman <gnu...@gmail.com> wrote:
> > > > since a bad bitstream has potential of frying an FPGA.
> > >
> > > This argument is invalid.
> > > You can fry an FPGA with VHDL and vendor synthesis software.
> > > This has been demonstrated at the FPL conference a decade ago.
> >
> > It doesn't matter if there are other ways to fry a part. The point is
> > that the vendors exert control over the design software so that they
> > have control over this sort of problem. It doesn't matter if they
> > prevent you 100% from doing damage to the chips. They take
> > responsibility if you are using their tools.
>
> But how would documenting the bitstream format make this issue worse?
> I could still expect the vendor tools to be correct, couldn't I?
>
> Kolja

If the vendor supports users messing about in the bit files in ways that
the vendor has no control over, then they open themselves up to problems
that can not only cost them in returned parts, but can lead to problems
with their reputation. Sure, they can say all day long that the problem
was a user creating a bogus bit file, but reputations can be ruined by
less substantial events.

There is no good reason for users to want the bit stream details. Xilinx
pours tons of money into the tools. I doubt that an outside source would
ever be able to come close to what they do. Clearly there is little
market incentive for them to risk shooting themselves in the foot.

Rick

Article: 151253
On Mar 17, 1:11 pm, geobsd <geobsd...@gmail.com> wrote:
> On Mar 17, 2:30 pm, Kolja Sulimma <ksuli...@googlemail.com> wrote:
> > But how would documenting the bitstream format make this issue worse?
> > I could still expect the vendor tools to be correct, couldn't I?
> > Kolja
>
> It seems honesty is the problem, not only from buyers but also from
> makers, who can sell non-conforming products without the fear of an
> external certification tool!

If you are talking about the FPGA makers being afraid of users
"certifying" their products, that is pretty absurd. If their product
doesn't meet the spec, they can always "update" the spec to match. They
do that all the time when their chips first ship. It's called
"qualification".

Rick

Article: 151254
On Mar 17, 8:27 am, whygee <y...@yg.yg> wrote:
> Hi !
>
> - there is a difference between "I've seen it done once" and "I do it
> usually", between "it's possible" and "it's common practice". Xilinx
> does allow partial reconfiguration on the more expensive parts because
> the cheap parts go to standard applications where reconfiguration is
> not considered practical, too much of a problem for little gain, etc.

Actually, there is very little market justification for partial
reconfiguration in general. That is why it has taken Xilinx so long to
get it working. But from a user's perspective it is the low end parts
where it would be most useful. The only real need for partial
reconfiguration is to be able to use a smaller part. It would be the low
end parts, targeting the most cost sensitive applications, that would
never be built if they can't meet their price target. The high end parts
are used in apps with higher profit margins and so typically aren't so
concerned with optimal costs. In particular, the Virtex parts, which are
the ones that support the full-up partial reconfiguration, are not nearly
as cost effective as the Spartans.

I had an app for partial reconfiguration a long time back, about 10
years. Xilinx said it would be supported on the Spartan devices, but just
not before my product became obsolete! It would have enabled much more
flexible configuration. The original product had daughter cards with a
small FPGA to interface each one to the DSP. The second generation
product had a single FPGA and would have used partial configuration
(didn't even need the "re") to allow each module to have a separate
mini-bitstream to configure its interface within the FPGA. With four
sites and potentially a dozen different daughter card types, the number
of combinations was far too large to construct all of them. In the end
the lack of partial configuration required the FPGA bitstream to become a
custom part of the board for each customer. Not nearly as good a business
model.

> - so in practice, it seems to me that dynamic partial reconfiguration
> is a nice, but unused and marginally useful feature, and only one of
> the many benefits of FPGAs. I can do without, and many others do. Sure,
> we are engineers and solve real-life, industrial problems, we are not
> AI researchers :-)

How is partial reconfiguration at all like AI? PR has great potential
applications. Cypress has been selling it for years on their PSoCs, which
can cost as little as $1. Their programmable analog and digital sections
allow reconfiguration on the fly, so that the same circuitry can be a
data logger for 23 hours and 55 minutes and then become a modem for 5
minutes to upload the data collected. In fact, today I saw a viewgraph
showing that their PSoC 1 devices have moved from somewhere way down the
list into the top 10 or maybe the top 5 of CPUs sold. Not bad! Their new
PSoC 3 and PSoC 5 devices have the same on-the-fly reconfigurability, but
have much more powerful circuits.

Rick

Article: 151255
rickman <gnuarm@gmail.com> wrote:
(snip)
> If the vendor supports users messing about in the bit files in ways
> that the vendor has no control over, then they open themselves up to
> problems that can not only cost them in returned parts, it can lead to
> problems with their reputation. Sure, they can say all day long that
> the problem was a user creating a bogus bit file, but reputations can
> be ruined by less substantial events.

Yes, but someone could load random bits in and have the same problem.
If they then return it, the result is the same.

Besides, the vendor could provide a bit verifier that would verify that
the file was legal, however it was generated.

> There is no good reason for users to want the bit stream details.
> Xilinx pours tons of money into the tools. I doubt that an outside
> source would ever be able to come close to what they do. Clearly
> there is little market incentive for them to risk shooting themselves
> in the foot.

As I said before, there is some argument for the LUT (ROM) bits. I
suppose also for BRAM initialization, on devices that allow for that.
(Again, ROMs programmed after P&R.)

In the XC4000 days, I was considering designs that would also need to
change bits on the carry logic. I wanted a preprogrammed, but after P&R,
constant adder.

Otherwise, it would be nice to allow for open source tools, but not
really necessary. One can always do netlist generation for input to the
vendor tools, and maybe more than that.

-- glen

Article: 151256
On 18 Mrz., 06:32, rickman <gnu...@gmail.com> wrote:
> > > They take responsibility if you are using their tools.
> >
> > But how would documenting the bitstream format make this issue worse?
> > I could still expect the vendor tools to be correct, couldn't I?
>
> If the vendor supports users messing about in the bit files in ways
> that the vendor has no control over, then they open themselves up to
> problems that can not only cost them in returned parts, it can lead to
> problems with their reputation. Sure, they can say all day long that
> the problem was a user creating a bogus bit file, but reputations can
> be ruined by less substantial events.
>
> There is no good reason for users to want the bit stream details.
> Xilinx pours tons of money into the tools. I doubt that an outside
> source would ever be able to come close to what they do. Clearly
> there is little market incentive for them to risk shooting themselves
> in the foot.

I agree that there is little market incentive. Probably not even enough
to cover the cost of doing the documentation work. But I do not buy the
other arguments. CPU manufacturers get along well with documenting
features that can damage chips, as do manufacturers of many products.

Also, we have already established that the tool control only covers some
of the possible ways to damage the chip. Those that can be covered could
easily be checked by a design rule check provided by the manufacturer.
Those that can't will be present in both scenarios. Also, it is still
possible to document only the parts that cannot cause damage.

I do not expect the FPGA to have many problematic bits in the bitstream.
The need for many repeater buffers in the interconnect has probably next
to completely removed any interconnects with multiple drivers.

Kolja

Article: 151257
hi all

Rick, the point of external certification is to be able to know if the
devices are fully "working" and if they really are what you bought. For
their reputation, makers only have to make certified products. Makers'
software can work around bogus/non-conforming devices, and who will know?

As for the full spec not being needed: it becomes more needed as more
users use it.

Once again, what is really strange is that CPUs are open spec while fixed
in use, and FPGAs are the inverse. I hope FPGAs will be open spec and
more used than CPUs in the near future.

Article: 151258
On Mar 18, 2:47 am, glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:
> rickman <gnu...@gmail.com> wrote:
> (snip)
>
> > If the vendor supports users messing about in the bit files in ways
> > that the vendor has no control over, then they open themselves up to
> > problems that can not only cost them in returned parts, it can lead to
> > problems with their reputation. Sure, they can say all day long that
> > the problem was a user creating a bogus bit file, but reputations can
> > be ruined by less substantial events.
>
> Yes, but someone could load random bits in and have the same problem.
> If they then return it, the result is the same.
>
> Besides, the vendor could provide a bit verifier that would
> verify that the file was legal, however it was generated.

You don't seem to be able to put on your FPGA vendor hat. They aren't
worried about one chip. They are worried about situations where a
*significant* customer is programming units in production and has
problems. This has two problems. One is that it will create a problem
for them when the chips are returned, not so much the cost, but the
bother. The other is when they have to investigate and spend time
figuring out what is wrong with the customer's operation. If unapproved
third party software is in the loop, it makes things much harder for
them and they are concerned it will be much more frequent.

Yes, some software to verify that the bit file is good would be nice,
but they don't see a need to go to the trouble. The issue is not that it
is a lot of trouble, but that it requires more effort than they would
get benefit from. In releasing documentation of the bit stream, what is
the up side for an FPGA maker?

> > There is no good reason for users to want the bit stream details.
> > Xilinx pours tons of money into the tools. I doubt that an outside
> > source would ever be able to come close to what they do. Clearly
> > there is little market incentive for them to risk shooting themselves
> > in the foot.
>
> As I said before, there is some argument for the LUT (ROM) bits.
> I suppose also for BRAM initialization, on devices that allow
> for that. (Again, ROMs programmed after P&R.)

That is all supported by vendor supplied tools. No need for third party
tools. This is one of the areas where reverse engineering is so easy. So
if there is a use for an open source tool to do this, why hasn't one
been done at least to this extent?

> In the XC4000 days, I was considering designs that would also
> need to change bits on the carry logic. I wanted a preprogrammed,
> but after P&R, constant adder.
>
> Otherwise, it would be nice to allow for open source tools,
> but not really necessary. One can always do netlist generation
> for input to the vendor tools, and maybe more than that.

Bingo! Not necessary! FPGA vendors are about profit, just like all of
us. They are already over 50% a software company rather than a hardware
company (I was told this in terms of number of employees working on new
designs). They don't want to add more burden to their software teams. I
totally get that myself.

Rick

Article: 151259
On Mar 18, 8:44 am, geobsd <geobsd...@gmail.com> wrote:
> hi all
>
> Rick, the point of external certification is to be able to know if the
> devices are fully "working" and if they really are what you bought.
> For their reputation, makers only have to make certified products.
> Makers' software can work around bogus/non-conforming devices, and who
> will know?

Since when are chips "certified" by anyone other than the maker? I've
been doing this for over 30 years and I've never heard of anyone
bothering to "certify" a device until after they have reason to believe
there is a problem. When a vendor finds a problem they provide that info
as "errata" and users are warned how to avoid or minimize the result. If
a vendor gave me info on how to "certify" their parts, I think I would
avoid them like the plague. I want THEM to certify their parts.

> As for the full spec not being needed: it becomes more needed as more
> users use it.
>
> Once again, what is really strange is that CPUs are open spec while
> fixed in use, and FPGAs are the inverse. I hope FPGAs will be open spec
> and more used than CPUs in the near future.

I have no idea why you find this odd. CPUs are simple devices compared
to FPGAs. A CPU executes instructions sequentially with a handful in the
pipeline, controlling a few thousand points at the max, with only a much
smaller number operating at the same time. In an FPGA there are
literally millions of control points that are all operating at the same
time. This is very much more complex and there are very, very few
individuals who wish to deal with that level of complexity. Most of us
just want to get our work done.

I wish they would open it up too. But they don't, and until someone
gives them a reason to do so, they won't.

BTW, you haven't given any clue as to how you would generate a bitstream
if you had the spec. I think, even given the need to reverse engineer
the bitstream, this would be the hard part. For starters, you might
consider generating HDL or EDIF and running that through their tools to
test your concepts. I am pretty confident you are years away from
needing real time hardware to test on anyway. Instead of impulsively
buying parts you can't use, think about the overall project and consider
ways to reach your goals. If you eventually show promise, you may get an
FPGA vendor to work with you.

Rick

Article: 151260
On Mar 18, 3:52 pm, rickman <gnu...@gmail.com> wrote:
> Since when are chips "certified" by anyone other than the maker? I've
> been doing this for over 30 years and I've never heard of anyone
> bothering to "certify" a device until after they have reason to
> believe there is a problem. When a vendor finds a problem they
> provide that info as "errata" and users are warned how to avoid or
> minimize the result. If a vendor gave me info on how to "certify"
> their parts, I think I would avoid them like the plague. I want THEM
> to certify their parts.

A certification that only the seller can verify cannot be trusted.

> I have no idea why you find this odd. CPUs are simple devices
> compared to FPGAs. A CPU executes instructions sequentially with a
> handful in the pipeline, controlling a few thousand points at the max,
> with only a much smaller number operating at the same time. In an
> FPGA there are literally millions of control points that are all
> operating at the same time. This is very much more complex and there
> are very, very few individuals who wish to deal with that level of
> complexity. Most of us just want to get our work done.

The complexity has nothing to do with this. Lots of people use PCs
without knowing how they work. With FPGAs becoming better and better,
CPUs will become useless. When that happens, more and more people will
complain as I do. Free programming is the future.

> BTW, you haven't given any clue as to how you would generate a
> bitstream if you had the spec. I think, even given the need to reverse
> engineer the bitstream, this would be the hard part.

If I had the spec: chunks to replace NEON or other instructions would be
assembled by my scheduler (bye-bye context switch). For the AI project,
where a dynamic bitstream is a real need, it depends on the AI:
generated by a CPU or FPGA(s), with only the limits of the bitstream for
my model as the constraint.

> you might consider generating HDL or EDIF and running that through
> their tools to test your concepts. I am pretty confident you are
> years away from needing real time hardware to test on anyway. Instead
> of impulsively buying parts you can't use, think about the overall
> project and consider ways to reach your goals. If you eventually show
> promise, you may get an FPGA vendor to work with you.

whygee told me almost the same ;) but too late, I bought 5 Spartan-3Es
anyway. For the chunks, HDL plus a little work is OK. For the AI, HDL is
impossible; I will not live 1000K millennia! FPGAs are the basis of it.
I'll find a way ;)

Thanks Rick

Article: 151261
On Mar 17, 12:41 pm, "soonph87" <soonph87@n_o_s_p_a_m.hotmail.com> wrote:
> Hello, I am having a problem using DMA. When I run my code on the
> Altera DE2 board, the program gets stuck at "while (!txrx_done)". It
> seems like it is in an infinite loop. I don't know what happened.
> Please assist. Thank you in advance!
>
> my code:
>
> #include <stdio.h>
> #include <stdlib.h>
> #include <stddef.h>
> #include <string.h>
>
> #include <system.h>
> #include <io.h>
>
> #include <alt_types.h>
> #include "sys/alt_dma.h"
> #include "sys/alt_cache.h"
> #include "sys/alt_alarm.h"
> #include "alt_types.h"
>
> static volatile int txrx_done = 0;
>
> // callback function when DMA transfer done
> static void txrxDone(void * handle, void * data)
> {
>     txrx_done = 1;
> }
>
> void initMEM(int base_addr, int len)
> {
>     for (int i = 0; i < len; i++)
>     {
>         IOWR_8DIRECT(base_addr, i, i);
>     }
> }
>
> int main()
> {
>     printf("testing ssram & sdram : dma operation\n");
>
>     alt_16 buffer[10];
>     //memset((void *)SSRAM_0_BASE, 0x7a, 0x10); // writes byte by byte
>     initMEM(SDRAM_1_BASE, 0x10);
>     memset((void *)(SDRAM_1_BASE + 0x10), 0x33, 0x10);
>
>     printf("content of sdram_1: before DMA operation\n");
>     for (int i = 0; i < 0x10; i++)
>     {
>         printf("%d: %x\n", i, IORD_8DIRECT(SDRAM_1_BASE, i));
>     }
>
>     printf("content of sdram_1 (offset 0x10): before DMA operation\n");
>     for (int i = 0; i < 0x10; i++)
>     {
>         printf("%d: %x\n", i, IORD_8DIRECT(SDRAM_1_BASE + 0x10, i));
>     }
>
>     int rc; // request
>     alt_dma_txchan txchan;
>     alt_dma_rxchan rxchan;
>
>     void* tx_data = (void*)SDRAM_1_BASE;            /* pointer to data to send */
>     void* rx_buffer = (void*)(SDRAM_1_BASE + 0x10); /* pointer to rx buffer */
>
>     /* Create the transmit channel */
>     if ((txchan = alt_dma_txchan_open("/dev/dma_0")) == NULL)
>     {
>         printf("Failed to open transmit channel\n");
>         exit(1);
>     }
>
>     /* Create the receive channel */
>     if ((rxchan = alt_dma_rxchan_open("/dev/dma_0")) == NULL)
>     {
>         printf("Failed to open receive channel\n");
>         exit(1);
>     }
>
>     /* Post the transmit request */
>     if ((rc = alt_dma_txchan_send(txchan, tx_data, 0x10, NULL, NULL)) < 0)
>     {
>         printf("Failed to post transmit request, reason = %i\n", rc);
>         //exit(1);
>     }
>
>     /* Post the receive request */
>     if ((rc = alt_dma_rxchan_prepare(rxchan, rx_buffer, 0x10, txrxDone, NULL)) < 0)
>     {
>         printf("Failed to post read request, reason = %i\n", rc);
>         //exit(1);
>     }
>
>     /* wait for transfer to complete */
>     while (!txrx_done);
>     printf("Transfer successful!\n");
>
>     printf("content of sdram_1: after DMA operation\n");
>     for (int i = 0; i < 0x10; i++)
>     {
>         printf("%d: %x\n", i, IORD_8DIRECT(SDRAM_1_BASE, i));
>     }
>
>     printf("content of sdram_1 (offset 0x10): after DMA operation\n");
>     for (int i = 0; i < 0x10; i++)
>     {
>         printf("%d: %x\n", i, IORD_8DIRECT(SDRAM_1_BASE + 0x10, i));
>     }
>
>     return 0;
> }
>
> ---------------------------------------
> Posted through http://www.FPGARelated.com

To quote every embedded programmer in the world: "That bug is a hardware
problem."

OK, paraphrase, not quote. Mea culpa.

RK

Article: 151262
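[A sketch of a more defensive wait, since the bare "while (!txrx_done);"
gives no diagnostics when the transfer stalls. This is illustrative only:
the one-second timeout is arbitrary, and flushing the data cache with
alt_dcache_flush() before posting the requests is a guess at one common
culprit (stale cached buffers), not a confirmed diagnosis for this board.]

    #include <stdio.h>
    #include "sys/alt_alarm.h"
    #include "sys/alt_cache.h"

    /* Spin on the completion flag, but give up after roughly one second
     * so a stalled DMA gets reported instead of hanging the program. */
    static int wait_for_dma(volatile int *done_flag)
    {
        alt_u32 start = alt_nticks();
        while (!*done_flag) {
            if (alt_nticks() - start > alt_ticks_per_second()) {
                printf("DMA timed out - check that dma_0's interrupt is "
                       "connected and both DMA masters reach sdram_1\n");
                return -1;
            }
        }
        return 0;
    }

In the program above, one would call alt_dcache_flush(tx_data, 0x10) and
alt_dcache_flush(rx_buffer, 0x10) before posting the requests, and then
replace the busy-wait with:

    if (wait_for_dma(&txrx_done) == 0)
        printf("Transfer successful!\n");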
rickman wrote:
> BTW, you haven't given any clue as to how you would generate a
> bitstream if you had the spec.

This opens an interesting line of thought.

Since FPGAs are flexible, why not emulate an FPGA-inside-an-FPGA? Much
like vmware emulates a CPU on a CPU, one could model an (ideal and
minimalistic) FPGA architecture and use the vendor tools once (!) to
implement it for a particular chip. Then all (virtual) bitstream details
are known! Even better, if any problems or shortcomings become visible
during the development of the custom toolchain, the bitstream format and
features can be tweaked to provide a better fit.

With a cleverly chosen virtual architecture, a design could run with
relatively little overhead compared to a native bitstream. I'd say that
anything "less than a magnitude slower" qualifies as good enough. It
would make a modern part an equal or better bang for the buck compared to
older chips with officially documented bitstreams (like stone age Xilinx
or the Atmel AT40K, for example).

It is not only a working solution for the GP's problem. It would also be
a great (and necessary!) research vehicle for the HUGE undertaking of
developing a custom toolchain. After all, any particular architecture
will turn obsolete anyway before the project can finish successfully, and
being able to switch architectures is valuable.

Once the toolchain works, one can still add a backend for native
bitstreams. I'm sure either Xilinx themselves or bitstream hackers will
deliver the necessary details once the project is a tangible reality
(rather than trollish comments).

Best regards,
Marc

Article: 151263
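[For a flavor of what such a virtual architecture could look like at its
smallest grain, here is a minimal sketch (my own illustration, not
anything from Marc's post) of one overlay logic cell: a 4-input LUT whose
16 truth-table bits are loaded through a serial configuration chain, so
the "virtual bitstream" is simply the concatenation of these shift
registers across the virtual fabric.]

    // One logic cell of a hypothetical FPGA-on-FPGA overlay.
    // While cfg_en is high, the 16-bit truth table shifts in through
    // cfg_in; chaining cfg_out to the next cell's cfg_in forms the
    // configuration chain for the whole virtual fabric.
    module virtual_lut4 (
        input  wire       clk,
        input  wire       cfg_en,   // high while shifting configuration
        input  wire       cfg_in,   // serial configuration data in
        output wire       cfg_out,  // serial configuration data out
        input  wire [3:0] a,        // the LUT's four logic inputs
        output wire       y         // the LUT's logic output
    );
        reg [15:0] truth_table = 16'h0000;

        always @(posedge clk)
            if (cfg_en)
                truth_table <= {truth_table[14:0], cfg_in};

        assign cfg_out = truth_table[15];
        assign y       = truth_table[a];  // 16:1 mux picks the output
    endmodule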
On Mar 18, 6:18 pm, Marc Jet <jetm...@hotmail.com> wrote:
> rickman wrote:
> > BTW, you haven't given any clue as to how you would generate a
> > bitstream if you had the spec.
>
> This opens an interesting line of thought.
>
> Since FPGAs are flexible, why not emulate an FPGA-inside-an-FPGA?

Why not, but then not everything in the real FPGA can be used.

> Much like vmware emulates a CPU on a CPU, one could model an (ideal
> and minimalistic) FPGA architecture and use the vendor tools once (!)

I doubt the vendor tools would work for an unknown FPGA model.

> to implement it for a particular chip. Then all (virtual) bitstream
> details are known! Even better, if any problems or shortcomings
> become visible during the development of the custom toolchain, the
> bitstream format and features can be tweaked to provide a better fit.

It would take longer than just finding the bitstream spec for a real
model.

> It is not only a working solution for the GP's problem. It would also
> be a great (and necessary!) research vehicle for the HUGE undertaking
> of developing a custom toolchain. After all, any particular
> architecture will turn obsolete anyway before the project can finish
> successfully, and being able to switch architectures is valuable.

Why switch, when you can only fully change the bitstream spec, or it
doesn't stay an FPGA (arch != model)?

> Once the toolchain works, one can still add a backend for native
> bitstreams. I'm sure either Xilinx themselves or bitstream hackers
> will deliver the necessary details once the project is a tangible
> reality (rather than trollish comments).

Well, I have been complaining since I opened this thread about not
having the bitstream spec, so welcome to the club! I didn't see any
really trollish comments here!?

> Best regards,

Thanks Marc

Article: 151264
I developed a NIOS project some years ago for a client, and with the free
web edition I could create only encrypted VHDL files with SOPC Builder,
and a time-limited SOF file with Quartus, which worked as long as JTAG
was connected to a PC (and I think there was a one hour time limit).
Later the client used his licence to create the final SOF file.

Yesterday I downloaded the latest version of Quartus and NIOS II EDS, and
now it looks like SOPC Builder creates normal, unencrypted VHDL files for
the whole NIOS CPU (the 6000 lines of VHDL code for the CPU look like the
full instruction decoding logic etc.) and Quartus creates unlimited SOF
files. At least I was able to create a blinking LED project, and after
uploading it to my old T-Rex board, it worked for hours without the USB
connection.

So technically there is no reason to buy the IP-NIOS anymore. Of course,
I'm a good boy and I would buy IP-NIOS if I needed it for another
project, but why did Altera make it so easy to steal the IP, compared to
the encrypted and time-limited process before?

--
Frank Buss, http://www.frank-buss.de
piano and more: http://www.youtube.com/user/frankbuss

Article: 151265
Hi Chris,

Thanks for your response!

> If you want something more reliable, try "strace -e trace=process -o
> somefile -ff mycommand", it will run "mycommand" and dump into
> "somefile.<pid>" all the forks, execs, and exits done by the command
> and all its children (there will be a separate file for the syscalls
> done by each process). You now guarantee to capture every command line.

I used ps/pstree as outlined in
http://stackoverflow.com/questions/5213973/walking-a-process-tree/5311362#5311362
... I thought of strace - but I was not aware of the hint you outlined
above, so in the end I dropped strace... Awesome tip in any case -
thanks,

Cheers!

---------------------------------------
Posted through http://www.FPGARelated.com

Article: 151266
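[To make the quoted strace recipe concrete, a hypothetical session; the
"make all" target is just a stand-in for whatever command tree you want
to walk. One somefile.<pid> log is written per child process, and the
execve lines in them carry the full command lines.]

    strace -e trace=process -o somefile -ff make all
    grep -h execve somefile.*    # every program run, with its full argv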
Hi all,

I am speculating about starting an FPGA based project soon, which may
need to use RAM. So I'd like to leverage the (relative) abundance (and
thus, hopefully, low price) of desktop (or laptop) PC memory modules for
this purpose - and I was wondering if the community had any comments or
suggestions.

Primary points of interest are:

* What is, in your opinion, the cheapest type of DIMM (desktop?) and
SO-DIMM (laptop?) modules currently available (2010-2011)? From a quick
scan, I can find
** SO-DIMM 200-pin, 1x512 MB module (DDR2 SDRAM) 667 MHz
** DIMM 240-pin, 1x1 GB module (DDR3 SDRAM) 1066 MHz
.. for approx the same price (below 13 Euro per GB). I'm aware this may
be location dependent - but would the above represent the (currently)
cheapest/most abundant modules on the market? If not, what would you
consider cheapest/most abundant - and what would be a good reference
(website) to consult?

* Do the PCB sockets for the diverse module types differ significantly
in price (maybe SO-DIMM sockets usually cost twice as much as DIMM?)
Also, any (negative) experiences in soldering any of these by hand?

* When they say stuff like "DDR2 SO-DIMM memory modules commonly have
clock speeds from 200 MHz up to 800 MHz"
(http://en.wikipedia.org/wiki/SO-DIMM), I'd assume they talk of max
frequencies - does that still mean that I can clock the modules at a
_lower_ frequency (say 50 MHz)?

* Given that, say, "DDR2 is neither forward nor backward compatible with
either DDR or DDR3" (http://en.wikipedia.org/wiki/DDR2_SDRAM), obviously
there is a need for a dedicated hardware signaling interface for each
type of RAM. Is there something like a 'base interface' (say, maybe
something like SPI) which would be relatively easy to use, and that all
RAM modules would support (at the expense of reaching top speeds)?

* Assuming that there (most likely) isn't such a 'base interface' for
all RAM types, what (of the currently cheapest and most available types
of modules) would you feel is easiest to learn to interface with?

I'd also love to hear any other considerations in this type of usage
that I may have missed - as well as any links to tutorials/previous
projects using FPGAs and these types of PC RAM..

Looking forward to any responses - thanks in advance,
Cheers!

---------------------------------------
Posted through http://www.FPGARelated.com

Article: 151267
sdaau wrote:
>
> * What is, in your opinion, the cheapest type of DIMM (desktop?) and
> SO-DIMM (laptop?) modules currently available (2010-2011)?
> what would be a good reference (website) to consult?

dealram.com

Article: 151268
>Hi all,
>
>I am speculating about starting an FPGA based project soon, which may
>need to use RAM. So I'd like to leverage the (relative) abundance (and
>thus, hopefully, low price) of desktop (or laptop) PC memory modules for
>this purpose - and I was wondering if the community had any comments or
>suggestions.
>
>(snip)

If you want to start learning how to use DDR, DDR2 or DDR3 memory then
you are probably best off buying a cheap development board. This way you
don't have to worry, to start with, about the PCB layout, which can be
quite tricky for a novice.

The interface for all three DDR memories is similar but not the same.
The newer the memory technology, the faster it can be clocked, and the
faster the clocking, the harder the design. I believe you can clock them
slower by disabling the PLL, but it's not something I have done.

Xilinx has a product called MIG that will generate a memory controller
for you, but you still really need to know how the memory operates. So I
would pick up a Micron data sheet or find a tutorial and read that.

Jon

---------------------------------------
Posted through http://www.FPGARelated.com

Article: 151269
On Mar 19, 10:03 am, "sdaau" <sd@n_o_s_p_a_m.n_o_s_p_a_m.imi.aau.dk>
wrote:
> Hi all,
>
> I am speculating about starting an FPGA based project soon, which may
> need to use RAM. So I'd like to leverage the (relative) abundance (and
> thus, hopefully, low price) of desktop (or laptop) PC memory modules
> for this purpose - and I was wondering if the community had any
> comments or suggestions.
>
> (snip)

Based on some of the questions you are asking, it appears that you are a
newbie to working with DRAM. Each version of DRAM has its own
characteristics, always optimized for transfer speed. SDRAM transfers
one word on each clock cycle. DDR SDRAM transfers two words on each
clock cycle. DDR2 transfers four words on each clock cycle, and I don't
know for sure, but I expect DDR3 transfers eight words on each clock
cycle. There are also electrical differences, because typical TTL/CMOS
levels just don't cut it anymore at the rates data moves in and out of
these devices.

I don't think you are going to find good information for a newbie on
using SDRAM modules. I recall that I looked once and didn't find the
info in data sheets and the like, as I could for individual RAM chips.
So for starters, you might want to use RAM chips and not modules. Also
for starters, I would suggest that you work with SDRAM instead of the
more complex, faster devices. Once you have SDRAM under your belt, the
others will not be such a large step.

Rick

Article: 151270
"rickman" <gnuarm@gmail.com> a écrit: > Based on some of the questions you are asking, it appears that you are > a newbie to working with DRAM. Each version of DRAM has its own > characteristics, always optimized for transfer speed. SDRAM transfers > one word on each clock cycle. DDR SDRAM transfers two words on each > clock cycle. DDR2 transfers four words on each clock cycle and I > don't know for sure, but I expect DDR3 transfers eight words on each > clock cycle. There are also electrical differences because typical > TTL/CMOS levels just don't cut it anymore at the rates data moves in > and out of these devices. > > I don't think you are going to find good information for a newbie in > using SDRAM modules. I recall that I looked once and didn't find the > info in data sheets and such like I could find for individual RAM data > sheets. So for starters, you might want to use RAM chips and not > modules. Also for starters, I would suggest that you work with SDRAM > instead of the more complex faster devices. Once you have SDRAM under > your belt, the others will not be such a large step. > Hi, Yet another newbie question: Is SDRAM fast enough to generate a 720p or 1024p video stream (VGA or DVI output) using a Spartan-3 or -3E FPGA ?Article: 151271
On 15 Mrz., 02:28, Ed McGettigan <ed.mcgetti...@xilinx.com> wrote:
> > You can fry an FPGA with VHDL and vendor synthesis software.
> > This has been demonstrated at the FPL conference a decade ago.
>
> I am quite surprised about this. Can you provide any additional
> material on how this was achieved?
>
> There aren't any scenarios, other than internal tri-state contention,
> that I can come up with to make this happen with a proven tool chain.

It was at FPL 1999 that someone presented this as a side note in a
presentation about some other topic. They said they were able to damage
Altera FPGAs by instantiating ring counters. This resulted in spontaneous
applause from the Xilinx crowd in the audience, but the presenter made
clear that this attack also applies to Xilinx FPGAs and that it is
computationally infeasible to detect such attacks.

I just browsed through the list of papers of that conference,
http://www.informatik.uni-trier.de/~ley/db/conf/fpl/fpl1999.html
but can't remember which paper it was.

There were some prominent people from Xilinx present (Peter Alfke, Steve
Trimberger, Steve Guccione and some other Steve). Maybe one of them
remembers.

Kolja Sulimma
cronologic
hi kolja

1999 looks out of date! Anyway, if people want to damage FPGAs they
bought, they will get better results with an axe! I asked for the
bitstream spec precisely so I don't have to spend time figuring out how
not to damage my FPGAs. If you insist, they will sell undamageable FPGAs
(that simply don't work at all).
On Sat, 19 Mar 2011 09:03:50 -0500, sdaau wrote:
> Hi all,
>
> I am speculating about starting an FPGA based project soon, which may
> need to use RAM. So I'd like to leverage the (relative) abundance (and
> thus, hopefully, low price) of desktop (or laptop) PC memory modules
> for this purpose - and I was wondering if the community had any
> comments or suggestions.
>
> (snip)

The differences in price between desktop and laptop DIMMs are small;
just look on Newegg to get an idea of what prices are like. Assuming you
are using an FPGA that can support it, you want to use DDR3, because
that's the current generation of DRAM and it's the cheapest with the
highest bandwidth. However, not all FPGAs can support it, so use the
most recent generation that the target FPGA can support.
On Mar 19, 10:48 am, Kolja Sulimma <ksuli...@googlemail.com> wrote:
> On 15 Mrz., 02:28, Ed McGettigan <ed.mcgetti...@xilinx.com> wrote:
> > > You can fry an FPGA with VHDL and vendor synthesis software.
> > > This has been demonstrated at the FPL conference a decade ago.
> >
> > I am quite surprised about this. Can you provide any additional
> > material on how this was achieved?
> >
> > There aren't any scenarios, other than internal tri-state contention,
> > that I can come up with to make this happen with a proven tool chain.
>
> It was at FPL 1999 that someone presented this as a side note in a
> presentation about some other topic. They said they were able to
> damage Altera FPGAs by instantiating ring counters. This resulted in
> spontaneous applause from the Xilinx crowd in the audience, but the
> presenter made clear that this attack also applies to Xilinx FPGAs and
> that it is computationally infeasible to detect such attacks.
>
> I just browsed through the list of papers of that conference,
> http://www.informatik.uni-trier.de/~ley/db/conf/fpl/fpl1999.html
> but can't remember which paper it was.
>
> There were some prominent people from Xilinx present (Peter Alfke,
> Steve Trimberger, Steve Guccione and some other Steve). Maybe one of
> them remembers.
>
> Kolja Sulimma
> cronologic

What you remember as a ring counter was probably a ring oscillator that
was used to get a high frequency clock to drive the rest of the FPGA
logic at a high toggle rate. Without system level considerations and
monitoring, the power requirements could easily push the device into a
thermal region that would damage it. The HDL and the bitstream itself
are perfectly valid; it is the thermal management that is to blame for
the destruction. The failure points would likely be widespread through
the device.

This failure mode isn't the same as a badly constructed bitstream
violating DRC rules and creating device damage that would be difficult
to detect, as it may only exist at a few points within the device.

Recent Xilinx FPGAs with the System Monitor feature (renamed to XADC in
the 7 Series families) have the ability to shut down the device if the
junction temperature exceeds user defined values. This can be used to
prevent this type of thermal damage, but it would not be able to prevent
localized damage due to bad bitstreams.

Ed McGettigan
--
Xilinx Inc.