Both A[ltera] and X[ilinx] have what appear to be fairly similar PCIe cores
for the S4 and V6 respectively. The A version uses streaming Avalon, the X
version streaming AXI. I need x8 PCIe 2.0 with DMA support; I'm agnostic
about A vs X and comfortable with both. Has anyone done a comparison between
the two? What are the key differences? I'd like to know what experiences
people have had with these cores, and what problems, if any. Also, do they
provide Linux drivers, and if so, how complete are they?

p.s. Sorry for posting this twice; I inadvertently attached it to another
thread.

From: Paul Uiterlinden <puiterl@notaimvalley.nl>
Subject: Re: Wow! No Testbench Wow!
Newsgroups: comp.arch.fpga,comp.lang.vhdl,comp.lang.verilog
Followup-To: comp.lang.verilog
Date: Thu, 03 Mar 2011 22:56:49 +0100
Organization: AimValley

Jonathan Bromley wrote:

> On Thu, 27 Jan 2011 06:22:19 -0800 (PST), rickman wrote:
>
>> Now I am learning how Verilog allows hierarchical path references to
>> signals for test benches. This is awesome!!!
>
> Not as awesome as the ability to call tasks (procedures) in a module,
> from another module. That's just the neatest thing ever, for stimulus
> generation. This little example should give you a flavour of what you
> can do:
>
> `timescale 1ns/1ns
>
> module simulatedUartTransmitter(output reg TxD);
>   time bitTime;
>   //
>   task setBitTime(input time newBitTime);
>     bitTime = newBitTime;
>   endtask
>
>   task sendChar(input [7:0] char);
>     begin
>       // send start bit
>       TxD = 0;
>       // send eight data bits, LSB first
>       repeat (8) begin
>         #(bitTime) TxD = char[0];
>         char = char >> 1;
>       end
>       // send stop bit
>       #(bitTime) TxD = 1;
>       #(bitTime);
>     end
>   endtask
>   //
>   initial TxD = 1; // line idles in "Mark" state
>   //
> endmodule
>
> module justTryThisOne;
>   // connections
>   wire serial_TxD;
>   // stimulus generator instance
>   simulatedUartTransmitter txGenerator(.TxD(serial_TxD));
>   //
>   // There's no DUT in this example, but you can still
>   // see the signal generator at work.
>   //
>   // code to generate some stimulus
>   initial begin
>     txGenerator.setBitTime(104000); // 9600Bd, roughly
>     #1_000_000;                     // idle awhile before starting
>     txGenerator.sendChar("h");      // ask the sig-gen...
>     txGenerator.sendChar("i");      // ...to send some data
>     txGenerator.sendChar("!");      // ...at our request
>     #1_000_000;                     // idle awhile at the end
>   end
> endmodule
>
> Utterly fantastic when you want to do stuff like mimicking the behaviour
> of a CPU in your testbench. Just write a module that can generate read or
> write cycles on a bus, then connect an instance of it to your DUT and get
> it to do accesses in the same way you'd expect your CPU to behave.
>
> Apologies if this is stuff you've seen already. It's so useful that I
> couldn't resist sharing the example (again).

Thanks for sharing. I've kept this for reference, as I don't normally use
Verilog but want to keep up to date as much as possible.

It is fantastically simpler than the hoops you must jump through when
implementing this in VHDL. Been there, done that (or rather: doing that).
And I am saying this as a VHDL aficionado.

One question though: if the task sendChar is called concurrently from
different procedural blocks in a way that the calls overlap, I think the
result would be a great mess (I am saying this as a not-so-great lover of
how Verilog works). Is there a simple way to deal with collisions like
that? Or is the simplicity then mostly lost?

--
Paul Uiterlinden
www.aimvalley.nl
e-mail address: remove the not.

Article: 151076
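One low-tech answer to the collision question, sketched below, is to let the
transmitter module serialize its own callers: sendChar waits on a busy flag
(a crude lock) before driving the line and releases it after the stop bit.
This is only an illustration, not something from the original thread, and it
still has a hole: two callers released in the same simulation timestep can
both slip past the wait. Funnelling all characters through one procedural
block, or using a SystemVerilog semaphore, closes that gap.

module simulatedUartTransmitterLocked(output reg TxD);
  time bitTime;
  reg  busy = 0;                    // crude lock guarding the serial line

  task setBitTime(input time newBitTime);
    bitTime = newBitTime;
  endtask

  task sendChar(input [7:0] char);
    begin
      wait (!busy);                 // block while another caller owns the line
      busy = 1;
      TxD = 0;                      // start bit
      repeat (8) begin
        #(bitTime) TxD = char[0];   // data bits, LSB first
        char = char >> 1;
      end
      #(bitTime) TxD = 1;           // stop bit
      #(bitTime);
      busy = 0;                     // release the line
    end
  endtask

  initial TxD = 1;                  // line idles in "Mark" state
endmodule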
In comp.arch.fpga, Jon Elson <jmelson@wustl.edu> wrote:

> On 03/02/2011 04:26 PM, Stef wrote:
>
>> In the meantime I have found the parallel cable III schematic. It's so
>> simple, I'm tempted to build it just for testing the above. In fact, it
>> looks very familiar, a bit like the JTAG wiggler interface. Must have one
>> somewhere lying around. Hmm, just googled it, it's a bit different.
>
> Yes, I have repaired ours several times when ESD or whatever popped the
> one chip in there. Just a voltage level translator and buffer.

Level translator? Are you sure you have the parallel cable III? The
schematic I got from the Xilinx website only uses two 74HC125 buffers, no
translators. The only translation I see is some series resistors and
probably the assumption that dropping the 3V3 twice over a 1N5817 is just
enough to get a high at the printer port.

--
Stef    (remove caps, dashes and .invalid from e-mail address to reply by mail)

The turtle lives 'twixt plated decks
Which practically conceal its sex.
I think it clever of the turtle
In such a fix to be so fertile.
		-- Ogden Nash

Article: 151077
Hello,

I am currently working on a school project that involves interfacing an
external video device with my Nexys2-1200 (XC3S1200E-4FG320) development
board. The device has a resolution of 256x384 with 18-bit RGB video data
per pixel (6 bits per color).

To start, I took the 18-bit data and cut it down significantly to 8-bit
color (R=3 bits, G=3 bits, B=2 bits). This allowed me to use the built-in
VGA port on the Nexys2. I then created 49152 bytes of BlockRAM in a
"Simple Dual-Port" configuration using the CORE Generator in Xilinx ISE.
This is enough RAM to hold the top half of the screen on my device (so
256x192). One port is used strictly for writing to RAM and the other port
is strictly for reading from RAM. Using all this, I got the top half of
the screen to output through the VGA port. The 256x192 is sitting at the
top-left corner of the VGA window and the rest is black. I'm actually
surprised it works, haha.

Anyway, I would like to take the project a step further and use the entire
screen (the full 256x384) along with a higher color resolution of 16 bits
per pixel (R=5 bits, G=6 bits, B=5 bits). My problem is I don't know how
to effectively use the external 16MB Micron PSDRAM (MT45W8MW16BGX-708) for
this situation, since the BlockRAM won't cut it. My VGA pixel clock is
25MHz (40ns), and the external clock to latch the pixel data from my
device is 5.6MHz (180ns). If I use the Micron RAM in the default
asynchronous mode, then the read/write access times are 70ns. This is
obviously too slow for my pixel clock, but there are several different
ways to operate that Micron PSDRAM. There is also free time during the
blanking intervals of the video that I can work with.

I guess my question is: what is the general strategy to create a video
framebuffer using an external RAM module like the one I have? Is it even
possible with this specific chip, or do I need something faster and with
dual ports?

Thanks for your help, and sorry if I'm not being too clear about anything.
Just let me know and I'll clarify.

Link to datasheet: http://www.micron.com/get-document/?documentId=444

---------------------------------------
Posted through http://www.FPGARelated.com

Article: 151078
> Anyway, I would like to take the project a step further and use the
> entire screen (the full 256x384) along with a higher color resolution of
> 16 bits per pixel (R=5 bits, G=6 bits, B=5 bits). My problem is I don't
> know how to effectively use the external 16MB Micron PSDRAM
> (MT45W8MW16BGX-708) for this situation since the BlockRAM won't cut it.
> <snip>
> I guess my question is, what is the general strategy to create a video
> framebuffer using an external RAM module like the one I have? Is it even
> possible with this specific chip or do I need something faster and with
> dual ports?

I think it should be fine. Use the MIG to generate an interface for your
external RAM. Feed the incoming video into the MIG's FIFO and it'll write
using burst mode. The 70 ns latency is only for switching to a new page;
if you feed in a whole row of sequential writes this won't be such a big
deal, and you can clock the RAM as fast as it'll go. Then read it back
during the blanking period, or even in between writes if your calculations
show that you'll have enough time.

Joel

Article: 151079
Hi,

I am searching for a cheap FPGA board (with an Altera or Xilinx FPGA) that
has a PCIe or ExpressCard connector (x1 is enough). I just need DDR(1/2/3)
SDRAM, a GPIO connector, and maybe USB-JTAG.

If anyone knows of a board like the one I've described above, please tell
me!

Thank you in advance!

Regards,
Peter

Article: 151080
Gabor <gabor@alacron.com> writes:

> I thought I should try this with Virtex 5, since it
> has larger LUTs and should therefore greatly reduce
> the required logic. The results were less than
> dramatic. XST still ends up with 65 LUTs for V5.
>
> So I tried again with V6. As far as I know the V6
> has a similar LUT to the V5, but suddenly XST
> gives me only 35 LUTs (I checked other resources
> to be sure it didn't also use DSP blocks). So
> either V6 has more flexible carry logic, or (more
> likely) XST has been tuned up a bit to get better
> results with V6 and the new optimization is not
> applied to the older technology.

XST for the "6" families uses a whole new parser. IIRC you can enable it
for older families (as an unsupported option) with a command-line switch.
Although I'm not entirely sure why a new *parser* would help the actual
synthesis; as far as I know it just enables XST to parse a larger subset
of the VHDL language.

Cheers,
Martin

--
martin.j.thompson@trw.com
TRW Conekt - Consultancy in Engineering, Knowledge and Technology
http://www.conekt.co.uk/capabilities/39-electronic-hardware

Article: 151081
When JTAG is used:

1. Is INIT_B driven LOW by the FPGA?
2. Is INIT_B polled by the FPGA to delay config?

The docs are a bit fuzzy about this.

Article: 151082

When JTAG is used:

1. Is INIT_B driven LOW by the FPGA?
2. Is INIT_B polled by the FPGA to delay config?

The docs are a bit fuzzy about this. The questions are for Spartan 3A.

Article: 151083
Peter

I don't know if it meets your idea of cheap, but have a look at
http://www.enterpoint.co.uk/raggedstone/raggedstone2.html. The
Raggedstone2 Spartan-6 development board comes as standard with a Xilinx
Spartan-6 XC6SLX45T FPGA. This board is also likely to be offered with a
bigger FPGA, but probably not for a month or two. The on-board USB is only
a serial link, but we do offer a USB programming cable option if that is
what you need. We do offer student and academic discounts on this board.

John Adair
Enterpoint Ltd.

On Mar 4, 6:56 am, "dormanpet...@gmail.com" <dormanpet...@gmail.com>
wrote:
> Hi,
>
> I am searching for a cheap fpga board (with Altera or Xilinx FPGA)
> that has PCI-E or ExpressCard connector (x1 is high enough).
> <snip>

Article: 151084
On Mar 2, 12:38 pm, a s <nospa...@gmail.com> wrote:
> On Mar 2, 5:52 pm, Gabor <ga...@alacron.com> wrote:
>
> > I didn't catch which device you are targeting, but I
> > decided to try this myself with XST and Spartan 3A,
> > using Verilog to see if there are any significant
> > differences in synthesis performance.
>
> I am targeting Virtex4FX.
>
> > Here's the code:
> >
> > module count_bits
> > #(
> >   parameter IN_WIDTH = 32,
> >   parameter OUT_WIDTH = 6
> > )
> > (
> >   input wire  [IN_WIDTH-1:0]  data_in,
> >   output reg  [OUT_WIDTH-1:0] data_out
> > );
> >
> > always @*
> > begin : proc
> >   integer i;
> >   integer sum;
> >   sum = 0;
> >   for (i = 0; i < IN_WIDTH; i = i + 1) sum = sum + data_in[i];
> >   data_out = sum;
> > end
> >
> > endmodule
> >
> > And the results for the 32-bit case (XST):
> >
> > Number of Slices:            41  out of   1792    2%
> > Number of 4 input LUTs:      73  out of   3584    2%
> >
> > which is very close to your original unrolled result.
>
> I get the same results with XST targeting V4.
>
> But it's really interesting how XST produces better results
> with Verilog than with VHDL for basically exactly the same input.
>
> Running your module through Synopsys results again
> in a seemingly "optimum" 57 LUTs and 34 slices.
>
> I find it pretty amusing how many options we have already come up with
> for such a "basic" problem as counting the ones in a word. ;-)
>
> Regards

Eight years ago (Sept/Oct 2003), we went through this exercise in the
thread "Counting Ones" (I was posting as JustJohn back then, not John_H).
See that thread for some ASCII art of the trees. I ended up with the
following VHDL function, which produces the "optimum" 55 4-input LUTs for
a 32-bit vector input. I haven't seen anything better yet. I liked Andy's
recursion suggestion; it'll take some thought to figure out how to
auto-distribute the carry-in bits to the adders. Yesterday, Gabor posted
35 6-input LUTs. Gabor, what code did you use? I think a nice challenge
for the C.A.F. group mind is to beat that.

John L. Smith

-- This function counts bits = '1' in a 32-bit word, using a tree
-- structure with Full Adders at the leafs for "minimum" logic utilization.
function vec32_sum2( in_vec : in UNSIGNED ) return UNSIGNED is
  type FA_Arr_Type is array ( 0 to 9 ) of UNSIGNED( 1 downto 0 );
  variable FA_Array  : FA_Arr_Type;
  variable result    : UNSIGNED( 5 downto 0 );
  variable Leaf_Bits : UNSIGNED( 2 downto 0 );
  variable Sum3_1    : UNSIGNED( 2 downto 0 );
  variable Sum3_2    : UNSIGNED( 2 downto 0 );
  variable Sum3_3    : UNSIGNED( 2 downto 0 );
  variable Sum3_4    : UNSIGNED( 2 downto 0 );
  variable Sum3_5    : UNSIGNED( 2 downto 0 );
  variable Sum4_1    : UNSIGNED( 3 downto 0 );
  variable Sum4_2    : UNSIGNED( 3 downto 0 );
  variable Sum5_1    : UNSIGNED( 4 downto 0 );
begin
  for i in 0 to 9 loop
    Leaf_Bits := in_vec( 3 * i + 2 downto 3 * i );
    case Leaf_Bits is
      when "000"  => FA_Array( i ) := "00";
      when "001"  => FA_Array( i ) := "01";
      when "010"  => FA_Array( i ) := "01";
      when "011"  => FA_Array( i ) := "10";
      when "100"  => FA_Array( i ) := "01";
      when "101"  => FA_Array( i ) := "10";
      when "110"  => FA_Array( i ) := "10";
      when others => FA_Array( i ) := "11";
    end case;
  end loop;
  Sum3_1 := ( "0" & FA_Array( 0 ) ) + ( "0" & FA_Array( 1 ) );
  Sum3_2 := ( "0" & FA_Array( 2 ) ) + ( "0" & FA_Array( 3 ) );
  Sum3_3 := ( "0" & FA_Array( 4 ) ) + ( "0" & FA_Array( 5 ) );
  Sum3_4 := ( "0" & FA_Array( 6 ) ) + ( "0" & FA_Array( 7 ) )
            + ( "00" & in_vec( 30 ) );
  Sum3_5 := ( "0" & FA_Array( 8 ) ) + ( "0" & FA_Array( 9 ) )
            + ( "00" & in_vec( 31 ) );
  Sum4_1 := ( "0" & Sum3_1 ) + ( "0" & Sum3_2 );
  Sum4_2 := ( "0" & Sum3_3 ) + ( "0" & Sum3_4 );
  Sum5_1 := ( "0" & Sum4_1 ) + ( "0" & Sum4_2 );
  result := ( "0" & Sum5_1 ) + ( "000" & Sum3_5 );
  return result;
end vec32_sum2;

Article: 151085
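As an aside on the recursion suggestion mentioned above: in Verilog (the
language of Gabor's count_bits) the divide-and-conquer tree can be written
directly as a recursive generate. The sketch below is only an illustration
of that structure; the module name and parameters are mine, it makes no
attempt at the carry-in distribution trick (so it won't match the
hand-built 55-LUT function by itself), and some synthesis tools of that
vintage are unhappy with recursive instantiation.

module popcount #(
  parameter W  = 32,
  parameter OW = 6                  // wide enough to hold the count 0..W
)(
  input  wire [W-1:0]  din,
  output wire [OW-1:0] dout
);
  generate
    if (W == 1) begin : g_leaf
      assign dout = din[0];
    end else begin : g_node
      localparam WL = W / 2;
      localparam WR = W - WL;
      wire [OW-1:0] cnt_l, cnt_r;
      // count each half, then add the two partial counts
      popcount #(.W(WL), .OW(OW)) u_left  (.din(din[WL-1:0]), .dout(cnt_l));
      popcount #(.W(WR), .OW(OW)) u_right (.din(din[W-1:WL]), .dout(cnt_r));
      assign dout = cnt_l + cnt_r;
    end
  endgenerate
endmodule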
On 03/03/2011 05:27 PM, Stef wrote:

> Level translator? Are you sure you have the parallel cable III? The
> schematic I got from the Xilinx website only uses two 74HC125 buffers,
> no translators. The only translation I see is some series resistors and
> probably the assumption that dropping the 3V3 twice over a 1N5817 is
> just enough to get a high at the printer port.

Yes, they use the HC125 plus some passives as a really cheap level
translator that is pretty insensitive to odd power supply voltages.

Jon

Article: 151086
On Friday, March 4, 2011 1:18:10 AM UTC-5, Joel Williams wrote:

> I think it should be fine. Use the MIG to generate an interface for your
> external RAM. Feed the incoming video into the MIG's FIFO and it'll
> write using burst mode. The 70 ns latency is only for switching to a new
> page; if you feed in a whole row of sequential writes this won't be such
> a big deal, and you can clock the RAM as fast as it'll go. Then read it
> back during the blanking period, or even in between writes if your
> calculations show that you'll have enough time.
>
> Joel

Correct me if I'm wrong, but the last time I looked, MIG only supported
the standard DDR and DDR2 parts for Spartan 3, not "specialty" parts like
Cellular PSDRAM. Does the Nexys2 kit come with some other IP to talk to
the PSDRAM? Or do they just assume that it's a simple enough interface to
roll your own?

In burst mode it should certainly be more than fast enough to do what you
want. You could even intersperse reading and writing, as long as you keep
one direction going long enough to keep up the required throughput rate.
Just buffer the reads and writes with block RAM. You need some sort of
simple arbiter to switch between the read and write processes at a
"burst" level. Then the camera input can be effectively asynchronous to
the VGA screen refresh.

-- Gabor

Article: 151087
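For a rough feasibility check with the numbers quoted in this thread: the
camera writes about 5.6 MHz * 2 bytes = 11 MB/s and the display reads
25 MHz * 2 bytes = 50 MB/s during active video, while the PSRAM in
synchronous burst mode moves on the order of 130 to 160 MB/s if it can be
clocked somewhere in the 66 to 80 MHz range (check the -708 speed grade
for the real limit), so interleaving bursts leaves plenty of margin. Below
is a minimal sketch of the kind of burst-level arbiter Gabor describes. It
is an illustration only; the FIFO and controller signal names
(wr_fifo_level, rd_fifo_space, mem_start, mem_done) are invented here, not
part of any Digilent or Xilinx IP.

// Grants the PSRAM to whichever side has a full burst pending, with
// writes taking priority so camera data is never dropped.
module burst_arbiter #(
  parameter BURST_LEN = 16            // words per PSRAM burst
)(
  input  wire       clk,
  input  wire       rst,
  input  wire [7:0] wr_fifo_level,    // camera words waiting to be written
  input  wire [7:0] rd_fifo_space,    // free space in the VGA read FIFO
  input  wire       mem_done,         // controller finished current burst
  output reg        mem_start,        // kick off a burst
  output reg        mem_write         // 1 = write burst, 0 = read burst
);
  localparam IDLE = 1'b0, BUSY = 1'b1;
  reg state;

  always @(posedge clk) begin
    if (rst) begin
      state     <= IDLE;
      mem_start <= 1'b0;
      mem_write <= 1'b0;
    end else begin
      mem_start <= 1'b0;
      case (state)
        IDLE: begin
          if (wr_fifo_level >= BURST_LEN) begin      // camera first
            mem_write <= 1'b1;
            mem_start <= 1'b1;
            state     <= BUSY;
          end else if (rd_fifo_space >= BURST_LEN) begin
            mem_write <= 1'b0;
            mem_start <= 1'b1;
            state     <= BUSY;
          end
        end
        BUSY: if (mem_done) state <= IDLE;
      endcase
    end
  end
endmodule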
Stefano,

I would _strongly_ recommend stubbing out the top level with all of the
pins in the design, especially the "picky" ones that you pointed out. It
may take some time up front, but well worth it not to get burned with a
bad pin placement that cripples your design once you get boards back. You
should not need the entire design completed, but you will need some logic
in place to ensure that things don't get synthesized out by the tools.

Regarding the comment about clocking the ADC with an FPGA output... This
assumes that the FPGA-driven clock drives the sample rate of the ADC.
Depending on what you need, performance-wise, this may still be fine.
Something to keep in mind though, for sure.

Good luck!

--
Mike Shogren
Director of FPGA Development
Epiq Solutions
www.epiq-solutions.com

Article: 151088
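A sketch of the kind of pin stub Mike describes follows; the pin names are
invented for illustration and are not from his flow. The idea is to give
every input a registered path to some output so the tools keep all of the
I/Os, letting you run a complete, pin-accurate place-and-route long before
the real logic exists.

module pinout_stub (
  input  wire        clk,
  input  wire [15:0] adc_data,     // example pin groups: substitute
  input  wire [3:0]  buttons,      // your real pin list here
  output reg  [7:0]  dac_data,
  output reg  [3:0]  leds
);
  reg [19:0] pins_q;               // collects every input

  always @(posedge clk) begin
    pins_q   <= {adc_data, buttons};
    dac_data <= pins_q[7:0] ^ pins_q[15:8];   // arbitrary logic, just
    leds     <= pins_q[19:16];                // enough to keep the pins
  end                                         // from being optimized away
endmodule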
Peter,

We bought a board a while back from PLDA (who also sell PCIe IP to work on
the board) with an Altera Arria GX device on it:
http://www.plda.com/prodetail.php?pid=158

I believe it was about $1k, but maybe they have student discounts.

Regards,
--
Mike Shogren
Director of FPGA Development
Epiq Solutions
www.epiq-solutions.com

Article: 151089

> We bought a board a while back from PLDA (who also sell PCIe IP to work
> on the board) with an Altera Arria GX device on it:
> http://www.plda.com/prodetail.php?pid=158

I should have mentioned the Xilinx SP605 for $500 (I forgot that it was
PCIe since I have always used it stand-alone).

--
Mike Shogren
Director of FPGA Development
Epiq Solutions
http://www.epiq-solutions.com

Article: 151090
In comp.arch.fpga, Jon Elson <jmelson@wustl.edu> wrote:

> On 03/03/2011 05:27 PM, Stef wrote:
>
>> Level translator? Are you sure you have the parallel cable III? The
>> schematic I got from the Xilinx website only uses two 74HC125 buffers,
>> no translators. The only translation I see is some series resistors
>> and probably the assumption that dropping the 3V3 twice over a 1N5817
>> is just enough to get a high at the printer port.
>
> Yes, they use the HC125 plus some passives as a really cheap level
> translator that is pretty insensitive to odd power supply voltages.

Ah, I got the impression you meant an explicit translator chip. But indeed
the function of the circuit is some level conversion. :-)

Just tried an even simpler version with resistors only (had no 74HC125 and
1N5817 available). That sort of worked: I was able to read the device ID
and erase it. Programming seemed to go OK, but verify failed. Reading back
the device and comparing against the hex file showed most bytes were
programmed OK. In fact, I got some similar behaviour using the Platform
Cable USB. So the malfunction may not even be entirely attributed to the
cheap-ass cable III imitation. ;-)

So a cable III (a proper one or a good imitation) seems perfectly OK for
the required job. Thanks for the input.

--
Stef    (remove caps, dashes and .invalid from e-mail address to reply by mail)

You know what they say -- the sweetest word in the English language is
revenge.
		-- Peter Beard

Article: 151091
On Friday, March 4, 2011 7:30:57 AM UTC-5, aleksa wrote:

> When JTAG is used:
> 1. Is INIT_B driven LOW by FPGA?

Yes. Right after power-on, or after PROG_B de-assertion, INIT_B will be
driven low while the FPGA config logic initializes.

> 2. Is INIT_B polled by FPGA to delay config?

Yes. If you externally pull down INIT_B you can delay the start of
"master" config modes.

> The docs are a bit fuzzy about this.

Aren't we all fuzzy at times :)

Article: 151092
Hi Guys,

Hope one of you guys can help me out here. I have to supply a client an IP
core that we have developed, but we don't want to give out the VHDL
source. I have a few questions regarding the delivery format. The core
will run on a Xilinx FPGA.

1) Would the NGC file out of the synthesizer be the most appropriate
delivery format?

2) How safe is an NGC file (in regards to reverse engineering)?

3) Can an NGC file synthesized for one device, say Spartan 3A DSP, be used
in a design that targets another device, say Virtex4?

4) The IP core port has a few GENERICS to configure the core. It looks
like once the core has been synthesized (standalone), the generics are
fixed to the default values and the design that uses the IP core (as an
NGC file) is not able to change the generics. ISE throws the following
warnings:

Reading core <MA_FILTER.ngc>.
WARNING:Xst:1474 - Core <MA_FILTER> was not loaded for <MA_FILTER_1> as
one or more ports did not line up with component declaration. Declared
input port <DATA_IN<17>> was not found in the core. Please make sure that
component declaration ports are consistent with the core ports including
direction and bus-naming conventions.
WARNING:Xst:616 - Invalid property "gAVG_LEN 8": Did not attach to
MA_FILTER_1.
WARNING:Xst:616 - Invalid property "gIN_LEN 18": Did not attach to
MA_FILTER_1.

How can the design that uses the core pass in GENERICS?

5) The core uses a custom package, and the design that uses the core would
also like to use the same package (there are a few functions that the
top-level design would like to use). How do you deliver the package
without giving out the VHDL source?

6) The client would like to be able to simulate the core in their design
using ModelSim. What needs to be provided to allow this? A search on
Google turned up "pre-compiled library", but I couldn't find anything on
how to generate a pre-compiled library for a core. Is the pre-compiled
library the way to go?

Thanks in advance,
Jason.

Article: 151093
http://www.ebv.com/index.php?id=162&ct_ref=u103-c84&tx_ebvproductfe_pi1[uid]=2011

Smallest Cyclone IV GX, DDR2 RAM, embedded USB-Blaster and 24 I/O pins -
170 EUR @ EBV.

Article: 151094
> Hi Guys,
>
> Hope one of you guys can help me out here. I have to supply a client an
> IP core that we have developed, but we don't want to give out the VHDL
> source. I have a few questions regarding the delivery format. The core
> will run on a Xilinx FPGA.
> <snip>
> 6) The client would like to be able to simulate the core in their design
> using ModelSim. What needs to be provided to allow this?

I believe that most of the major simulators now support source file
encryption, so you could possibly encrypt the source file for simulation
and just give it to the client. This doesn't help you with regards to
synthesis, though. I did read somewhere that there is a new standard being
released soon that will encrypt IP and enable it to be synthesised. I
think the best you can hope for at the moment is to create a netlist for
the target device.

Jon

---------------------------------------
Posted through http://www.FPGARelated.com

Article: 151095
Actually XST supports encrypted sources. PLDA delivers their "EZ-DMA" core
as an encrypted source. This allows users to customize the core. I'd be
very surprised if the major third-party synthesis tools didn't also allow
encrypted sources.

-- Gabor

Article: 151096
Gabor <gabor@alacron.com> wrote:

> Actually XST supports encrypted sources. PLDA delivers their "EZ-DMA"
> core as an encrypted source. This allows users to customize the core.
> I'd be very surprised if the major third-party synthesis tools didn't
> also allow encrypted sources.

But if XST can decrypt it, then why can't someone else, if they find the
decryption key?

The question, then, is how valuable is the IP? How hard do you need to
make it to reverse engineer?

There is always the obfuscator, which replaces all the symbolic (and maybe
human-readable) names with random symbols, often made of 0, 1, O, I, and
maybe l. Presumably it also removes any comments that were in there.

-- glen

Article: 151097
> I believe that most of the major simulators now support source file
> encryption, so you could possibly encrypt the source file for simulation
> and just give it to the client. This doesn't help you with regards to
> synthesis, though. I did read somewhere that there is a new standard
> being released soon that will encrypt IP and enable it to be
> synthesised. I think the best you can hope for at the moment is to
> create a netlist for the target device.
>
> Jon

Some IP cores can be easily decrypted. I am not talking about particular
cores, but about a particular vendor: in that case, many cores are
encrypted with the same algorithm, so it is possible to decrypt them too.
If this core is expensive, think about a good way of encrypting it.
However, there was a case where a vendor asked 75k USD for a core and the
engineers made the core themselves in 2 weeks.

Article: 151098
>> I think it should be fine. Use the MIG to generate an interface for
>> your external RAM. Feed the incoming video into the MIG's FIFO and
>> it'll write using burst mode.
>> <snip>
>
> Correct me if I'm wrong, but the last time I looked, MIG only supported
> the standard DDR and DDR2 parts for Spartan 3, not "specialty" parts
> like Cellular PSDRAM. Does the Nexys2 kit come with some other IP to
> talk to the PSDRAM? Or do they just assume that it's a simple enough
> interface to roll your own?
> <snip>
>
> -- Gabor

Sorry, you're right, of course! The Nexys2's built-in self-test code
contains a memory controller (NexysOnBoardMemCtrl.vhd), but it's not clear
at first glance if it's advanced enough to support burst mode. Probably
not. You could also try http://opencores.org/project,opb_psram_controller ,
which looks like it would support the Nexys2. There might also be some
helpful hints in the comments of this video:
http://www.youtube.com/watch?v=nyrllob-Juk

Joel

Article: 151099
> On Wed, 02 Mar 2011 08:37:35 -0600, francesco_pincio wrote:
>
>> Hello!
>> I'm new to the forum and have just done an FPGA university course, a
>> very small one... we have only turned LEDs on/off with finite state
>> machines and so on. Now I'm trying to develop an IIR filter with the
>> XSA50 board from Xess, which has a Spartan IIe 50 FPGA. The filter
>> kernel is just a 2-pole system with a zero at 0; I would like to make a
>> bandpass whose passband can be changed with pushbuttons. My idea for
>> the main structure of the program is a module with a counter that
>> generates the clock for the ADC/DAC, a module that passes the samples
>> to the filter kernel, the IIR filter kernel itself, and a module that
>> passes the filtered samples to the DAC. Mainly I have two problems:
>>
>> 1) I can only do operations with radix-2 coefficients, so I can use
>> only 1/2, 1/4 and so on. I don't understand how to pass a float value
>> and multiply by it.
>
> Who needs floating point? You can do this all with fixed point
> arithmetic. Let your coefficients range from -1 to 1, or 0 to 2, or
> whatever you need, then implement your filter. In an HDL, this is a
> matter of doing your multiplication, and picking the leftmost (or
> nearly leftmost) bits out of the answer, instead of the rightmost.
>
>> 2) Do I need a RAM to store at least the y[n-2] sample?
>
> Are you trying to do this in batch mode, or continuous? If it's
> continuous, you should only need to keep two filter states.
>
> You will find that you need to keep more precision on your filter
> states than you have on your incoming (and probably outgoing) data.
>
> This is not a subject that can be done justice to in a newsgroup
> posting, and everyone and his sister wants to know. Do a web search on
> "IIR Filter for FPGA" and you should find at least one tutorial.
>
> --
> http://www.wescottdesign.com

Excuse the delay, but I was waiting for a mail from the forum telling me
someone had replied to my post... I want to thank you for the help and the
tips you have given me. Now I'll go back over the Verilog concepts and
apply your tips.

---------------------------------------
Posted through http://www.FPGARelated.com
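To make the "pick the leftmost bits" advice concrete, here is a tiny
fixed-point sketch (my own illustration, not code from the thread; the Q
formats and the 0.7 coefficient are arbitrary choices): store the
coefficient as a scaled integer, form the full-width product, then rescale
by dropping fraction bits from the bottom of that product.

module coef_mult (
  input  wire signed [15:0] x,     // Q1.15 sample, range -1.0 .. +1.0
  output wire signed [15:0] y      // Q1.15 result, approximately 0.7 * x
);
  // 0.7 stored in Q2.14: round(0.7 * 2**14) = 11469
  localparam signed [15:0] COEF = 16'sd11469;

  // full-precision product: Q1.15 * Q2.14 = Q3.29
  wire signed [31:0] prod = x * COEF;

  // drop 14 fraction bits to get back to Q1.15 (truncation; add
  // prod[13] first if you want rounding)
  assign y = prod[29:14];
endmodule

The same idea applies to the accumulator: keep y[n-1] and y[n-2] at the
full product width (or wider) and only truncate at the output, which is
the extra state precision mentioned above.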