On Wed, 27 Jul 2011 05:17:32 -0700, radarman wrote:

> On Jul 26, 2:04 am, Antti <antti.luk...@googlemail.com> wrote:
>> Hi
>>
>> It's maybe not so commonly known, but products using Actel secure FPGAs
>> have been cloned already, many years ago (readback done by dark
>> engineers at Actel). A few months ago a paper was published indicating
>> that the ProAsic3 (and other newest Actel FPGAs) have a master key that
>> is known not only inside Actel but also to the dark side outside the
>> company. There is at least one known successful cloning of an Actel
>> ProAsic3 based product (readback assumed to have been done at the Actel
>> fab, not outside).
>>
>> The following post has a link to documents that show that Xilinx
>> V2/V4/V5 are vulnerable as well.
>>
>> http://it.slashdot.org/story/11/07/21/1753217/FPGA-Bitstream-Security...
>>
>> P.S. We do not have more info nor the master keys, please do not ask :)
>>
>> Antti Lukats
>> http://trioflex.blogspot.com/
>
> No one should ever assume the device security offered is 100%
> uncrackable. I used to know a guy who did legit "dark engineering" for
> government devices, and it was amazing to hear stories of drilling out
> holes in "secure devices" and extracting data using microscopic probes.
> Another engineer I knew has a collection of ICs embedded in epoxy - the
> company he worked for would shave them layer by layer to extract the
> design physically. (So no, going to an ASIC won't necessarily be 100%
> secure either.)
>
> If man can make it, man can break it.
>
> The trick is to make it more expensive for the cloners to crack than it
> would be to just license, buy, or reverse engineer another way. Besides,
> a lot of places still send bit streams to China for programming during
> assembly, and at that point, adding bit-stream security is a bit like
> setting the deadbolt on an already open, and empty, barn.

More like shipping all the barn's contents to a pack of known thieves, and
asking them to please put them back in your barn and lock the door behind
them when they leave.

> A better metric for FPGA bitstream security, or any security product, is
> the cost per breach and/or time per breach. Assume it can be breached,
> and pick a method where the [cost/time]/[breach] equation works out in
> your favor. BTW - this also means that devices with a master key are
> very bad - because the time/breach is only paid once, and you can rest
> assured, someone besides the manufacturer has it already.
>
> For an example of this done right, there is an IBM crypto chip that I
> believe is still unbroken - but it has wires around the die that control
> power to the SRAM memory holding the crypto keys. If you drill into the
> package and cut one of the wires, the device loses its memory - and
> becomes a dud. Obviously, you also have to do this work with the chip
> in-system, and running, for the same reason. This is the equivalent of
> the lock on an underground bank vault.
>
> We will know FPGA vendors are equally serious when they offer a part with
> that level of security. Until then, it's pretty much the equivalent of
> the standard locks on our front doors. Good enough to keep the riff-raff
> out, but not enough to keep the serious thieves away.

Another problem with any high-security device, whether it be electronic or
physical, is that the more it protects you from someone else's
maliciousness, the more it'll harm you when you make an honest mistake.

To extend your bank vault analogy, consider what happens if you lock
yourself in, or lose the key/combination/whatever.

--
www.wescottdesign.com

Article: 152251
On Jul 27, 11:53 am, "RCIngham" <robert.ingham@n_o_s_p_a_m.n_o_s_p_a_m.gmail.com> wrote:
> It should be:
> ...
> sync: process (CLK)
> begin
>   if rising_edge(CLK) then
> ...
> end process sync;

Actually it should be

sync: process (all)
begin
...
end process;

At least there was VHDL-2008 support announced for ISE 11.1...

Kolja
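As an aside, here is a minimal sketch of the VHDL-2008 construct Kolja
mentions; the signal names (sel, a, b, y) are made up purely for
illustration, only the process (all) form itself is the point:

-- VHDL-2008: 'all' makes the process sensitive to every signal it reads,
-- so a combinational process can no longer be wrong by omission.
comb: process (all)
begin
  if sel = '1' then
    y <= a;
  else
    y <= b;
  end if;
end process comb;

-- For the clocked process in the original post, process (CLK) is already
-- sufficient; process (all) simply removes the sensitivity-list bookkeeping.

Article: 152252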
On Jul 27, 5:59 am, "RCIngham" <robert.ingham@n_o_s_p_a_m.n_o_s_p_a_m.gmail.com> wrote:
> I should have added that this is also at:
> http://forums.xilinx.com/t5/General-Technical-Discussion/VHDL-horror-...
>
> ---------------------------------------
> Posted through http://www.FPGARelated.com

Cute article. If only Xilinx would take some of their own advice on
resets... Tip #4 - Use active high control signals. (For some reason MIG
defaults to an active low reset.) Where's the tip on running your VHDL
through a syntax check before publishing it in a journal?

Cheers,
Gabor

Article: 152253
Dear all,

I want to store an image from a PC into the BRAM of an FPGA. The image is
192x96 pixels.

1) Which type of interfacing should I use to transfer the image into BRAM
   from the PC?
2) How do I write a program for it? If you have any material on this,
   please tell me.

I wrote a program for it; it is synthesizable, but I am not getting how to
send my pixel values. See my code below.

library ieee;
use ieee.std_logic_1164.all;
use ieee.std_logic_unsigned.all;

entity BRAM is
  port (CLK  : in  std_logic;
        WE   : in  std_logic;
        EN   : in  std_logic;
        ADDR : in  std_logic_vector(14 downto 0);
        DI   : in  std_logic_vector(7 downto 0);
        DO   : out std_logic_vector(7 downto 0));
end BRAM;

architecture syn of BRAM is
  type ram_type is array (18431 downto 0) of std_logic_vector (7 downto 0);
  signal RAM : ram_type;
begin
  process (CLK)
  begin
    if CLK'event and CLK = '1' then
      if EN = '1' then
        if WE = '1' then
          RAM(conv_integer(ADDR)) <= DI;
        end if;
        DO <= RAM(conv_integer(ADDR));
      end if;
    end if;
  end process;
end syn;

Please help me on this.

Best regards,
balu

---------------------------------------
Posted through http://www.FPGARelated.com
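As a starting point, a minimal simulation sketch that exercises the BRAM
entity above is shown here; the PC-side link (UART, USB, etc.) is a
separate design decision and is not shown, and the pixel values and
addresses used are arbitrary:

library ieee;
use ieee.std_logic_1164.all;

entity tb_bram is
end tb_bram;

architecture sim of tb_bram is
  signal CLK  : std_logic := '0';
  signal WE   : std_logic := '0';
  signal EN   : std_logic := '0';
  signal ADDR : std_logic_vector(14 downto 0) := (others => '0');
  signal DI   : std_logic_vector(7 downto 0)  := (others => '0');
  signal DO   : std_logic_vector(7 downto 0);
begin
  -- device under test: the BRAM entity from the post above
  uut: entity work.BRAM
    port map (CLK => CLK, WE => WE, EN => EN, ADDR => ADDR, DI => DI, DO => DO);

  CLK <= not CLK after 5 ns;   -- 100 MHz clock

  stimulus: process
  begin
    EN <= '1';
    WE <= '1';
    ADDR <= (others => '0');         -- write pixel value x"55" to address 0
    DI   <= x"55";
    wait until rising_edge(CLK);
    ADDR <= "000000000000001";       -- write pixel value x"AA" to address 1
    DI   <= x"AA";
    wait until rising_edge(CLK);
    WE <= '0';                       -- read address 0 back
    ADDR <= (others => '0');
    wait until rising_edge(CLK);     -- read happens on this edge
    wait until rising_edge(CLK);     -- DO now holds x"55"
    wait;
  end process;
end sim;

Article: 152254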
Hi all,

Just a followup to the previous post, as it seems I got some kind of
closure:

> So, I decided to study this a bit on a simpler example; ...
>
> http://sdaaubckp.sourceforge.net/post/vhd_counter_aw/

I have now uploaded counter_aw2.vhd, test_twb2.vhd and aw2.ucf to the same
location (no changes in the previous files).

First of all, I got a hint at "Buffer_type BUFG ignored? #8"
http://forums.xilinx.com/t5/Implementation/Buffer-type-BUFG-ignored/m-p/167766#M3502
about synthesis of asynchronous vs synchronous reset counters. So I decided
to take a look at the synchronous version from the start (counter_aw2.vhd,
uncommented part). Again, behavioral sim was as I expected it to be, and
post-map sim showed X's and timing violations.

Now, the thing is that I wrote the testbench more or less randomly, just
tossing in arbitrary signal changes and WAIT delays, just to see 'in
general' how the resulting circuit would behave. So I had ended up forcing
the 'clk' and 'enable' testbench signals to change at the *same* moment in
time. That may be good enough for behavioral sim, but not for post-map -
and to confirm the previous posts, this is the essence of the problem.

So I just inserted a 'WAIT for 1 ns' delay in the testbench
(test_twb2.vhd), and looped the rest of the signals - and there were no
more timing violations in post-map. (Noting that if there is no such
explicit loop, the whole process will loop moving the phase by 1 ns each
iteration, eventually causing the phase between 'enable' and 'clk' to be
zero again, thus causing periodic timing violations.) Then I tried delaying
by PERIOD-1ns (just to have the enable rise just before clk) - and that
worked fine as well.

Now, my guess is that, if I were working with an external clk and enable, I
couldn't just delay the signals by as many nanoseconds as I please, just to
avoid timing violations - I'd have to work according to some spec. However,
in my case, the only external signal is the clock, and enable and reset
would be calculated from it - hence there will be some inherent delay
between clk and enable; and that would further limit the usage of the
counter.

While talking of 'limitations': expanding the data out into bit lines and
zooming in (in isim) will reveal that the glitches are due to different
propagation times of individual bit changes (so they occur only between
particular value transitions) - so, in fact, nothing strange there (as I
first thought) :)

Measuring (in isim) the time between the clk posedge and the change of data
(cout) will reveal about 5.5 ns delay. Just for the heck of it, I tried to
limit that with a timing constraint in the .ucf file:

...
TIMEGRP "couts" OFFSET = OUT 4 ns AFTER COMP "clk";
...

and immediately after, post-map static analysis failed:

-- Timing constraint: TIMEGRP "couts" OFFSET = OUT 4 ns AFTER COMP "clk";
-- 16 paths analyzed, 16 endpoints analyzed, 16 failing endpoints
-- 16 timing errors detected.
-- Minimum allowable offset is 5.878ns.

So, I guess this tells me: if I sample cout 6 ns after the clk posedge, I
should have safe cout data; so for the testbench clock @50 MHz, I could
initiate the count at the clk posedge, and consider the data valid at the
next negedge, 10 ns after. However, even with a separate process:

process(clk)
begin
  if falling_edge(clk) then
    cout <= std_logic_vector(pre_count);
  end if;
end process;

the glitches will still be there in post-map.

And I guess, even if 'buffered' one more time, they will still occur: my
guess is the synthesizer would instantiate FFs for the bit buffer anyway,
so there will still be different-length routes to them -> different delays
-> glitch. So the minimum 6 ns wait would be needed with respect to 'when
does the rest of the engine read this data' (rather than 'when to read into
a buffer' to avoid seeing glitches altogether).

In the end, even if I could somehow mask the glitches and avoid seeing
them, metastability is still inherent in reality
(http://www.asic-world.com/tidbits/metastablity.html); so I guess this is
as good as it gets in post-map sim (given that my testbench is 'arbitrarily
written', and the only constraint I have in aw2.ucf is the clock @100 MHz).

And, of course: getting rid of the X's and timing violations in post-map
sim doesn't mean that post-route sim will be just as well behaved :) But at
least I have some sort of understanding from a simple example to go along
with when tackling that - thank you all for the help!

Cheers!

---------------------------------------
Posted through http://www.FPGARelated.com
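For reference, the 'WAIT for 1 ns' change described above boils down to
something like the following in the stimulus process; signal names follow
the earlier posts, and the exact offset is simply whatever the poster
chose:

-- Before: enable toggles at the same instant as the clock edge, which the
-- back-annotated (post-map) model reports as a setup/hold violation.
-- After: a small offset moves the stimulus away from the edge.
stimulus: process
begin
  wait until rising_edge(clk);
  wait for 1 ns;              -- or PERIOD - 1 ns, as also tried above
  enable <= '1';
  wait until rising_edge(clk);
  wait for 1 ns;
  enable <= '0';
  wait;
end process stimulus;

Article: 152255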
Hey all--

So my experience with the native bitstream compression algorithms provided
by the FPGA vendors has been that they don't actually achieve all that
much compression. This makes sense given that the expansion has to be
accomplished in the FPGA hardware, and so can't be all that aggressive.

I generally find myself programming FPGAs from microcontrollers, and so
I've got clock cycles and RAM to throw at the problem to try to do better.
We've had some limited luck implementing really stupid RLE, just
compressing the runs of 0x00 and 0xFF and calling it a day. But I was
wondering whether anyone's found a better, more universal approach.

The ideal would be a compression/expansion library with fairly good
compression ratios, where the expansion side could operate without malloc
and on-the-fly on a stream of data rather than requiring large buffers.
The compression side could use phenomenal amounts of compute power and
RAM, since it would be happening on the PC. But the goal would be to be
able to decompress on something like an LPC1754 (32kB RAM total).

Anyone have any thoughts?

--
Rob Gaddi, Highland Technology
Email address is currently out of order

Article: 152256
Rob Gaddi wrote:

> Hey all--
>
> So my experience with the native bitstream compression algorithms
> provided by the FPGA vendors has been that they don't actually achieve
> all that much compression.

> The ideal would be a compression/expansion library with fairly good
> compression ratios, where the expansion side could operate without
> malloc and on-the-fly on a stream of data rather than requiring large
> buffers.

It won't be generally possible to beat the vendor-provided basic
compression by more than a factor of ~1.5 or so. A gain of 1.5 times
wouldn't really improve anything.

> The compression side could use phenomenal amounts of compute
> power and RAM, since it would be happening on the PC. But the goal
> would be to be able to decompress on something like an LPC1754 (32kB RAM
> total).
>
> Anyone have any thoughts?

Just try 7zip on the FPGA binaries and see for yourself.

Vladimir Vassilevsky
DSP and Mixed Signal Design Consultant
http://www.abvolt.com

Article: 152257
On 7/28/2011 5:15 PM, Vladimir Vassilevsky wrote:
> Rob Gaddi wrote:
>
>> Hey all--
>>
>> So my experience with the native bitstream compression algorithms
>> provided by the FPGA vendors has been that they don't actually achieve
>> all that much compression.
>
>> The ideal would be a compression/expansion library with fairly good
>> compression ratios, where the expansion side could operate without
>> malloc and on-the-fly on a stream of data rather than requiring large
>> buffers.
>
> It won't be generally possible to beat the vendor-provided basic
> compression by more than a factor of ~1.5 or so. A gain of 1.5 times
> wouldn't really improve anything.
>
>> The compression side could use phenomenal amounts of compute power and
>> RAM, since it would be happening on the PC. But the goal would be to be
>> able to decompress on something like an LPC1754 (32kB RAM total).
>
>> Anyone have any thoughts?
>
> Just try 7zip on the FPGA binaries and see for yourself.
>
> Vladimir Vassilevsky
> DSP and Mixed Signal Design Consultant
> http://www.abvolt.com

Just did, on an FPGA with an admittedly small fill ratio. The uncompressed
bitstream for an XC3S200 is 1,047,616 bits. Using Xilinx bitstream
compression gets it down to 814,336 bits, or about 100kB. 7-Zip knocked it
down to 16kB.

Another design uses a decent amount of an XC6S45. Native size is
11,875,104 bits (~1.5MB). Bitstream compression gives me 1.35MB. 7-Zip
gives me 395kB.

I've got a project coming up around an Altera Arria II, with 30Mb of
configuration. If I could get a 3:1, 4:1 compression ratio it would be a
pretty radical savings on flash size.

--
Rob Gaddi, Highland Technology
Email address is currently out of order

Article: 152258
I can't really see why you are trying to do this. If it's just for fun
then great, but flash isn't that expensive and you are only talking about
30Mb not 30Gb.

Jon

---------------------------------------
Posted through http://www.FPGARelated.com

Article: 152259
On 29/07/2011 09:20, maxascent wrote:
> I can't really see why you are trying to do this. If it's just for fun
> then great, but flash isn't that expensive and you are only talking
> about 30Mb not 30Gb.
>
> Jon
>
> ---------------------------------------
> Posted through http://www.FPGARelated.com

Many embedded processors have a limited amount of Flash memory, so it
would be an advantage to efficiently compress a bitstream to save a
component and IO.

--
Mike Perkins
Video Solutions Ltd
www.videosolutions.ltd.uk

Article: 152260
On Thu, 28 Jul 2011 17:40:25 -0700, Rob Gaddi wrote: > On 7/28/2011 5:15 PM, Vladimir Vassilevsky wrote: >> >> >> Rob Gaddi wrote: >> >>> Hey all-- >>> >>> So my experience with the native bitstream compression algorithms >>> provided by the FPGA vendors has been that they don't actually achieve >>> all that much compression. >> >>> The ideal would be a compression/expansion library with fairly good >>> compression ratios, where the expansion side could operate without >>> malloc and on-the-fly on a stream of data rather than requiring large >>> buffers. >> >> It won't be generally possible to beat vendor provided basic >> compression more then by a factor of ~1.5 or so. The gain of 1.5 times >> wouldn't really improve anything. >> >>> The compression side could use phenominal amounts of compute power and >>> RAM, since it would be happening on the PC. But the goal would be able >>> to decompress on something like an LPC1754 (32kB RAM total). >>> >>> Anyone have any thoughts? >> >> Just try 7zip on the FPGA binaries and see for yourself. >> >> >> Vladimir Vassilevsky DSP and Mixed Signal Design Consultant >> http://www.abvolt.com > > Just did, on an FPGA with an admittedly small fill ratio. The > uncompressed bitstream for an XC3S200 is 1,047,616 bits. Using Xilinx > bitstream compression gets it down to 814,336 bits, or about 100kB. > 7-Zip knocked it down to 16kB. > > Another design uses a decent about of an XC6S45. Native size is > 11,875,104 bits (~1.5MB). Bitstream compresson gives me 1.35MB. 7-Zip > gives me 395kB. > > I've got a project coming up around an Altera Arria II, with 30Mb of > configuration. If I could get a 3:1, 4:1 compression ratio it would be > a pretty radical savings on flash size. A design done for a client, using Virtex 6, with gzip -9 for compression. Figures are from memory and are approximate. Uncompressed: 7.5MBytes compressed: a few 10s of kB (empty FPGA) compressed: 500kBytes (mostly empty FPGA) compressed: 2MBytes (mostly full FPGA) compressed: 7.5MBytes (with bitstream encryption) Note: there's no point using compression with encryption, as the encrypted images don't compress. I used gzip as the decompresser is open source and fairly "light". The Xilinx built-in compression doesn't do a good job because it merely joins identical frames. The chance of getting identical frames is low for any design that uses a reasonable amount of the fabric. (If you're not using most of the fabric, you could be using a smaller, cheaper device.) OTOH, if you are using bitstream encryption, the built-in compression is the only compression you can use and it will be better than nothing. Regards, AllanArticle: 152261
From my memory, Altera has a much better "native" compression than Xilinx,
with typical compression ratios of about 50%.

Regards,

Thomas
www.entner-electronics.com

Article: 152262
> On 29/07/2011 09:20, maxascent wrote:
>> I can't really see why you are trying to do this. If it's just for fun
>> then great, but flash isn't that expensive and you are only talking
>> about 30Mb not 30Gb.
>>
>> Jon
>>
>> ---------------------------------------
>> Posted through http://www.FPGARelated.com
>
> Many embedded processors have a limited amount of Flash memory, so it
> would be an advantage to efficiently compress a bitstream to save a
> component and IO.
>
> --
> Mike Perkins
> Video Solutions Ltd
> www.videosolutions.ltd.uk

Well, you can get serial flash devices that use a small amount of IO and
have large capacities. I don't think you would use the internal flash of a
processor to store a bitstream unless it was very small. I just can't see
the point of going to all the trouble to do this unless you could shrink
the bitstream to almost nothing.

Jon

---------------------------------------
Posted through http://www.FPGARelated.com

Article: 152263
On Jul 28, 8:40 pm, Rob Gaddi <rga...@technologyhighland.com> wrote:
> On 7/28/2011 5:15 PM, Vladimir Vassilevsky wrote:
>> Rob Gaddi wrote:
>>
>>> Hey all--
>>>
>>> So my experience with the native bitstream compression algorithms
>>> provided by the FPGA vendors has been that they don't actually achieve
>>> all that much compression.
>>
>>> The ideal would be a compression/expansion library with fairly good
>>> compression ratios, where the expansion side could operate without
>>> malloc and on-the-fly on a stream of data rather than requiring large
>>> buffers.
>>
>> It won't be generally possible to beat the vendor-provided basic
>> compression by more than a factor of ~1.5 or so. A gain of 1.5 times
>> wouldn't really improve anything.
>>
>>> The compression side could use phenomenal amounts of compute power and
>>> RAM, since it would be happening on the PC. But the goal would be to be
>>> able to decompress on something like an LPC1754 (32kB RAM total).
>>
>>> Anyone have any thoughts?
>>
>> Just try 7zip on the FPGA binaries and see for yourself.
>>
>> Vladimir Vassilevsky
>> DSP and Mixed Signal Design Consultant
>> http://www.abvolt.com
>
> Just did, on an FPGA with an admittedly small fill ratio. The
> uncompressed bitstream for an XC3S200 is 1,047,616 bits. Using Xilinx
> bitstream compression gets it down to 814,336 bits, or about 100kB.
> 7-Zip knocked it down to 16kB.
>
> Another design uses a decent amount of an XC6S45. Native size is
> 11,875,104 bits (~1.5MB). Bitstream compression gives me 1.35MB. 7-Zip
> gives me 395kB.
>
> I've got a project coming up around an Altera Arria II, with 30Mb of
> configuration. If I could get a 3:1, 4:1 compression ratio it would be
> a pretty radical savings on flash size.
>
> --
> Rob Gaddi, Highland Technology
> Email address is currently out of order

The algorithm that 7-Zip uses internally for .7z file compression is LZMA:

http://en.wikipedia.org/wiki/Lzma

One characteristic of LZMA is that its decoder is much simpler than the
encoder, making it well-suited for your application. You would be most
interested in the open-source LZMA SDK:

http://www.7-zip.org/sdk.html

They provide an ANSI C implementation of a decoder that you can port to
your platform. Also, there is a reference application for an encoder (I
believe also written in C). I have used this in the past to compress
firmware for embedded systems with good success. I use the encoder as a
post-build step to compress the firmware image into an LZMA stream (note
that the compression is not done with 7-Zip, as the .7z format is a
full-blown archive; the reference encoder just gives you a stream of
compressed data, which is what you want). The resulting file is then
decompressed on the embedded target at firmware update time. The decoder
source code is most amenable to porting to 32-bit architectures; I have
implemented it on the LPC2138 ARM7 device (with the same 32 KB of RAM as
your part) as well as a few AVR32UC3 parts.

A couple other things: I originally did this ~4 years ago with a much
older version of the SDK; it's possible that things have changed since
then, but it should still be worth a look. LZMA provides good compression
ratios with a decoder that in my experience runs well on embedded
platforms. Secondly, you do have to be careful with the parameters you use
at the encoder if you want to bound the amount of memory required at the
decoder. More specifically, you need to be careful which "dictionary size"
you use for the encoder. As you might expect, a larger dictionary gives
you better compression ratios, but the target running the decoder will
require at least that much memory (e.g. an 8 KB dictionary size will
require at least 8 KB of memory for the decoder).

Jason

Article: 152264
Rob Gaddi <rgaddi@technologyhighland.com> wrote:

> Hey all--
>
> So my experience with the native bitstream compression algorithms
> provided by the FPGA vendors has been that they don't actually achieve
> all that much compression. This makes sense given that the expansion
> has to be accomplished in the FPGA hardware, and so can't be all that
> aggressive.
>
> I generally find myself programming FPGAs from microcontrollers, and so
> I've got clock cycles and RAM to throw at the problem to try to do
> better. We've had some limited luck implementing really stupid RLE,
> just compressing the runs of 0x00 and 0xFF and calling it a day. But I
> was wondering whether anyone's found a better, more universal approach.

The problem is that Huffman-based algorithms don't gain that much. I
recall someone had a lot of success using an RLE-like scheme on nibbles
(4 bit) instead of whole bytes. Look here:

http://www.sulimma.de/prak/ws0001/projekte/ralph/Projekt/index.htm

--
Failure does not prove something is impossible, failure simply indicates
you are not using the right tools...
nico@nctdevpuntnl (punt=.)
--------------------------------------------------------------

Article: 152265
Let's say I have two DVI streams:
- generated by two encoders, which have different video content but the
  same pixel clock

The two TMDS streams travel through cables, then:
- are decoded by two decoders
- then fed into an FPGA

The question is what the two clocks at the output of the encoders look
like - are they the same? Can we use only one clock for both channels to
clock the data into the FPGA?

========
My theory is that the original clock goes through the two 10x stages and
is then divided back by 1/10, so the two clocks should supposedly be in
the same phase... or what else?

Article: 152266
Rob Gaddi wrote: > On 7/28/2011 5:15 PM, Vladimir Vassilevsky wrote: >> Rob Gaddi wrote: >> >>> So my experience with the native bitstream compression algorithms >>> provided by the FPGA vendors has been that they don't actually achieve >>> all that much compression. >> >> It won't be generally possible to beat vendor provided basic compression >> more then by a factor of ~1.5 or so. The gain of 1.5 times wouldn't >> really improve anything. >> > Native size is > 11,875,104 bits (~1.5MB). Bitstream compresson gives me 1.35MB. 7-Zip > gives me 395kB. Interesting. I tried to compress JBCs from heavily used Altera Cyclone, got only about x1.5 of compression. As for compression algorithm, something like a bit oriented LZSS or LZW would be easy to implement. The decompression part is trivial. If you don't care about speed, the compression part is very simple, too. I don't know if the further sophistication of the algorithm would do much of a difference. Vladimir Vassilevsky DSP and Mixed Signal Design Consultant http://www.abvolt.comArticle: 152267
[ NB: I've added comp.compression to the mix ] Jason wrote: > Rob Gaddi wrote: > >> Just did, on an FPGA with an admittedly small fill ratio. The >> uncompressed bitstream for an XC3S200 is 1,047,616 bits. Using Xilinx >> bitstream compression gets it down to 814,336 bits, or about 100kB. >> 7-Zip knocked it down to 16kB. >> >> Another design uses a decent about of an XC6S45. Native size is >> 11,875,104 bits (~1.5MB). Bitstream compresson gives me 1.35MB. 7-Zip >> gives me 395kB. >> >> I've got a project coming up around an Altera Arria II, with 30Mb of >> configuration. If I could get a 3:1, 4:1 compression ratio it would be >> a pretty radical savings on flash size. > > The algorithm that 7-Zip uses internally for .7z file compression is > LZMA: > > http://en.wikipedia.org/wiki/Lzma > > One characteristic of LZMA is that its decoder is much simpler than > the encoder, making it well-suited for your application. You would be > most interested in the open-source LZMA SDK: > > http://www.7-zip.org/sdk.html > > They provide an ANSI C implementation of a decoder that you can port > to your platform. Also, there is a reference application for an > encoder (I believe also written in C). I have used this in the past to > compress firmware for embedded systems with good success. I use the > encoder as a post-build step to compress the firmware image into an > LZMA stream (note that the compression is not done with 7-Zip, as the . > 7z format is a full-blown archive; the reference encoder just gives > you a stream of compressed data, which is what you want). The > resulting file is then decompressed on the embedded target at firmware > update time. The decoder source code is most amenable to porting to 32- > bit architectures; I have implemented it on the LPC2138 ARM7 device > (with the same 32 KB of RAM as your part) as well as a few AVR32UC3 > parts. > > A couple other things: I originally did this ~4 years ago with a much > older version of the SDK; it's possible that things have changed since > then, but it should still be worth a look. LZMA provides good > compression ratios with a decoder that in my experience runs well on > embedded platforms. Secondly, you do have to be careful with the > parameters you use at the encoder if you want to bound the amount of > memory required at the decoder. More specifically, you need to be > careful which "dictionary size" you use for the encoder. As you might > expect, a larger dictionary gives you better compression ratios, but > the target running the decoder will require at least that much memory > (e.g. an 8 KB dictionary size will require at least 8 KB of memory for > the decoder). Lempel-Ziv-Oberhumer (LZO) might also be worth investigating. http://en.wikipedia.org/wiki/Lempel–Ziv–Oberhumer The LZO library implements a number of algorithms with the following characteristics: * Compression is comparable in speed to deflate compression. * Very fast decompression * Requires an additional buffer during compression (of size 8 kB or 64 kB, depending on compression level). * Requires no additional memory for decompression other than the source and destination buffers. * Allows the user to adjust the balance between compression quality and compression speed, without affecting the speed of decompression. Regards.Article: 152268
Can someone tell me if there are any differences in the die for the
following 2 devices in Virtex-7:

XC7VX415TFFG1158 and XC7VX415TFFG1927

Both these devices are listed as having the same logic resources.
XC7VX415TFFG1158 has a 35 x 35 mm package
XC7VX415TFFG1927 has a 45 x 45 mm package

Can I assume that the only difference is the package and the underlying
die is going to be the same?

TIA

Article: 152269
Sharan wrote: > can someone tell me if there is any differences in the die for the > following 2 devices in virtex-7 > > XC7VX415TFFG1158 and XC7VX415TFFG1927 > > Both these devices are listed as having same logic resources. > XC7VX415TFFG1158 has 35 X 35 mm package > XC7VX415TFFG1927 has 45 X 45 mm package > > Can I assume that the only difference is the package and the > underlying die is going to be the same? > > TIA I'm not in the V7 camp yet, but to date that is the way Xilinx handles chip products. For each family and gate size there is one die, which has different bond-outs depending on the package. If it's a small die in a large package you end up with non-connected package pins. A large part in a small package ends up with unbonded IOB's. But open the parts in the FPGA editor and you see the exact same diagram with just different labels on the IOB's. In fact on the smaller packages you can use the unbonded IOB's as extra resources if you run out of fabric flops for example. -- GaborArticle: 152270
Not all applications necessarily configure FPGAs from flash. Many apps do
it over the network, including initial configuration and subsequent
partial reconfigurations. So it's beneficial to have a good bitstream
compression scheme to reduce the amount of traffic.

Thanks,
Evgeni
http://outputlogic.com

Article: 152271
Hello friends,

I can't understand how the pipeline stages affect the multiplier core. I
tested 1 stage and 4 stages. The result is given after 1 clock for 1
stage, and after 4 clocks for 4 stages. When I decrease the clock
frequency, there is no difference. But as I increase the clock frequency
for 4 stages, the result is given after 5 clocks! Why? What are the
benefits of 4 stages in comparison to 1 stage?

---------------------------------------
Posted through http://www.FPGARelated.com

Article: 152272
>Sharan wrote: >> can someone tell me if there is any differences in the die for the >> following 2 devices in virtex-7 >> >> XC7VX415TFFG1158 and XC7VX415TFFG1927 >> >> Both these devices are listed as having same logic resources. >> XC7VX415TFFG1158 has 35 X 35 mm package >> XC7VX415TFFG1927 has 45 X 45 mm package >> >> Can I assume that the only difference is the package and the >> underlying die is going to be the same? >> >> TIA > >I'm not in the V7 camp yet, but to date that is the way >Xilinx handles chip products. For each family and gate >size there is one die, which has different bond-outs >depending on the package. If it's a small die in a >large package you end up with non-connected package pins. >A large part in a small package ends up with unbonded >IOB's. But open the parts in the FPGA editor and you >see the exact same diagram with just different labels >on the IOB's. In fact on the smaller packages you >can use the unbonded IOB's as extra resources if you >run out of fabric flops for example. > >-- Gabor > Thanks, Gabor. Also, can you tell why certain devices are in a specific package while another device with more supported pins is in a smaller package. For example (example only, nothing specific to Virtex-7) v7vx1140t - min package is 45 x 45 mm package & supports 480 pins v7vx585T - min package is 35 x 35 mm package & supports 600 pins I am not sure why a 480 pin needs 45X45 mm package while 600 pins is put in a 35X35 mm package --------------------------------------- Posted through http://www.FPGARelated.comArticle: 152273
Generally, more pipeline stages means a higher maximum operating clock
frequency. This is because of the reduced delay due to logic + routing
between banks of registers.

---------------------------------------
Posted through http://www.FPGARelated.com
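To make the trade-off concrete, here is a hand-written sketch of a 2-stage
pipelined multiplier; a vendor multiplier core configured for more stages
does the equivalent internally, and the port widths and names used here
are only illustrative:

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity mult_pipe2 is
  port (clk : in  std_logic;
        a   : in  unsigned(17 downto 0);
        b   : in  unsigned(17 downto 0);
        p   : out unsigned(35 downto 0));
end mult_pipe2;

architecture rtl of mult_pipe2 is
  signal a_r, b_r : unsigned(17 downto 0);
  signal p_r      : unsigned(35 downto 0);
begin
  process (clk)
  begin
    if rising_edge(clk) then
      a_r <= a;          -- stage 1: register the operands
      b_r <= b;
      p_r <= a_r * b_r;  -- stage 2: register the product
    end if;
  end process;
  p <= p_r;
end rtl;

-- Each extra register stage adds one clock of latency, but it shortens the
-- logic-plus-routing path the tools must fit into one clock period, so the
-- achievable clock frequency goes up. At a low clock rate the extra stages
-- buy nothing, which matches the observation in the question above.

Article: 152274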
>>Sharan wrote: >>> can someone tell me if there is any differences in the die for the >>> following 2 devices in virtex-7 >>> >>> XC7VX415TFFG1158 and XC7VX415TFFG1927 >>> >>> Both these devices are listed as having same logic resources. >>> XC7VX415TFFG1158 has 35 X 35 mm package >>> XC7VX415TFFG1927 has 45 X 45 mm package >>> >>> Can I assume that the only difference is the package and the >>> underlying die is going to be the same? >>> >>> TIA >> >>I'm not in the V7 camp yet, but to date that is the way >>Xilinx handles chip products. For each family and gate >>size there is one die, which has different bond-outs >>depending on the package. If it's a small die in a >>large package you end up with non-connected package pins. >>A large part in a small package ends up with unbonded >>IOB's. But open the parts in the FPGA editor and you >>see the exact same diagram with just different labels >>on the IOB's. In fact on the smaller packages you >>can use the unbonded IOB's as extra resources if you >>run out of fabric flops for example. >> >>-- Gabor >> > >Thanks, Gabor. > >Also, can you tell why certain devices are in a specific package while >another device with more supported pins is in a smaller package. > >For example (example only, nothing specific to Virtex-7) >v7vx1140t - min package is 45 x 45 mm package & supports 480 pins >v7vx585T - min package is 35 x 35 mm package & supports 600 pins > >I am not sure why a 480 pin needs 45X45 mm package while 600 pins is put in >a 35X35 mm package Look at the inter-ball dimensions on the different packages. --------------------------------------- Posted through http://www.FPGARelated.com