Site Home Archive Home FAQ Home How to search the Archive How to Navigate the Archive
Compare FPGA features and resources
Threads starting:
Authors:A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
Hi all, My simple question: are there any tools to auto-generate a VHDL instantiation template from a netlist file? I found an inefficient way using ECS :( 1) generate an ECS symbol from the EDIF (not good; it takes time to fix the bus format on the symbol pins) 2) generate the template from the symbol. Many thanks,Article: 93476
Hey guys, does anyone know where I can get VHDL/Verilog source for the Z8001/Z8002 processor? Thanks for any info! -Adam ajcrm125@gmail.comArticle: 93477
Ray, I'm not the best person to talk further on this topic, so I'll direct you to: http://www.xilinx.com/publications/xcellonline/xcell_55/xc_pdf/xcell55_all.pdf and read Steve Lass' article, specifically through page 8. For those who don't want to download the whole issue of Xcell and don't want to deal with pdf: http://www.xilinx.com/publications/xcellonline/xcell_55/xc_timing55.htm You will miss the lovely artwork, but the content is the same. He details the new Xplorer tool which (I think) finds the optimal settings we used to find by hand, and then creates the proper constraints and command file so that subsequent compiles do not have to start from scratch. See http://www.xilinx.com/publications/xcellonline/xcell_55/xc_xplorer55.htm for the details, courtesy of Hitesh Patel. He mentions PlanAhead as well. Thanks to Steve & Hitesh for the timely appearance of this! And thanks to you Ray for the opportunity to post these useful links ... AustinArticle: 93478
I am not well versed with the driver stuff. I have a PCI bridge between the PCI bus and the FPGA. I configure the bridge for DMA transfers. I do not have a driver associated with the PCI card. 1. Now, what destination address should I give if I want to transfer bursts of data to the host, and how do I read this data on the host? Should I write a driver for capturing this transfer? NiteshArticle: 93479
Ray Andraka wrote: > Jim Granville wrote: > > >> >> Yikes! >> One wonders how _CAN_ SW make a carefully floorplanned design go >> backwards ? By how much ? >> > > Is that the lazy routing, being so bad, it actually finds a longer > > path than earlier SW ? > > Enough to make it so a design that passed timing with the earlier tools > will not pass timing no matter what you do with the newer tools short of > hand routing it. About a 10% average loss in performance in each major > revision. There was a huge hit going to 5.2. 7.1 seems to have a much > smaller degradation from 6.3. > > Yes, the routing got lazy so that it actually finds a longer path than > it did with earlier software. Quite often, it will not find the direct > connection to a neighboring cell, and instead routes it all over the > place, which adds delay, increases power consumption, and congests the > routing resources so that other nets also get a circuitous route so that > the overall timing is even further degraded. Is the direct connection lost because something else uses that path, or could something as trivial as a 'length optimise' pass fix this ? It does not sound like floor-planned nets are being given first bite at the resource, either.... This does not sound like rocket science to fix, and if I were SW quality manager, I would place this issue at the top of the "very rigorous quality of results metrics" Xilinx supposedly use.... Seems the Xilinx mindset thinks if things 'on average' are better, those cases where it goes backwards don't really matter... >> I did wonder how Altera suddenly found power savings in SOFTWARE - >> perhaps they now do exactly this, clean up messy, but timing legal, >> routes ? Anyone in Altera comment ? > > > From what I understand, Altera is moving toward more delay-based > clean-up. 
Xilinx has moved away from it, and is instead pursuing > capacitance-based clean-up to reduce the power...which not only may miss > the mark, but also requires toggle rate information for each net. Once the tools have met timing, wouldn't a simple length reduction (which the place tools DO know) be a fast and efficient way to clean up the lazy nets ? Length should correlate pretty well with delay and capacitance... Users would tolerate this power task taking longer, even a weekend-run 'shaker algorithm' - when the code is only nearly working, power is less of a concern :) -jgArticle: 93480
Austin Lesea wrote: > Jim, > > Some comments, > > Austin > > -snip- > >> Austin, perhaps if you used engineering measurements for SW results, >> rather than the words like "wizards" and "magic", then the SW might >> have a chance to really improve with each release ? > > > The software group has a very rigorous quality of results metrics > (measurement) system for evaluating their work. I get to use the > superlatives, they do not. Maybe they could ask Ray for some examples, so they can _find_ the cases where the tools go backwards - instead of burying that in the nonsense of statistical averages ? Heck, they might even find that ALL designs benefit from the code cleanup ?! -jgArticle: 93481
news@rtrussell.co.uk wrote: [ ... ] > Given that a lossless system is inevitably 'variable bit rate' > (VBR) the concept of "real time capability" is somewhat vague; > the latency is bound to be variable. The output bit rate will vary, but can be bounded -- for an obvious example, consider using LZW compression with 8-bit inputs and 12-bit codes as the output. In the worst case, each 8-bit input produces a 12-bit output, and you have to clear and rebuild the dictionary every 4096 characters. Real-time constraints are a more or less separate issue though -- here, the output data stream isn't (usually) nearly as difficult to deal with as things like dictionary searching in the compression process. Like the compression rate, this will vary, but (again) it's usually pretty easy to set an upper bound. Using the same example, in LZW you build up a string out of characters, with one dictionary entry for each "next" character following a current character. There can be no more than 256 next characters (worst case), so the worst case requirement is to search all 256 entries for follow-on characters in N microseconds (or nanoseconds, or whatever). You just about need more details to guarantee this though -- a trie-based dictionary has different characteristics than a hash-based dictionary (for an obvious example). In nearly every case it's still pretty easy to place an upper bound on the complexity and time involved though. OTOH, you can run into a little bit of a problem with some of the basics -- if you happen (for example) to be storing your dictionary in SRAM, the time per access is pretty easy to estimate. If you're storing it in something like SDRAM, the worst case can be a bit harder to figure out. -- Later, Jerry.Article: 93482
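Jerry's worst-case argument is easy to check in software before sizing any hardware. Below is a minimal Python sketch (my own illustration, not from the post) of a fixed-width LZW encoder with 8-bit input symbols and 12-bit output codes, clearing the dictionary when it fills. Since every emitted code consumes at least one input byte, the output can never exceed 12 bits per 8-bit input byte, i.e. a 1.5x worst-case expansion bound.

```python
def lzw_encode(data: bytes, code_bits: int = 12) -> list:
    """Fixed-width LZW: 8-bit symbols in, `code_bits`-bit codes out.
    The dictionary is cleared once it reaches 2**code_bits entries,
    so every emitted code always fits in `code_bits` bits."""
    max_codes = 1 << code_bits

    def fresh():
        # Dictionary pre-seeded with all 256 single-byte strings.
        return {bytes([i]): i for i in range(256)}

    table = fresh()
    out = []
    w = b""
    for byte in data:
        c = bytes([byte])
        if w + c in table:
            w += c                      # extend the current phrase
        else:
            out.append(table[w])        # emit code for longest match
            if len(table) >= max_codes:
                table = fresh()         # worst case: rebuild every 4096 codes
            table[w + c] = len(table)   # register the new phrase
            w = c
    if w:
        out.append(table[w])            # flush the final phrase
    return out
```

Each emitted code corresponds to at least one consumed input byte, so `len(codes) <= len(data)` always holds; that counting argument is exactly what bounds the output bit rate (and hence FIFO sizing) in a hardware implementation.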
peter.halford@alarmip.com wrote: > Dear Group, > > I am designing an LCD controller, straight VGA using a 6.5" TFT (60Hz > and a 25MHz dot clock) with a 32MBit SDRAM frame buffer (using two > Xilinx block RAMs as alternate line buffers). Is the SDRAM connected directly to the SP3 ? Could you provide a little more detail on how things are connected, dataflow, ... > Come on now... Just how difficult can it be? Or so I thought. > > Well, everything works just fine for about 200ms (correct syncs etc...) > and then things go dead for 800ms -- no syncs, nothing. And then things > spring back to life for another 200ms ad nauseam... What in your design can make the sync not happen ? For example, what if pixels are not fetched in time ? Would that stop the sync ? E.g., I know that in the last VGA controller I made, nothing except reset or an unlocked DCM could make the sync go away ... But that's dependent on how you do things; you should know your design well enough to know what conditions can make it stop producing sync. SylvainArticle: 93483
Reza Naima wrote: > John, > > I'm just trying to use the usenet as a resource to learn. In reading > other sources, it seems that I could output a 'z' in order to achieve a > tri-state -- however, as I've found out, there is a lot of strange > behaviour in actually going from verilog syntax to application and > implementation. I wanted to see if there were any gotcha's in this > approach. I also wanted to know if it was possible for a physical > input to read 'z' if I configure the microcontroller's pin as an input > (putting it in tri-state), or if I would have to add another pin (say, > enable_Z) to specify when I want the output to be equal to 'z'. I'm > also not sure about the implementation differences - say, between > xilinx and altera, and if one supports a certain mode of opperation > that the other does not. Turns out that both Altera's and Xilinx' proprietary synthesis tools, as well as Synplify and Mentor Precision, use the same constructs to infer tristate buffers. And it turns out that all four tools have documentation that clearly explains how to infer a tristate! R T F M. Really. > Another aspect of my questions is trying to understand the strange > behaviour. I still don't see why the synthesizer has a problem > equating these two constructs : > > If A do a else do b > If !A do b else do a > > they seem identical to me, and I'll have to do some more research and > re-read some of the replies to this thread to try to figure it out. In the specific case of the reset circuit, you have to realize something. Perhaps the circuit has an active low reset. In that case, the condition if (reset_n) q <= q + 1; else q <= 0; has a different meaning than if (~reset_n) q <= 0; else q <= q + 1; Even though at first glance they appear identical. You must realize that the async reset _has priority_ over the rest of the logic, so you must write your code to reflect that. The synthesis tool helpfully complains if you do it wrong. -aArticle: 93484
mottoblatto@yahoo.com wrote: > "If A do a else do b > If !A do b else do a" > > In regards to the above. These statements make "logical" sense. But > it is BAD design practice when writing RTL code. It JUST is. I have > never heard or read any differently. Put your "reset" condition first, > then follow that with your other conditions. I don't pretend to be an > expert, so if anyone has a different opinion, please let me know. > > I don't know why the synthesizer has a problem with it, PRIORITY!!!! The async reset has priority over the synchronous (clocked) logic. That's why its condition is handled FIRST. -aArticle: 93485
Reza Naima wrote: > - The async verilog code will only be run once every few days, and will > be controlled from a microcontroller that I can tweak to guarantee > sufficient time between each state change. I do have a clock available > that I could feed into it to make it a synchronous design - but I'll go > for the async first to see if I can get it to work. How often the code runs is irrelevant. The question is how long between changes must you wait for all affected logic to settle. > p.s. I've added some extra code (see below) that is synthesized just > fine on one of the xilinx CPLDs, but gives errors if I configure it for > another CPLD types. If you're referring to the code below, I say: Impossible. > It says it can't find a matching template for the > other cpld architectures. I also found that I had to do the posedge > and negedge explicitly. It's perfectly legal to use both sides of the clock for different flip-flops. However, there's no CPLD/FPGA flip-flop that is sensitive to both edges of the clock. > I thought that if I left that out, any state > change for the signal would initiate the block. No -- the only things that "initiate" the block are the signals on the sensitivity list. Which is why a sensitivity list exists. > /* cs toggler - cs goes high for one clock cycle every 17 clk_in > cycles */ > always @(posedge clk_in or negedge clk_in) begin > sck = !sck; > spi_counter <= spi_counter + 1; > if (spi_counter == 16) > cs <= 1; > if (spi_counter == 17) begin > cs <= 0; > spi_counter <= 0; > end > end -aArticle: 93486
dp wrote: > The purpose of high level languages (for logic generation or > writing software) is to allow cheaper programming, I thought it was to allow skilled engineers to accomplish more, to allow re-use and to ease verification. I stand corrected. > the loss > factor I have witnessed has varied between 10 and >1000. You might wish to provide some supporting details. > If one has the resources to do things at a lower level, > this is always the better choice. It does not take longer, > it does not cost more text (except for very low complexity > works where this is a non-issue anyway), it "only" takes > more skills. You are shitting me. Do you think you can implement, say, an ethernet stack in assembly code faster than someone can do the same job in C? By "implement," I don't mean "write a bunch of code." Rather, I mean, "debug and verify." > No translating tool can replace direct access > to the programmed hardware. True, assuming more time is available to do the job. -aArticle: 93487
Ray Andraka wrote: > From what I have seen, folks who use hierarchy generally do a decent > job of it. You really have to work hard at making a hierarchical design > worse than a flat design. Hierarchy puts organization in the design, and > because crossing levels of hierarchy is a little bit painful, it forces > the designer to think in terms of components and to group related stuff > together. Even in a poor example of hierarchy, there is at least a > little bit of grouping done, and therefore information the tools can > use. I and others have been asking for hierarchical tools from Xilinx > for close to 15 years. I honestly don't think Xilinx understands why > using hierarchy is a good thing. C'mon, they don't understand why a hardware engineer would want to use revision control or automated building (Makefiles) for designs. If they did, the tools wouldn't spit files all over the place, and there wouldn't be lossage like one tool requiring the part type given as xc2s100e-ft256-6 and another needing the same info as xc2s100e-6-ft256. Arrrrrgh ... -aArticle: 93488
Melanie Nasic wrote: > Hello community, > > I am thinking about implementing a real-time compression scheme on an FPGA > working at about 500 MHz. Facing the fact that there is no "universal > compression" algorithm that can compress data regardless of its structure > and statistics I assume compressing grayscale image data. The image data is > delivered line-wise, meaning that one horizontal line is processed, then > the next one, a.s.o. > Because of the high data rate I cannot spend much time on DFT or DCT and on > data modelling. What I am looking for is a way to compress the pixel data in > spatial not spectral domain because of latency aspects, processing > complexity, etc. Because of the sequential data transmission line by line a > block matching is also not possible in my opinion. The compression ratio is > not so important, factor 2:1 would be sufficient. What really matters is the > real time capability. The algorithm should be pipelineable and fast. The > memory requirements should not exceed 1 kb. > What "standard" compression schemes would you recommend? JPEG supports lossless encoding that can fit (at least roughly) within the constraints you've imposed. It uses linear prediction of the current pixel based on one or more previous pixels. The difference between the prediction and the actual value is what's then encoded. The difference is encoded in two parts: the number of bits needed for the difference and the difference itself. The number of bits is Huffman encoded, but the remainder is not. This has a number of advantages. First and foremost, it can be done based on only the current scan line or (depending on the predictor you choose) only one scan line plus one pixel. In the latter case, you need to (minutely) modify the model you've outlined though -- instead of reading, compressing, and discarding an entire scan line, then starting the next, you always retain one scan line worth of data. 
As you process pixel X of scan line Y, you're storing pixels 0 through X+1 of the current scan line plus pixels X-1 through N (=line width) of the previous scan line. Another nice point is that the math involved is always simple -- the most complex case is one addition, one subtraction and a one-bit right shift. > Are there > potentialities for a non-standard "own solution"? Yes, almost certainly. Lossless JPEG is open to considerable improvement. Just for an obvious example, it's pretty easy to predict the current pixel based on five neighboring pixels instead of three. At least in theory, this should improve prediction accuracy by close to 40% -- thus reducing the number of bits needed to encode the difference between the predicted and actual values. At a guess, you won't really see 40% improvement, but you'll still see a little improvement. In the JPEG 2000 standard, they added JPEG LS, which is certainly an improvement, but if memory serves, it requires storing roughly two full scan lines instead of roughly one scan line. OTOH, it would be pretty easy to steal some of the ideas in JPEG LS without using the parts that require more storage -- some things like its handling of runs are mostly a matter of encoding that shouldn't really require much extra storage. The final question, however, is whether any of these is likely to give you 2:1 compression. That'll depend on your input data -- for typical photographs, I doubt that'll happen most of the time. For things like line art, faxes, etc., you can probably do quite a bit better than 2:1 on a fairly regular basis. If you're willing to settle for near-lossless compression, you can improve ratios a bit further. -- Later, Jerry.Article: 93489
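The predictor and two-part difference coding Jerry describes can be prototyped in a few lines before mapping them to logic. This is my own Python sketch, not JPEG-conformant (the function names and first-pixel handling are simplifications): it uses lossless-JPEG predictor 6, `above + ((left - above_left) >> 1)`, which is exactly one addition, one subtraction and a one-bit right shift, and shows that a scan line round-trips exactly through the differences.

```python
def predict(left, above, above_left):
    # Lossless-JPEG predictor 6: one add, one subtract, one right shift.
    return above + ((left - above_left) >> 1)

def encode_line(cur, prev):
    """Prediction differences for one scan line, given the previous line.
    The first pixel is predicted from `above` alone (a simplification
    of the standard's edge handling)."""
    diffs = []
    for x, px in enumerate(cur):
        p = prev[0] if x == 0 else predict(cur[x - 1], prev[x], prev[x - 1])
        diffs.append(px - p)
    return diffs

def decode_line(diffs, prev):
    """Inverse of encode_line: rebuild the scan line from its differences."""
    cur = []
    for x, d in enumerate(diffs):
        p = prev[0] if x == 0 else predict(cur[x - 1], prev[x], prev[x - 1])
        cur.append(p + d)
    return cur

def category(diff):
    # JPEG's SSSS value: the Huffman-coded part is just the bit count
    # of the difference; the remaining bits are sent uncoded.
    return abs(diff).bit_length()
```

On smooth data the differences are small, so most pixels land in low SSSS categories and need few raw bits; that, not the prediction arithmetic, is where the compression comes from.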
> You are shitting me. Do you think you can implement, say, an ethernet > stack in assembly code faster than someone can do the same job in C? > By "implement," I don't mean "write a bunch of code." Rather, I mean, > "debug and verify." Faster than C - yes. Assembly - which one do you mean? There are worlds of difference between various assembly languages. I personally use VPA (Virtual Processor Assembly) which I have evolved over the years, originating from 68020 assembly. Today, it may be more appropriate to call it a compiler - whatever you call it, it makes me a lot more efficient than those who try to write the same things in C. About a year ago I did a tcp/ip implementation; it took me < 6 months to do it, starting with ppp all the way to tcp through ip, clean uncompromised code - and including the DNS service, ftp, smtp, various utilities etc. About 150 source files, somewhat less than 2M of text - debugged and everything. It does take advantage of the environment it is running in, of course, which is written using the same tools or their predecessors (all this on a PPC platform). Does that answer your question? > > The purpose of high level languages (for logic generation or > > writing software) is to allow cheaper programming, > > I thought it was to allow skilled engineers to accomplish more, to > allow re-use and to ease verification. I stand corrected. This is what most people believe - wrongly. > > No translating tool can replace direct access > > to the programmed hardware. > > True, assuming more time is available to do the job. Not necessarily. It does take less time when I am doing the job; sometimes it has taken me less time to develop some tooling and use it to do the job. Dimiter ------------------------------------------------------ Dimiter Popoff Transgalactic Instruments http://www.tgi-sci.com ------------------------------------------------------Article: 93490
Phil Hays wrote: > On Wed, 21 Dec 2005 22:44:22 GMT, "John_H" <johnhandwork@mail.com> > wrote: > >> My opinion is that the process of mapping separate from place & route is >> archaic (to use kind words) and that spreading the logic out so each slice >> has just one LUT is *not* the way to alleviate the problem. > > Yes. Xilinx has added "map -timing" to do just that. Mapping logic > is now done with placement, and the result works rather better. <Shrug> I've seen -timing break a design's timing badly. (This was a design that for some reason P&R'd a lot better at an effort level of 'medium' rather than 'high'.) JeremyArticle: 93491
Melanie Nasic wrote: > Hello community, > > I am thinking about implementing a real-time compression scheme on an FPGA > working at about 500 MHz. Facing the fact that there is no "universal > compression" algorithm that can compress data regardless of its structure > and statistics I assume compressing grayscale image data. The image data is > delivered line-wise, meaning that one horizontal line is processed, then > the next one, a.s.o. > Because of the high data rate I cannot spend much time on DFT or DCT and on > data modelling. What I am looking for is a way to compress the pixel data in > spatial not spectral domain because of latency aspects, processing > complexity, etc. Because of the sequential data transmission line by line a > block matching is also not possible in my opinion. The compression ratio is > not so important, factor 2:1 would be sufficient. What really matters is the > real time capability. The algorithm should be pipelineable and fast. The > memory requirements should not exceed 1 kb. > What "standard" compression schemes would you recommend? Though it's only rarely used, there's a lossless version of JPEG encoding. It's almost completely different from normal JPEG encoding. This can be done within your constraints, but would be improved if you can relax them minutely. Instead of only ever using the current scan line, you can improve things if you're willing to place the limit at only ever storing one scan line. The difference is that when you're in the middle of a scan line (for example) you're storing the second half of the previous scan line, and the first half of the current scan line, rather than having half of the buffer sitting empty. If you're storing the data in normal RAM, this makes little real difference -- the data from the previous scan line will remain in memory until you overwrite it, so it's only really a question of whether you use it or ignore it. > Are there potentialities for a non-standard "own solution"? Yes. 
In the JPEG 2000 standard, they added JPEG LS, which is another lossless encoder. A full-blown JPEG LS encoder needs to store roughly two full scan lines if memory serves, which is outside your constraints. Nonetheless, if you're not worried about following the standard, you could create more or less a hybrid between lossless JPEG and JPEG LS, that would incorporate some advantages of the latter without the increased storage requirements. I suspect you could improve the prediction a bit as well. In essence, you're creating a (rather crude) low-pass filter by averaging a number of pixels together. That's equivalent to a FIR with all the coefficients set to one. I haven't put it to the test, but I'd guess that by turning it into a full-blown FIR with carefully selected coefficients (and possibly using more of the data you have in the buffer anyway) you could probably improve the predictions. Better predictions mean smaller errors, and tighter compression.Article: 93492
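Jerry's idea of treating the predictor as a small FIR is cheap to evaluate in software before committing coefficients to hardware. The sketch below is my own illustration (the ramp image and both predictors are made up for the comparison): it measures the mean absolute prediction error of the simple two-tap average against the three-tap planar predictor `left + above - above_left`. On a linear gradient the planar predictor is exact, which is one concrete reason adding taps can pay off.

```python
def mae(img, predictor):
    """Mean absolute prediction error over all interior pixels."""
    errs = []
    for y in range(1, len(img)):
        for x in range(1, len(img[0])):
            p = predictor(img[y][x - 1], img[y - 1][x], img[y - 1][x - 1])
            errs.append(abs(img[y][x] - p))
    return sum(errs) / len(errs)

def avg2(left, above, above_left):
    return (left + above) // 2           # two-tap average predictor

def plane(left, above, above_left):
    return left + above - above_left     # three-tap planar predictor

# Synthetic smooth ramp: pixel value = 2*x + 3*y.
ramp = [[2 * x + 3 * y for x in range(32)] for y in range(32)]
```

Real images aren't planar, so the planar predictor won't be exact in practice, but the same harness lets you grade candidate coefficient sets against representative image data before freezing the hardware datapath.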
<peter.halford@alarmip.com> wrote in message news:1135270697.315589.100480@g44g2000cwa.googlegroups.com... > Dear Group, > > I am designing an LCD controller, straight VGA using a 6.5" TFT (60Hz > and a 25MHz dot clock) with a 32MBit SDRAM frame buffer (using two > Xilinx block RAMs as alternate line buffers). I'm not so sure about LCDs, but one thing I found surprising with a VGA display was that the HSYNC and VSYNC were not sufficient to keep the display active... I needed to drive the blanking controls as well. (You might think that it just wouldn't blank, but it appears as if the SYNC signals are ignored if blanking is not active.) Probably not your problem, but if you're at a complete loss for something to look at, you might give it a try. Good luck. Let us know what it turns out to be.Article: 93493
> However, all I can think is that somehow the BlackFin is resetting the > array. Unfortunately, I didn't write any of the code nor UBOOT, so I > guess tomorrow (they're very small legs and too much CAIR) I will look > at all of the FPGA control signals... Could there be some sort of COP (watchdog) circuit which resets things periodically (because it does not get some service it expects, that is)? Dimiter ------------------------------------------------------ Dimiter Popoff Transgalactic Instruments http://www.tgi-sci.com ------------------------------------------------------Article: 93494
John_H wrote: > [...] > The tools *can* do so much more; the evolutionary development of the tools > has hampered true progress. The silicon is *amazing* in what can be > accomplished. Agreed completely. > "Pushing the rope" to improve results with synthesis is bad > enough. Having place & route software that can't understand what it takes > to produce good results every time is sad. I can pass with total timing > compliance then lose by 1.5 nanoseconds after changing non-critical logic. > I prefer not to curse my tools. Agreed completely (including Ray's points about newer versions not being necessarily better than older ones). To add a few datapoints behind John's comment about losing 1.5 nanoseconds after changing non-critical logic, we have been fighting this type of thing at least every month (often more frequently) for three years now. Always using the latest software version available at the time, in all of our larger and/or higher speed designs (2V3000, 2VP7, 2VP40, and now LX25 designs), we have come to expect that it will take many runs to stumble upon one that meets timing when we make truly trivial changes (I'm talking about things that would make the term "non-critical changes" look like massive changes... stuff like changing a version ID or fixing an inversion problem going to a LUT). I am not kidding or exaggerating in any way when I say that it's an event worthy of commenting on to coworkers and minor celebration when a change is made to one of the above designs and it meets timing the first or second try. That just should not be the case. BTW, the designs were done by different people with different styles. The only thing in common is that they are all hierarchical, and they all tend to use up a fair amount of the device (LUT utilization between 75 and 91%). MarcArticle: 93495
Which one should I choose? From the aspects below: performance, money, resources I can use, and so on. Thanks!Article: 93496
I want to learn more about this!Article: 93497
"bjzhangwn" <bjzhangwn@163.com> wrote in message news:1135317395.120472.211170@o13g2000cwo.googlegroups.com... >I want to learn more about this! > look at uclinux uclinux on microblaze http://www.itee.uq.edu.au/~jwilliams/mblaze-uclinux uclinux for nios2 http://www.enseirb.fr/~kadionik/embedded/uclinux/nios-uclinux.html http://www.enseirb.fr/~kadionik/embedded/uclinux/HOWTO_compile_uClinux_for_NIOS.html http://linuxdevices.com/news/NS9386138954.html Note I don't know if that is the correct link for Nios as have only used uclinux on microblaze. Can also run full linux on v2pro and v4FX with ppc hard cores. AlexArticle: 93498
Hi Jerry, thanks for your response(s). Sounds quite promising. Do you know something about hardware implementation of the compression schemes you propose? Are there already VHDL examples available or at least C reference models? Regards, Melanie "Jerry Coffin" <jerry.coffin@gmail.com> schrieb im Newsbeitrag news:1135292121.200476.236850@g43g2000cwa.googlegroups.com... > Melanie Nasic wrote: >> Hello community, >> >> I am thinking about implementing a real-time compression scheme on an >> FPGA >> working at about 500 MHz. Facing the fact that there is no "universal >> compression" algorithm that can compress data regardless of its structure >> and statistics I assume compressing grayscale image data. The image data >> is >> delivered line-wise, meaning that one horizontal line is processed, then >> the next one, a.s.o. >> Because of the high data rate I cannot spend much time on DFT or DCT and >> on >> data modelling. What I am looking for is a way to compress the pixel data >> in >> spatial not spectral domain because of latency aspects, processing >> complexity, etc. Because of the sequential data transmission line by line >> a >> block matching is also not possible in my opinion. The compression ratio >> is >> not so important, factor 2:1 would be sufficient. What really matters is >> the >> real time capability. The algorithm should be pipelineable and fast. The >> memory requirements should not exceed 1 kb. >> What "standard" compression schemes would you recommend? > > JPEG supports lossless encoding that can fit (at least roughly) within > the constraints you've imposed. It uses linear prediction of the > current pixel based on one or more previous pixels. The difference > between the prediction and the actual value is what's then encoded. The > difference is encoded in two parts: the number of bits needed for the > difference and the difference itself. The number of bits is Huffman > encoded, but the remainder is not. > > This has a number of advantages. 
First and foremost, it can be done > based on only the current scan line or (depending on the predictor you > choose) only one scan line plus one pixel. In the latter case, you need > to (minutely) modify the model you've outlined though -- instead of > reading, compressing, and discarding an entire scan line, then starting > the next, you always retain one scan line worth of data. As you process > pixel X of scan line Y, you're storing pixels 0 through X+1 of the > current scan line plus pixels X-1 through N (=line width) of the > previous scan line. > > Another nice point is that the math involved is always simple -- the > most complex case is one addition, one subtraction and a one-bit right > shift. > >> Are there >> potentialities for a non-standard "own solution"? > > Yes, almost certainly. Lossless JPEG is open to considerable > improvement. Just for an obvious example, it's pretty easy to predict > the current pixel based on five neighboring pixels instead of three. At > least in theory, this should improve prediction accuracy by close to > 40% -- thus reducing the number of bits needed to encode the difference > between the predicted and actual values. At a guess, you won't really > see 40% improvement, but you'll still see a little improvement. > > In the JPEG 2000 standard, they added JPEG LS, which is certainly an > improvement, but if memory serves, it requires storing roughly two full > scan lines instead of roughly one scan line. OTOH, it would be pretty > easy to steal some of the ideas in JPEG LS without using the parts that > require more storage -- some things like its handling of runs are > mostly a matter of encoding that shouldn't really require much extra > storage. > > The final question, however, is whether any of these is likely to give > you 2:1 compression. That'll depend on your input data -- for typical > photographs, I doubt that'll happen most of the time. 
For things like > line art, faxes, etc., you can probably do quite a bit better than 2:1 > on a fairly regular basis. If you're willing to settle for > near-lossless compression, you can improve ratios a bit further. > > -- > Later, > Jerry. >Article: 93499
"Alex Gibson" <news@alxx.org> schrieb im Newsbeitrag news:411umcF1cgt5hU1@individual.net... > > "bjzhangwn" <bjzhangwn@163.com> wrote in message > news:1135317395.120472.211170@o13g2000cwo.googlegroups.com... >>I want to learn more about this! >> > > look at uclinux > > uclinux on microblaze > http://www.itee.uq.edu.au/~jwilliams/mblaze-uclinux > > uclinux for nios2 > http://www.enseirb.fr/~kadionik/embedded/uclinux/nios-uclinux.html > http://www.enseirb.fr/~kadionik/embedded/uclinux/HOWTO_compile_uClinux_for_NIOS.html > > http://linuxdevices.com/news/NS9386138954.html > > Note I don't know if that is the correct link for Nios as have only used > uclinux on microblaze. > > Can also run full linux on v2pro and v4FX with ppc hard cores. > > Alex > Alex, you as many others are saying that one 'can run full linux' on V2P/V4, and well, I believe that and I do know it works, but there seems to be only one way to that goal, namely MontaVista. Trying full linux on Virtex using EDK and an opensource linux kernel seems to be a real pain, and yes, I have studied all the info about this on the web; all references go back to MontaVista, or V2PDK. So maybe you know where to get a HOWTO for EDK + opensource ppc-linux, all done in one day? I know that Denx did a lot of work for V2Pro ppc linux, but as Xilinx did not talk to them they got really pissed off and stopped their efforts for Virtex PPC linux support. And MontaVista is a very strange type of entity; I can not understand what they are selling or offering or what it costs :( Antti