<nmm1@cam.ac.uk> wrote:
+---------------
| For example, there are people starting to think about genuinely
| unreliable computation, of the sort where you just have to live
| with ALL paths being unreliable.  After all, we all use such a
| computer every day ....
+---------------

Yes, there are such people, those in the Computational Complexity
branch of Theoretical Computer Science who are working on bounded-error
probabilistic classes, both classical & in quantum computing:

    http://en.wikipedia.org/wiki/Bounded-error_probabilistic_polynomial
    Bounded-error probabilistic polynomial
        In computational complexity theory, bounded-error probabilistic
        polynomial time (BPP) is the class of decision problems solvable
        by a probabilistic Turing machine in polynomial time, with an
        error probability of at most 1/3 for all instances. ...

    http://en.wikipedia.org/wiki/BQP
    BQP
        In computational complexity theory BQP (bounded error quantum
        polynomial time) is the class of decision problems solvable by
        a quantum computer in polynomial time, with an error probability
        of at most 1/3 for all instances.  It is the quantum analogue of
        the complexity class BPP. ...

Though the math seems to be way ahead of the hardware currently...  ;-}


-Rob

-----
Rob Warnock                     <rpw3@rpw3.org>
627 26th Avenue                 <http://rpw3.org/>
San Mateo, CA 94403


Article: 152601
In article <rqSdnb7HzcZh5OnTnZ2dnUVZ_umdnZ2d@speakeasy.net>,
Rob Warnock <rpw3@rpw3.org> wrote:
>
>+---------------
>| For example, there are people starting to think about genuinely
>| unreliable computation, of the sort where you just have to live
>| with ALL paths being unreliable.  After all, we all use such a
>| computer every day ....
>+---------------
>
>Yes, there are such people, those in the Computational Complexity
>branch of Theoretical Computer Science who are working on bounded-error
>probabilistic classes, both classical & in quantum computing:

Please don't associate what I say with the auto-eroticism of those
lunatics.  While there may be some that are better than that, I have
seen little evidence of it in their papers.  The work that I am
referring to almost entirely either predates computer scientists or
is being done a long way away from that area.

>    http://en.wikipedia.org/wiki/Bounded-error_probabilistic_polynomial
>    Bounded-error probabilistic polynomial
>        In computational complexity theory, bounded-error probabilistic
>        polynomial time (BPP) is the class of decision problems solvable
>        by a probabilistic Turing machine in polynomial time, with an
>        error probability of at most 1/3 for all instances.

The fundamental mathematical defects of that formulation are left
as an exercise for the reader.  Hint: if you are a decent mathematical
probabilist, they will jump out at you.

>    http://en.wikipedia.org/wiki/BQP
>    BQP
>        In computational complexity theory BQP (bounded error quantum
>        polynomial time) is the class of decision problems solvable by
>        a quantum computer in polynomial time, with an error probability
>        of at most 1/3 for all instances.  It is the quantum analogue of
>        the complexity class BPP.
>
>Though the math seems to be way ahead of the hardware currently...  ;-}

And the mathematics is itself singularly unimpressive.


Regards,
Nick Maclaren.


Article: 152602
<rupertlssmith@googlemail.com> wrote in message
news:da9aad66-de65-4500-867e-702b2a23e665@m5g2000vbm.googlegroups.com...
> Hi,
>
> I'm looking for a Xilinx Virtex 6 based dev. board (with PCIe and SFP
> connectors for 10G Ethernet).  Other than Hitech Global, what other
> suppliers are there?  Your suggestions are much appreciated.  Thanks.

Avnet?


Article: 152603
Synthesis optimization people seem to like registers at I/O.  In
particular, the Xilinx manual says:

   "The synthesis tools will not optimize across the Partition
   interface. If an asynchronous timing critical path crosses Partition
   boundaries, logic optimizations will not occur across the Partition
   boundary. To mitigate this issue, add a register to the asynchronous
   signal at the Partition boundary."

I like registers all over a design.  Still, they talk as if it were no
problem to inject a register at an arbitrary place.


Article: 152604
In article <4E74E439.4000107@bitblocks.com>,
Bakul Shah <usenet@bitblocks.com> wrote:
>
>> Despite a lot of effort over the years, nobody has ever thought of
>> a good way of abstracting parallelism in programming languages.
>
>CSP?

That is a model for describing parallelism of the message-passing
variety (including the use of Von Neumann shared data), and is in
no reasonable sense an abstraction for use in programming languages.
BSP is.  Unfortunately, it is not a good one, though I teach and
recommend that people consider it :-(


Regards,
Nick Maclaren.


Article: 152605
On Sep 17, 10:44 am, valtih1978 <d...@not.email.me> wrote:
> Synthesis optimization people seem to like registers at I/O.  In
> particular, the Xilinx manual says:
>
>    "The synthesis tools will not optimize across the Partition
>    interface. If an asynchronous timing critical path crosses Partition
>    boundaries, logic optimizations will not occur across the Partition
>    boundary. To mitigate this issue, add a register to the asynchronous
>    signal at the Partition boundary."
>
> I like registers all over a design.  Still, they talk as if it were no
> problem to inject a register at an arbitrary place.

In order to have reliable (deterministic and short) timing at the IO
boundaries you need to have registers in the IO.

The other comment that you referred to is also a very good practice.
Registering an asynchronous input at the boundary will resolve the
asynchronous event to a single clock edge within the module
(metastability concerns aside), so that timing analysis can be done
correctly and so that all parts of the module will "see" the same
value.  Since this is an asynchronous signal and by its definition can
happen at any time, adding a register has no practical impact.

These are not absolute rules.  You are free to create your design in
any way that you see fit, but when the design isn't stable and
reliable you should remember these design tips.
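A minimal VHDL sketch of that practice (the entity and signal names are
invented for illustration; the Xilinx note only asks for one register at
the boundary, the second flip-flop here is just the usual extra guard
against metastability):

    library ieee;
    use ieee.std_logic_1164.all;

    -- Register an asynchronous input at the partition/module boundary,
    -- so everything inside the partition only ever sees a signal that
    -- is aligned to its own clock.
    entity boundary_sync is
        port (
            clk      : in  std_logic;
            async_in : in  std_logic;   -- asynchronous signal crossing the boundary
            sync_out : out std_logic    -- safe to use anywhere inside the partition
        );
    end entity;

    architecture rtl of boundary_sync is
        signal meta_ff : std_logic := '0';  -- first stage, may go metastable
        signal sync_ff : std_logic := '0';  -- second stage, assumed settled
    begin
        process (clk)
        begin
            if rising_edge(clk) then
                meta_ff <= async_in;
                sync_ff <= meta_ff;
            end if;
        end process;

        sync_out <= sync_ff;
    end architecture;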
Article: 152606

On 9/17/11 1:44 AM, nmm1@cam.ac.uk wrote:
> In article <270b4f6b-a8e8-4af0-bf4a-c36da1864692@u19g2000vbm.googlegroups.com>,
>
> Despite a lot of effort over the years, nobody has ever thought of
> a good way of abstracting parallelism in programming languages.

CSP?


Article: 152607
On 9/17/11 10:55 AM, nmm1@cam.ac.uk wrote:
> In article <4E74E439.4000107@bitblocks.com>,
> Bakul Shah <usenet@bitblocks.com> wrote:
>>
>>> Despite a lot of effort over the years, nobody has ever thought of
>>> a good way of abstracting parallelism in programming languages.
>>
>> CSP?
>
> That is a model for describing parallelism of the message-passing
> variety (including the use of Von Neumann shared data), and is in
> no reasonable sense an abstraction for use in programming languages.

I have not seen anything as elegant as CSP & Dijkstra's Guarded
commands and they have been around for 35+ years.  But perhaps we mean
different things?  I am talking about naturally parallel problems.
Here is an example (the first such problem I was given in an OS class
ages ago): S students, each has to read B books in any order, the
school library has C[i] copies of the ith book.  Model this with S
student processes and a librarian process!  As you can see this is an
allegory of a resource allocation problem.

It is easy to see how to parallelize an APL expression like
"F/(V1 G V2)", where scalar functions F & G take two args.
[In Scheme: (vector-fold F (vector-map G V1 V2))].  You'd have to know
the properties of F & G to do it right but potentially this can be
compiled to run on N parallel cores and these N pieces will have to
use message passing.  I would like to be able to express such
decomposition in the language itself.

So you will have to elaborate why and how CSP is not a reasonable
abstraction for parallelism.  Erlang, Occam & Go use it!  Go's
channels and `goroutines' are easy to use.


Article: 152608
In article <4E74F69C.5080009@bitblocks.com>,
Bakul Shah <usenet@bitblocks.com> wrote:
>>>
>>>> Despite a lot of effort over the years, nobody has ever thought of
>>>> a good way of abstracting parallelism in programming languages.
>>>
>>> CSP?
>>
>> That is a model for describing parallelism of the message-passing
>> variety (including the use of Von Neumann shared data), and is in
>> no reasonable sense an abstraction for use in programming languages.
>
>I have not seen anything as elegant as CSP & Dijkstra's
>Guarded commands and they have been around for 35+ years.

Well, measure theory is also extremely elegant, and has been around
for longer, but is not a usable abstraction for programming.

>But perhaps we mean different things? I am talking about
>naturally parallel problems. Here is an example (the first
>such problem I was given in an OS class ages ago): S students,
>each has to read B books in any order, the school library has
>C[i] copies of the ith book. Model this with S student
>processes and a librarian process! As you can see this is
>an allegory of a resource allocation problem.

Such problems are almost never interesting in practice, and very
often not in theory.  Programming is about mapping a mathematical
abstraction of an actual problem into an operational description
for a particular agent.

Perhaps the oldest and best established abstraction for programming
languages is procedures, but array (SIMD) notation and operations
are also ancient, and are inherently parallel.  However, 50 years
of experience demonstrates that they are good only for some kinds
of problem and types of agent.


Regards,
Nick Maclaren.


Article: 152609
You speak as if this were about primary I/O.  Yet partitions are blocks
of the same FPGA design; they are under the full control of the tools.
Do you mean that partitions are treated as designs that are completely
external to each other?

Thank you.


Article: 152610
I would like to know if there is a development kit and documentation
available to digitize VGA signals to create a digital video frame
using an FPGA.

I see a lot of FPGA applications that generate VGA output, but I am
looking for an application that can take VGA input.  This may require
external A/D converters.

Thanks.


Article: 152611
"Test01" <cpandya@yahoo.com> wrote in message news:b686ac06-9a75-4bd4-bab1-4f33d0636afd@w8g2000yqi.googlegroups.com... >I would like to know if there is a development kit and documentation > available to digitize the VGA siganls to create the a digital video > fram using FPGA. > > I see lot of fpga applications that generate VGA output but I am > looking for an application that can take VGA input. This may require > external A/D converters. > > Thanks. Check here: http://www.analog.com/en/analog-to-digital-converters/video-decoders/products/index.html http://focus.ti.com/paramsearch/docs/parametricsearch.tsp?family=analog&familyId=375&uiTemplateId=NODE_STRY_PGE_TArticle: 152612
On Sep 18, 10:57 am, "scrts" <mailsoc@[remove@here]gmail.com> wrote:
> "Test01" <cpan...@yahoo.com> wrote in message
> news:b686ac06-9a75-4bd4-bab1-4f33d0636afd@w8g2000yqi.googlegroups.com...
>
> > I would like to know if there is a development kit and documentation
> > available to digitize VGA signals to create a digital video frame
> > using an FPGA.
>
> > I see a lot of FPGA applications that generate VGA output, but I am
> > looking for an application that can take VGA input.  This may require
> > external A/D converters.
>
> > Thanks.
>
> Check here:
> http://www.analog.com/en/analog-to-digital-converters/video-decoders/...
> http://focus.ti.com/paramsearch/docs/parametricsearch.tsp?family=anal...

Thanks for the link.  More importantly, is there an FPGA development
board with these analog chips that I can use for development purposes?

Ideally I am looking for a board that can take in a 15 pin VGA connector
as input.  The analog RGB signals get converted into a 24 bit digital
output and fed to the FPGA, which has a DDR3 interface.


Article: 152613
On Sep 18, 1:06 pm, Test01 <cpan...@yahoo.com> wrote:
> On Sep 18, 10:57 am, "scrts" <mailsoc@[remove@here]gmail.com> wrote:
>
> > "Test01" <cpan...@yahoo.com> wrote in message
> > news:b686ac06-9a75-4bd4-bab1-4f33d0636afd@w8g2000yqi.googlegroups.com...
>
> > > I would like to know if there is a development kit and documentation
> > > available to digitize VGA signals to create a digital video frame
> > > using an FPGA.
>
> > > I see a lot of FPGA applications that generate VGA output, but I am
> > > looking for an application that can take VGA input.  This may require
> > > external A/D converters.
>
> > > Thanks.
>
> > Check here:
> > http://www.analog.com/en/analog-to-digital-converters/video-decoders/...
>
> Thanks for the link.  More importantly, is there an FPGA development
> board with these analog chips that I can use for development purposes?
>
> Ideally I am looking for a board that can take in a 15 pin VGA connector
> as input.  The analog RGB signals get converted into a 24 bit digital
> output and fed to the FPGA, which has a DDR3 interface.

There is hardware from Bitec/Altera at this link that takes in a 29 pin
DVI connector.  It seems to include analog RGB signal inputs as well, so
in that case the DVI connector is a superset of the VGA connector and
this particular board can digitize the VGA video.  In other words, I
should be able to use this board as a reference board.  Does that make
sense?

http://www.bitec.ltd.uk/hsmc_dvi_1080p_csc_c120.pdf

Thanks for your help.
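Whichever board ends up being used, the FPGA side of the capture path is
small once the video ADC/decoder hands over a pixel clock, sync signals,
a data enable and 24-bit RGB.  A minimal VHDL sketch of that front end
(all port names are invented for illustration; the frame buffer / DDR3
write side is left out):

    library ieee;
    use ieee.std_logic_1164.all;

    -- Register the decoder's 24-bit RGB output on the pixel clock,
    -- qualified by the data enable, before handing it to a write FIFO.
    entity vga_capture is
        port (
            pclk    : in  std_logic;                      -- pixel clock from the video ADC
            de      : in  std_logic;                      -- data enable (active video)
            vsync   : in  std_logic;
            rgb_in  : in  std_logic_vector(23 downto 0);  -- 3 x 8 bit from the ADC
            pix_out : out std_logic_vector(23 downto 0);  -- registered pixel to the FIFO
            pix_vld : out std_logic;
            sof_out : out std_logic                       -- start-of-frame marker
        );
    end entity;

    architecture rtl of vga_capture is
        signal vsync_d : std_logic := '0';
    begin
        process (pclk)
        begin
            if rising_edge(pclk) then
                vsync_d <= vsync;
                pix_out <= rgb_in;                 -- register in the IOB for clean timing
                pix_vld <= de;                     -- only pixels with de = '1' are valid
                sof_out <= vsync and not vsync_d;  -- vsync edge marks a new frame
                                                   -- (polarity depends on the video mode)
            end if;
        end process;
    end architecture;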
Article: 152614

Jon Elson wrote:

Hmmm, one additional tidbit.  Some boards reflowed at the same time
have been stored in a lab environment.  These boards in question were
stored in my basement for six months.  The lab env. boards show no
sign of the whiskers.  Conditions in my basement are not bad at all,
but it is likely more humid down there than in the lab.  So, I guess
this means don't store lead-free boards in humid conditions.

Jon


Article: 152615
Jon Elson <elson@pico-systems.com> wrote:

>Jon Elson wrote:
>
>Hmmm, one additional tidbit.  Some boards reflowed at the
>same time have been stored in a lab environment.  These boards
>in question were stored in my basement for six months.  The lab env. boards
>show no sign of the whiskers.  Conditions in my basement are
>not bad at all, but it is likely more humid down there than
>in the lab.  So, I guess this means don't store lead-free
>boards in humid conditions.

IMHO this is the wrong solution.  Actually it is not a solution at all.
You really should get in touch with someone who has experience in this
field in order to solve the problem at the root.

-- 
Failure does not prove something is impossible, failure simply
indicates you are not using the right tools...
nico@nctdevpuntnl (punt=.)
--------------------------------------------------------------


Article: 152616
On Sat, 17 Sep 2011 09:44:35 +0100, nmm1 wrote:

> Despite a lot of effort over the years, nobody has ever thought of a
> good way of abstracting parallelism in programming languages.

That's not really all that surprising though, is it?  Hardware that
exhibits programmable parallelism has taken many different forms over
the years, especially with many different scales of granularity of the
parallelisable sequential operations and inter-processor communications.

The entire issue of parallelism is essentially orthogonal to the
sequential Turing/von Neumann model of computation that is at the heart
of most programming languages.  It's not obvious (to me) that a single
language could reasonably describe a problem and have it map efficiently
across "classical" cross-bar shared memory systems (including barrel
processors), NUMA shared memory, distributed shared memory, clusters,
and clouds (the latter just an example of the dynamic resource count vs
known-at-compile-time axis), all of which incorporate both sequential
and vector (and GPU-style) resources.

Which is not to say that such a thing can't exist.  My expectation is
that it will wind up being something very functional in shape that
relaxes as many restrictions on order-of-execution as possible
(including order of argument evaluation), sitting on top of a dynamic
execution environment that can compile and re-compile code and shift it
around in the system to match the data that is observed at run-time.
That is: the language can't assume a Turing model, but rather a more
mathematical or declarative one.  The compiler has to choose where
sequential execution can be applied, and where that isn't appropriate.

Needless to say, we're not there yet, but I expect to see it in the next
dozen or so years.

Cheers,

-- 
Andrew


Article: 152617
In comp.arch.fpga Andrew Reilly <areilly---@bigpond.net.au> wrote:
> On Sat, 17 Sep 2011 09:44:35 +0100, nmm1 wrote:

>> Despite a lot of effort over the years, nobody has ever thought of a
>> good way of abstracting parallelism in programming languages.

> That's not really all that surprising though, is it?  Hardware that
> exhibits programmable parallelism has taken many different forms over the
> years, especially with many different scales of granularity of the
> parallelisable sequential operations and inter-processor communications.

Yes, but programs tend to follow the mathematics of matrix algebra.
A language that allowed for parallel processing of matrix operations,
independent of the underlying hardware, should help.

Note that both the PL/I and Fortran array assignment complicate
parallel processing.  In the case of overlap, where elements changed
in the destination can later be used in the source, PL/I requires that
the new value be used (as if processed sequentially), where Fortran
requires that the old value be used (a temporary array may be needed).
The Fortran FORALL conveniently doesn't help much.

A construct that allowed the compiler (and parallel processor) to do
the operations in any order, including a promise that no aliasing
occurs, and that no destination array elements are used in the source,
would, it seems to me, help.  Maybe even an assignment construct that
allowed for a group of assignments (presumably array assignments) to
be executed, allowing the compiler to do them in any order, again
guaranteeing no aliasing and no element reuse.

> The entire issue of parallelism is essentially orthogonal to the
> sequential Turing/von Neumann model of computation that is at the heart of
> most programming languages.  It's not obvious (to me) that a single
> language could reasonably describe a problem and have it map efficiently
> across "classical" cross-bar shared memory systems (including barrel
> processors), NUMA shared memory, distributed shared memory, clusters, and
> clouds (the latter just an example of the dynamic resource count vs
> known-at-compile-time axis), all of which incorporate both sequential and
> vector (and GPU-style) resources.

Well, part of it is that we aren't so good at thinking of problems
that way.  We (people) like to think things through one step at a
time, and von Neumann allows for that.

> Which is not to say that such a thing can't exist.  My expectation is
> that it will wind up being something very functional in shape that
> relaxes as many restrictions on order-of-execution as possible (including
> order of argument evaluation), sitting on top of a dynamic execution
> environment that can compile and re-compile code and shift it around in
> the system to match the data that is observed at run-time.

> That is: the language can't assume a Turing model, but rather a more
> mathematical or declarative one.  The compiler has to choose where
> sequential execution can be applied, and where that isn't appropriate.

> Needless to say, we're not there yet, but I expect to see it in the next
> dozen or so years.

In nuclear physics there is a constant describing the number of years
until viable nuclear fusion power plants can be built.  It is a
constant in that it seems to always be (about) that many years in the
future.  (I believe it is about 20 or 30 years, but I can't find a
reference.)

I wonder if this dozen years is also a constant.  People have been
working on parallel programming for years, yet usable programming
languages are always in the future.

-- glen


Article: 152618
> Avnet?

The Avnet-designed Virtex-6 LX130T Evaluation Kit is no longer
available.  The ML605 has PCIe and SFP.

www.xilinx.com/ml605

Bryan


Article: 152619
On 9/18/11 12:38 AM, nmm1@cam.ac.uk wrote:
> In article <4E74F69C.5080009@bitblocks.com>,
> Bakul Shah <usenet@bitblocks.com> wrote:
>>
>> I have not seen anything as elegant as CSP & Dijkstra's
>> Guarded commands and they have been around for 35+ years.
>
> Well, measure theory is also extremely elegant, and has been around
> for longer, but is not a usable abstraction for programming.

Your original statement was

> Despite a lot of effort over the years, nobody has ever thought of
> a good way of abstracting parallelism in programming languages.

I gave some counter examples but instead of responding to that, you
bring in some random assertion.  If you'd used Erlang or Go and had
actual criticisms that would at least make this discussion interesting.
Ah well.


Article: 152620
Nico Coesel wrote:
>
> IMHO this is the wrong solution. Actually it is not a solution at all.
> You really should get in touch with someone who has experience in this
> field in order to solve the problem at the root.
>
You have to understand this is a REALLY small business.  I have an old
Philips pick & place machine in my basement, and reflow the boards in a
toaster oven, with a thermocouple reading temperature of the boards.
I can't afford to have a $3000 a day consultant come in, and they'd
just laugh when they saw my equipment.

I could go to an all lead-free process, but these boards have already
been made with plain FR-4 and tin-lead finish.  As for getting tin/lead
parts, that is really difficult for a number of the components.

And, I STILL don't know why this ONE specific part is the ONLY one to
show this problem.  I use a bunch of other parts from Xilinx with no
whiskers, as well as from a dozen other manufacturers.

Jon


Article: 152621
On 9/18/2011 9:03 PM, glen herrmannsfeldt wrote:
> In comp.arch.fpga Andrew Reilly <areilly---@bigpond.net.au> wrote:
>> On Sat, 17 Sep 2011 09:44:35 +0100, nmm1 wrote:
>
>>> Despite a lot of effort over the years, nobody has ever thought of a
>>> good way of abstracting parallelism in programming languages.
>
>> That's not really all that surprising though, is it?  Hardware that
>> exhibits programmable parallelism has taken many different forms over the
>> years, especially with many different scales of granularity of the
>> parallelisable sequential operations and inter-processor communications.
>
> Yes, but programs tend to follow the mathematics of matrix algebra.
>

Spoken like someone who would know the difference between covariant
and contravariant and wouldn't blink at a Christoffel symbol.  This is
the "crystalline" memory structure that has so obsessed me.  All of
the most powerful mathematical disciplines would at one time have fit
pretty well into this paradigm.

As Andy Glew commented, after talking to some CFD people, maybe the
most natural structure is not objects like vectors and tensors, but
something far more general.  Trees (graphs) are important, and they
can express a much more general class of objects than multidimensional
arrays.  The generality has an enormous price, of course.

<snip>
>
> I wonder if this dozen years is also a constant.  People have been
> working on parallel programming for years, yet usable programming
> languages are always in the future.
>

At least one and possibly more generations will have to die off.  At
one time, science and technology progressed slowly enough that the
tenure of senior scientists and engineers was not an obvious obstacle
to progress.  Now it is.

Robert.


Article: 152622
On Sun, 18 Sep 2011 18:26:45 -0700, Bakul Shah wrote:

> On 9/18/11 12:38 AM, nmm1@cam.ac.uk wrote:
>> In article <4E74F69C.5080009@bitblocks.com>,
>> Bakul Shah <usenet@bitblocks.com> wrote:
>>>
>>> I have not seen anything as elegant as CSP & Dijkstra's Guarded
>>> commands and they have been around for 35+ years.
>>
>> Well, measure theory is also extremely elegant, and has been around for
>> longer, but is not a usable abstraction for programming.
>
> Your original statement was
> > Despite a lot of effort over the years, nobody has ever thought of a
> > good way of abstracting parallelism in programming languages.
>
> I gave some counter examples but instead of responding to that, you
> bring in some random assertion.  If you'd used Erlang or Go and had
> actual criticisms that would at least make this discussion interesting.
> Ah well.

I've read the language descriptions of Erlang and Go and think that
both are heading in the right direction, in terms of practical
coarse-grain parallelism, but I doubt that there is a compiler (for any
language) that can turn, say, a large GEMM or FFT problem expressed
entirely as independent agents or go-routines (or futures) into
cache-aware vector code that runs nicely on a small-ish number of
cores, if that's what you happen to have available.

It isn't really a question of language at all: as you say, erlang, go
and a few others already have quite reasonable syntaxes for independent
operation.  The problem is one of compilation competence: the ability
to decide/adapt/guess vast collections of nominally independent
operations into efficient arbitrarily sequential operations, rather
than putting each potentially-parallel operation into its own thread
and letting the operating system's scheduler muddle through it at
run-time.

Cheers,

-- 
Andrew


Article: 152623
On 16 Sep., 21:14, Jim <james.kn...@gmail.com> wrote:
> > Hi Jim,
> > using clock enables for multirate systems is a proper way, but you are
> > trying to do it in an unnecessarily complicated way.
> > It is much simpler.
> > You have a master clock, and a counter that provides the necessary
> > frequency division.
> > So far so good.
> > Now you only need to create an impulse for a single clock period.
> > This can be done like this:
> >
> >     clock_divider_counter_proc: process (reset, clock)
> >     begin
> >         if reset = '1' then
> >             count <= (others => '0');
> >             clock_divided_i <= '0';
> >         elsif rising_edge(clock) then
> >             count <= count + 1;
> >             -- clock_divided_i <= count(4); -- this would be too long
> >             if count = "11111" then
> >                 clock_enable_clock_divided <= '1';
> >             else
> >                 clock_enable_clock_divided <= '0';
> >             end if;
> >         end if;
> >     end process;
> > end behavioral;
> >
> > That's all you need.
> > When assigning to clock_enable_clock_divided the clock to output
> > delay and routing delay are sufficient to
> > keep the signal valid beyond the next rising clock edge.
> > (If that wouldn't work this way, pipelining data from one register to
> > the next wouldn't work either, but it does.)
> >
> > Have a nice synthesis
> >    Eilert
>
> Eilert,
>
> Thanks for the quick response.  After I posted, I read that FPGAs
> typically have 0 hold times so your approach seems great.  Thanks for
> the help.

Hi Jim,

well, it should be good, because it's been recommended in some XILINX
papers and used in their System Generator tool as the default method
for multirate systems. :-)

Have a nice synthesis
   Eilert
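To make the multirate idea concrete: the divided-rate logic keeps running
on the same master clock and simply qualifies its registers with the
enable.  A minimal sketch of such a consumer (the entity and signal names
are invented for the example; only clock_enable_clock_divided is taken
from the snippet above):

    library ieee;
    use ieee.std_logic_1164.all;

    -- Slow-rate logic driven by the single-cycle clock enable.  Everything
    -- is still clocked by the one master clock; the enable decides which
    -- edges "count" for the divided rate.
    entity slow_rate_logic is
        port (
            clock                      : in  std_logic;
            reset                      : in  std_logic;
            clock_enable_clock_divided : in  std_logic;   -- from the divider process
            slow_data_in               : in  std_logic_vector(7 downto 0);
            slow_data_out              : out std_logic_vector(7 downto 0)
        );
    end entity;

    architecture rtl of slow_rate_logic is
    begin
        slow_rate_proc: process (reset, clock)
        begin
            if reset = '1' then
                slow_data_out <= (others => '0');
            elsif rising_edge(clock) then
                if clock_enable_clock_divided = '1' then
                    -- this branch executes once every 32 master clock cycles
                    slow_data_out <= slow_data_in;
                end if;
            end if;
        end process;
    end architecture;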
Article: 152624

On 18 Sep., 20:06, Test01 <cpan...@yahoo.com> wrote:
> On Sep 18, 10:57 am, "scrts" <mailsoc@[remove@here]gmail.com> wrote:
>
> > "Test01" <cpan...@yahoo.com> wrote in message
> > news:b686ac06-9a75-4bd4-bab1-4f33d0636afd@w8g2000yqi.googlegroups.com...
>
> > > I would like to know if there is a development kit and documentation
> > > available to digitize VGA signals to create a digital video frame
> > > using an FPGA.
>
> > > I see a lot of FPGA applications that generate VGA output, but I am
> > > looking for an application that can take VGA input.  This may require
> > > external A/D converters.
>
> > > Thanks.
>
> > Check here:
> > http://www.analog.com/en/analog-to-digital-converters/video-decoders/...
> > http://focus.ti.com/paramsearch/docs/parametricsearch.tsp?family=anal...
>
> Thanks for the link.  More importantly, is there an FPGA development
> board with these analog chips that I can use for development purposes?
>
> Ideally I am looking for a board that can take in a 15 pin VGA connector
> as input.  The analog RGB signals get converted into a 24 bit digital
> output and fed to the FPGA, which has a DDR3 interface.

Hi,

there are some boards available from Xilinx, e.g. the ML506.  This
board provides the 15 pin VGA in and a DVI-I for output purposes
(Analog & Digital).  Of course, this board is not cheap.

Have a nice synthesis
  Eilert