Messages from 107250

Article: 107250
Subject: How to change the font size in text editor of modelsim
From: "fl" <rxjwg98@gmail.com>
Date: 25 Aug 2006 13:29:55 -0700
Hi,
I am using Modelsim 6.2e with the Xilinx WebPack 8.2. When I print
VHDL text from the Modelsim text editor, the font size is very big.
How can I change the font size on the printed page? I cannot find a
dialog box for that.

Thank you very much.


Article: 107251
Subject: Re: FPGA -> SATA?
From: Austin Lesea <austin@xilinx.com>
Date: Fri, 25 Aug 2006 13:30:13 -0700
Martin,

SATA worked, but not when it used spread-spectrum clocking.  There
were also some out-of-band signaling issues, where you needed a
transistor and a couple of resistors.

So it could be a point solution for a known drive that did not use
spread spectrum, but it was not able to deal with the broad spectrum
of SATA products.

Austin

Antti wrote:
> Martin E. wrote:
> 
>> I am looking for a way to read/write to a SATA drive from an FPGA.  I've
>> looked around.  Nothing seems to fit the bill.  Any ideas worth considering?
>>
>> Thanks,
>>
>> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>> Martin
> Hi Martin,
> 
> good question :)
> 
> ML300 and digilent XUP V2Pro both have SATA connectors on
> them but can not actually be used for SATA as of compliance issues.
> (OOB and CDR lock range mainly)
> 
> ASFAIK those issues are no longer present with V4FX that
> should be fully SATA compliant without external workarounds.
> So you can just get the ML410 and start working :)
> 
> sure you would still need the IP core from some vendor though
> 
> Antti
> 

Article: 107252
Subject: Re: fastest FPGA
From: fpga_toys@yahoo.com
Date: 25 Aug 2006 13:41:52 -0700

Tommy Thorn wrote:
> fpga_toys@yahoo.com wrote:
> > There are other posters that post from multiple email addresses, often
> > work/home, some with different username/handles at the different sites.
> > Not all export their fully legal name for both.
>
> I have multiple accounts myself, but I list my name in all. Frankly I
> find it childish to make up new names for yourself. Always makes me
> wonder what you're trying to hide.

Clearly not hiding ... I've been clear, using both handles, that I'm
also the primary developer for FpgaC. As for the choice of handle,
Totally_Lost has always been useful in some threads.

Some posters here state things strongly on the reputation of their
name, knowing that few will question their position simply because of
that reputation, and will accept whatever crap they say as gospel.

Totally_Lost has exactly the opposite effect: no matter what position
I take, somebody will step up to the plate and, with strong moral
authority, refute it, assuming I'm clueless and lost.

In some discussions that side effect of the choice of handle is useful;
in others it would detract from the quality of the discussion by
prompting unnecessary flame wars.

> As for your use of handles elsewhere, why exactly would we know or
> care?

That should always be true, and we should never see shit heads jumping
on a poster just because they think the poster is helpless.

On the other hand, we have human nature at its worst ....


Article: 107253
Subject: Re: Style of coding complex logic (particularly state machines)
From: "KJ" <Kevin.Jennings@Unisys.com>
Date: 25 Aug 2006 13:43:11 -0700
Rather than posting code, I'll refer to yours since it is roughly along
the lines of what I do.  Instead I'll hope that my explanation is clear
enough that one can follow my reasoning (whether you agree or disagree
with it) without any more than occasional snippets of code.

First off, I don't make any religious distinction between 'state
machine' signals and 'output' signals, so I don't feel compelled to
have a separate process for outputs and might choose to simply combine
them into a single process.  The advantage (IMO): generally less code,
and somewhat more readable and maintainable, since in many cases it is
much easier to follow logic that says "if x then go to this state and
set this output to this value", end of story.

Having said that though, I do tend to have multiple clocked processes.
I base what goes into each process on the somewhat fuzzy definition of
what things are in some sense 'related'.  Things are 'related', to me,
if implementing them in separate processes would mean replicating code.
For example (see the sketch below), if I have three signals A, B, C
that are all of the form "if (x = '1') then.... else.... end if;" then
I would most likely have A, B, C in a single process.  Of course A, B,
C, being different, would have some additional logic associated
uniquely with them, so within the overall "if (x = '1')...else...end
if;" statement there would be additional logic ('if', 'case', whatever)
that goes into defining them.
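
A rough sketch of that kind of grouping (the names A, B, C, D_in, x
and Clock are just placeholders for illustration, not from any real
design):

  related_regs : PROCESS(Clock)
  BEGIN
    IF Clock'EVENT AND Clock = '1' THEN
      IF (x = '1') THEN
        A <= D_in;         -- each signal has its own logic ...
        B <= A AND C;
        C <= NOT C;
      ELSE                 -- ... but all share the same if/else skeleton,
        A <= '0';          -- so keeping them in one process avoids
        B <= '0';          -- replicating that skeleton
        C <= '0';
      END IF;
    END IF;
  END PROCESS related_regs;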

Outputs that depend on the 'next' state will tend to get implemented
along with the state machine, for the simple reason that they meet the
'related' criterion.  Outputs that depend on the current state will
tend to get implemented elsewhere, because they are not 'related'.
Again, no heartburn here, because I'm being pragmatic rather than
dogmatic about source code positioning; I let the relationships drive
how it appears.  This tends to produce more robust code (IMO), since
there is less duplicated logic that will, over time, start to diverge
because something changed 'up there' but someone forgot to change it
'down there'.  By physically grouping related things, it is easier to
see the implications of a change I'm contemplating on other related
signals, and whether a relationship should be maintained or severed.

I then try to balance that out with the again somewhat fuzzy term of
'readability'.  A single process of 1000 lines of anything to me is too
long, I aim for it to fit on a screen....maybe one with somewhat high
resolution but that's the basic idea.  Scrolling back and forth while
you're trying to understand code is not productive and is disruptive I
think.

Another criterion I use for whether things should be together in a
single process is the number of signals going in and out of that
process.  I happen to really like the Modelsim 'Dataflow' window and
how it integrates with the source and wave windows so that as I'm
debugging I can immediately see the inputs that go into producing the
one signal that I'm moseying through in order to find the root cause of
whatever it is I'm debugging.  The single monolithic process that has
100 inputs and 100 outputs will show up as just a large block with all
those I/O when I click on it.  But if the equation is simply A <= B AND
C, and is implemented in a 'screen sized' process, then the dataflow
window shows me that A depends on B and C (and possibly a few other
inputs that happen to be in that process coincidentally, because other
signals that use them were deemed 'related'), and it jumps me right to
the correct lines of code that implement the logic (because that
process fits on a screen), where I immediately see that A depends only
on B and C.  You lose all of that as you put more and more things into
a process.

If either the 'lines that fit on the screen' test or the 'number of
signals in and out' test seems to be getting out of hand (again, the
fuzzy definition) I'll revisit just how 'related' these things really
are.  Signals are 'more' related if separating them would mean more
replicated code.  An example here would be simply a process
with a bunch of signals that are all clocked, but they all share a
common clock enable so they are of the form "if (Clock_Enable = '1')
then....end if;" so those signals are 'related' by my definition, they
do share the "if (Clock_Enable = '1') then....end if;" construct.  But
if that's about it then I would have no heartburn about making two (or
more) processes replicating the "if (Clock_Enable = '1') then....end
if;" construct to appease the 'fitting on a screen' and 'number of I/O'
tests.

I feel free to violate the 'screen size' rule in favor of the 'related'
rule if the situation dictates and have the multi-screen process.

In spite of the multiple clocked processes, I consider myself brethren
of the 'one process' state machine group, because my multiple clocked
processes are just one virtual clocked process; they are not a
combinatorial 'next state' process feeding into a clocked process.

Combinatorial logic is implemented using concurrent statements outside
the process.  When implemented inside the process I have to think too
hard and scroll back to remember if the usage of variable 'x' in this
particular case is the input or the output of the flop.  Call me lazy
on this one if you want.
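
For instance (signal names invented here), a piece of combinatorial
logic written as a concurrent statement outside any process:

  Y <= (A AND B) WHEN Sel = '1' ELSE C;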

I only use variables like C language macros.  In other words, if I want
a shorthand way of referring to a hunk of logic, I'll define the
equation for the variable (almost always right at the very top of the
process) and then use it wherever....and that would only be because for
some reason it wasn't looking right as a combinatorial concurrent
statement.  Variables
when added to the Modelsim wave window do not show any of the signal
history, signals do not have that limitation.  If it's a 'simple'
function that is being implemented by the variable then this is not a
big deal, if the function being implemented is rather tricky then being
able to display the history can be very important.  If I don't, then I
have to restart the simulation adding the variables to the wave window
before I say 'run'....wasted time...and if those variables then lead me
back to another entity with signals and different variables, the
signals I can wave, the variables...well, restart the sim again.

The drawback of signals is that they take longer to simulate...wasted
time too.  I'm trying to resurrect the test code I had comparing the
use of variables versus signals, but I seem to remember about a 10% hit
for signals.  I still use signals, because just one blown simulation
that needs to be restarted just to get a variable's history can more
than compensate for that 10%...which, for someone picking up somebody
else's code, can easily happen, since they aren't familiar enough with
the code to 'know' which variables to wave....in other words,
'supportability'.  I try to give the poor shmuck who has to pick up my
code all the help I can...even if it means they're sitting waiting for
an extra 10% ;)

The variable people have a definite point about simulation time, but
there is really no good data to support the overall debug cycle time
being in any way better using variables.  They seem to imply that they
can run 10% more test cases, but it is less than that if they were to
consider the down sides and the probabilities of them occurring (see
above about having to restart...or extra time pondering what they think
the value of the variable is in their head since they can't wave it
without restarting).  They still might come out ahead using variables
(and I might too if I did that, one day I might, they do have a point).

I rarely (veeeeeeeery rarely) use combinatorial processes.  In fact, I
can't remember the last time I did but I'm pretty sure at some point I
did but even there I'm pretty sure that the sensitivity list consisted
of only one or two signals.

I never have sensitivity list issues (see above paragraph).

I never have combinatorial latches (ditto).

I never use async resets with the exception of the flip flop that
receives the external reset signal that is the start of a shift chain
for developing my internal design reset.
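
A rough sketch of that kind of reset chain (signal names invented here,
and the chain length is arbitrary): only the first flip-flop sees the
asynchronous external reset; the rest of the chain is clocked normally,
so the internal reset asserts and deasserts synchronously.

  -- assumes: SIGNAL rst_shift : STD_LOGIC_VECTOR(2 DOWNTO 0);

  -- first flop catches the asynchronous external reset
  rst_catch : PROCESS(Clock, Ext_Reset)
  BEGIN
    IF Ext_Reset = '1' THEN
      rst_shift(0) <= '1';
    ELSIF Clock'EVENT AND Clock = '1' THEN
      rst_shift(0) <= '0';
    END IF;
  END PROCESS rst_catch;

  -- the rest of the shift chain is purely synchronous
  rst_chain : PROCESS(Clock)
  BEGIN
    IF Clock'EVENT AND Clock = '1' THEN
      rst_shift(2 DOWNTO 1) <= rst_shift(1 DOWNTO 0);
    END IF;
  END PROCESS rst_chain;

  Int_Reset <= rst_shift(2);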

I never have issues with some clocked signals getting cleared and
others not or going to unexpected states (see above paragraph).  I have
however fixed several designs that did use asynchronous resets
inappropriately both on a board and within programmable logic.

I don't recall ever having to fix reset issues in others' designs when
synchronous resets were used...hmm, well, maybe I've just lived in a
narrow design world.

Even in a gated clock design I have not run across the need for the
async reset anywhere other than that first flip flop previously
mentioned.  Go figure.

I prefer executable code over comments (but I certainly do appreciate
the comments).

I use the 'time' data type in synthesizable code.  No seriously, I do
and for very good reason....you know, the specification we all run into
at some point that says that signal 'x' must be asserted for 2 us...and
let's see my clock is 20 ns, no problem, figure out the proper count
values and go on....then two years down the road version 2.0 with the
speedup, now we can run with a 15 ns clock....and now you have a 1.5 us
pulse....DOH!!...I don't have that problem (anymore) because I use type
'time'....shamelessly leaving a cliff hanger on this one for those that
haven't figured out how I use 'time' types in synthesizable code.
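
(A rough sketch of one way this is commonly done, with constant names
invented here; it is not necessarily the exact trick being hinted at.
VHDL defines the division of two time values as an integer, so the
count rescales automatically when the clock period changes:)

  constant CLK_PERIOD  : time    := 20 ns;   -- becomes 15 ns in version 2.0
  constant MIN_PULSE   : time    := 2 us;
  constant PULSE_COUNT : natural := MIN_PULSE / CLK_PERIOD;
  -- 100 at 20 ns, 133 at 15 ns; round up if the division does not come
  -- out even and the spec says "at least" 2 us.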

Later posts on this topic talked about code automatically generated
from a particular form of template.  I couldn't care less what format
the auto code generator uses, since that will not be the 'source'; the
inputs to that code generator are the source, and that is where I'll go
for more information.  If I have to dig into auto-generated code to
find a problem, I will, but somebody is going to have a newly opened
service request to answer for my troubles if I do find one.
Templates that are intended for people to use should be written with
people in mind, not a code generator.

I love to write non-synthesizable testbench code as well...where I
shamelessly break just about every rule I mentioned above if needed.

I have a tendency to ramble on at times.

KJ


Article: 107254
Subject: Re: I2C on Xilinx Virtex-4/ML403
From: "Brad Smallridge" <bradsmallridge@dslextreme.com>
Date: Fri, 25 Aug 2006 13:50:07 -0700

The bit-banging is probably slower.

How are you handling your IOs?

You may wish to use tristate outputs
with your signals going to the T input:

 scl_iobuf : IOBUF
 port map (
 O  => open,
 IO => scl_pin,
 I  => '0',
 T  => scl_master );

 sda_iobuf : IOBUF
 port map (
 O  => sda_slave,
 IO => sda_pin,
 I  => '0',
 T  => sda_master );

Brad Smallridge
aivision
dot com


"Suzie" <eckardts@saic.com> wrote in message 
news:1156523258.644890.94200@i3g2000cwc.googlegroups.com...
> I'm developing on an ML403 evaluation board with a Virtex-4 device.
> I'm calling Xilinx's Level 0 I2C driver routines (XIic_Send, _Recv)
> from a PPC405 program running under the QNX OS.  I'm connecting to an
> external I2C device, a temp sensor/ADC, via the J3 header on the ML403.
>
> When scoping the I2C SDA and SCL lines, I often notice a missing bit
> within the 8-bit address word.  Obviously, when this happens, the
> addressed device does not ACK the transfer.
>
> I believe that my physical I2C connection is correct because I can
> successfully and consistently use the GPIO-I2C bit-banging approach (as
> implemented in Xilinx's iic_eeprom test program) to communicate with my
> external device.
>
> I'm not sure how my operating environment or the driver could cause
> this problem.  The address is supplied by a single byte-write to the
> OPB_IIC core's Tx FIFO register; that seems atomic to me.  My gut
> feeling is that there is a problem with the core.
>
> Anyone seen this problem, or know what I might be doing wrong??
> 



Article: 107255
Subject: Re: high level languages for synthesis
From: David Ashley <dash@nowhere.net.dont.email.me>
Date: 25 Aug 2006 22:51:27 +0200
fpga_toys@yahoo.com wrote:
> The whole goal of creating an ANSI syntax and semantics C out of the
> FpgaC project is exactly that. Design, test, debug, in a normal "C"
> environment, and run under FpgaC on FPGA's. It's gone a little slower
> than I would like, as few people have good skill sets to join the "fun"
> doing the FpgaC development at this stage, and I have late life kids
> still in school to support.

I'm interested in this FpgaC. I don't want to avoid VHDL, but I'm
interested in improving the development tools + approaches.

One of the Neal Stephenson books, or more than one, makes an
interesting point about the concept of metaphors. Or it could
be an Eric S Raymond essay like at first there was the command line...

Anyway, there are lots of programmers who use their HLL and never
lift the hood, and to them the behind-the-scenes work is magic or
voodoo. I like looking inside and knowing how it works. Not everyone
needs to know the low level details -- but if you happen to be
comfortable/competent in all the different levels, you can really be
effective in solving problems or devising solutions.

So an FpgaC type approach, as in an HLL that takes care of all
the nitty gritty details and yet does an effective job (compared to
hand coding VHDL) would be a good thing.

In the 80's I used to program the Amiga computer, and I worked only
in 68000 assembly language. There were compilers, like Lattice and
Manx Aztec C, but I couldn't stand them. Why? Because the coding
turnaround time was so sloooow... I could make a change to my ASM
code and try it in seconds or fractions thereof. A compiler would take
10X longer. That adds up.

However computers got faster. Compilers got faster. Generated code
got better. System architectures got more optimized for compiled
code. I learned 'c'. All the code I wrote in 68000 assembly language
is basically...lost. It has no life now. So I'm no longer an ASM zealot.
But *at* *the* *time* it was ok to be one, and it happened to work.

So I wouldn't go back to ASM only, but I use ASM when it's appropriate.
Nowadays you do first pass in HLL then if speed/optimization is a
concern you just do small pieces in hand coded ASM -- a tiny fraction.

The parallel as previously mentioned by others between ASM and
HLL development seems to match exactly with VHDL/Verilog and
...what? Something that may not exist yet but ought to. Maybe it's
FpgaC.

So is there a web site for this or something?

-Dave



-- 
David Ashley                http://www.xdr.com/dash
Embedded linux, device drivers, system architecture

Article: 107256
Subject: Re: high level languages for synthesis
From: Jan Panteltje <pNaonStpealmtje@yahoo.com>
Date: Fri, 25 Aug 2006 20:51:42 GMT
On a sunny day (25 Aug 2006 11:36:16 -0700) it happened fpga_toys@yahoo.com
wrote in <1156530963.995508.197810@i3g2000cwc.googlegroups.com>:

>FpgaC versions of both the RC5 and AES engines using loop unrolled
>pipelined code. Both projects brought out problems where FpgaC didn't
>implement things right, or did a poor job ... both of which provided
>the insights about what needs to be added to FpgaC or fixed, in Beta-3
>and Beta-4 this fall.
>
>It's also why I used this thread to ask for specific examples of things
>that are difficult to express in C ... to continue that process.
>
>> This is because these algos actually come from a hardware philosophy,
>> can be made easily with gates... Not 'sequential C code like' at all.
>
>BEEP .... wrong. The Algos are frequently math based, first implemented
>in C, and later ported to hardware.


Well, I agree with many things you say, many things have different angles
too (BTW I am pretty conversant still with 8080 or Z80, even wrote a CP/M
clone for Z80, in asm of course ;-)
 http://panteltje.com/panteltje/z80/index.html
that was done just over a holiday... was fun, ran Software Toolworks C,
more a subset of modern C...), but as far as, for example, DES goes, it
was really designed so it could be implemented in hardware for fast
real-time crypto, at least that is the way I understood it, just a
diagram with gates...

Sure, you can do fine encryption with math... that exists, but it is
better suited to a normal PC.

I downloaded FpgaC some time ago, but have not gotten around to trying
it yet.
It could be interesting, for example, to take the ffmpeg H264 encoder C
code and generate HDL. Would that work?
That would be an acid test for me, if it could do that.

But it would perhaps take some fun out of figuring out how to do it
in Verilog; it would be a _very_ fast time to market for a video encoder....

Would have to be GPL too.

But however you slice it, C executes code sequentially.
An FPGA can have many circuits working in parallel.

It seems to me that as a high-level language you would want something
more object oriented? FPGA-C++? I do not like C++....
I just have to try that FpgaC some time, I think, else it is philosophy
only...
Maybe some project will come up.


Article: 107257
Subject: Re: high level languages for synthesis
From: fpga_toys@yahoo.com
Date: 25 Aug 2006 14:05:30 -0700

Jan Panteltje wrote:
> But however you slice it, C executes code sequentially.
> An FPGA can have many circuits working in parallel.

Actually that concept is wrong. The truth is that C resolves back
references using a sequential model, where the last assignment must be
preserved, but that is true in hardware too ... whatever last sets a
memory cell or FF is its de facto state.

C as a language, in syntax and semantics, makes ABSOLUTELY no claim
about how the implementation may, or may not, use concurrency wherever
possible to accelerate the algorithm specification. So, just as
multi-issue pipelined processors are free to reorder the execution of
some machine code, the compiler is free to reorder and parallelize the
execution of a C algorithm when executing it as netlists in an FPGA.

Some C algorithms are inherently serial .... memory accesses with
explicit memory array referencing, for example; others are not.  Good C
coders interested in run-time performance understand computer
architecture and can make reasonable algorithm choices for the target
hardware.  Introducing C programming for FPGA execution just extends
the target architectures a good programmer can exploit.

> Seems to me as a high level language you would want something more
> object oriented? FPGA-C++ ? I do not like C++....
> I just have to try that FPGA-C some time I think, else it is philosophy
> only...
> Maybe some project will come up.

Object orientation as implemented in most variations assumes dynamic
creation of objects at runtime, which is a little difficult with
current FPGAs, since you would have to dynamically allocate the LUTs
and routing to instantiate run-time objects .... when FPGA internal
architectures become openly documented, and not locked behind
proprietary rights and stiff NDAs, even that will be possible.


Article: 107258
Subject: Re: FPGA -> SATA?
From: "Martin E." <x@y.com>
Date: Fri, 25 Aug 2006 21:17:57 GMT
We are designing with a V2P30 right now, for migration to an equivalent V5
in Q1'07.  The SATA solution won't be needed until early next year.  Would
V5 work then?

Also, is SATA IP commercially available?

I guess an alternative might be to go PCI-X/PCIe and then use an
off-the-shelf SATA controller that talks to PCI.  The problem is that I
need lots of drives in parallel (I do mean LOTS) for this application.
It'd be easier to hang them right off an FPGA with a PHY (which seems to
be impossible to get).

Thanks,

-Martin


"Austin Lesea" <austin@xilinx.com> wrote in message 
news:44EF5DD5.5040502@xilinx.com...
> Martin,
>
> SATA worked, but not when it used the spread spectrum clocking.  There
> was also some out of band signaling issues, where you needed a
> transistor and a couple of resistors.
>
> So, it could be a point solution for a known drive that did not have
> spread spectrum, but it was not able to deal with the the broad spectrum
> of SATA product.
>
> Austin
>
> Antti wrote:
>> Martin E. wrote:
>>
>>> I am looking for a way to read/write to a SATA drive from an FPGA.  I've
>>> looked around.  Nothing seems to fit the bill.  Any ideas worth 
>>> considering?
>>>
>>> Thanks,
>>>
>>> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>>> Martin
>> Hi Martin,
>>
>> good question :)
>>
>> ML300 and digilent XUP V2Pro both have SATA connectors on
>> them but can not actually be used for SATA as of compliance issues.
>> (OOB and CDR lock range mainly)
>>
>> ASFAIK those issues are no longer present with V4FX that
>> should be fully SATA compliant without external workarounds.
>> So you can just get the ML410 and start working :)
>>
>> sure you would still need the IP core from some vendor though
>>
>> Antti
>> 



Article: 107259
Subject: Re: Running DDR below the min frequency
From: "PeteS" <PeterSmith1954@googlemail.com>
Date: 25 Aug 2006 14:22:09 -0700
rick@mips.com wrote:
> PeteS wrote:
> > r...@mips.com wrote:
> > > This post might not be directly on topic here but I'm hoping that
> > > someone else out in FPGA-land might have come across a similar problem.
> > >
> > > The problem: I'm using an FPGA to do emulation of RTL that would
> > > normally be build into an ASIC.As such, therefore, the RTL is not very
> > > `FPGA friendly' and I can only get a speed of 33-40MHz, maybe 50 at a
> > > push. The trouble is that the system incorporates a DDR DRAM controller
> > > and DDR DRAMs have a min. frequency spec - 66MHz in the case of the
> > > ones I'm using - related to the internal DLL. I *think* that this is
> > > why I'm getting no response from the RAMs during read cycles & the data
> > > bus seems to be floating.
> > >
> > > I've tried running with both DLL enabled and disabled to no avail.
> > > [Maybe some manufacturers work in this mode and others don't].
> > >
> > > Any ideas ?
> >
> > The spec minimum for DDR is 83.33MHz. I have managed to make some run
> > at 66MHz, but I don't think you'll get them to run lower.
> >
> > You are correct that this is related to the internal DLLs in the DDR.
> >
> > If you can't bump the speed, I can't see that you'll be able to make it
> > work
> >
>
> Pete,
>
> With a little bit of advice & messing around with s/w I've managed to
> get DDR working in DLL-disabled mode @33MHz at least as far as getting
> the PROM monitor up  running.
>
> The basic advice was that with the DLL off the DDR DRAMs (may) ignore
> the CAS latency value programmed into the mode register and just kind
> of choose their own value. The detailed advice that in this case CL
> always = 2 was not right since for the Samsung based DIMMs I'm using we
> seem to have CL = 1.
>
> Rick

Well, if you are willing to play in DLL-Disabled mode, you can expect
to get interesting results. Certainly I have done so for test and eval
purposes (well, that's what I told the boss about all that time in the
lab ;)

My original point was that in normal operating mode (DLLs on, a CAS
latency supported per the datasheet), you probably won't get DDR to
operate below 66MHz.

Cheers

PeteS


Article: 107260
Subject: Re: high level languages for synthesis
From: fpga_toys@yahoo.com
Date: 25 Aug 2006 14:22:11 -0700

David Ashley wrote:
> The parallel as previously mentioned by others between ASM and
> HLL development seems to match exactly with VHDL/Verilog and
> ...what? Something that may not exist yet but ought to. Maybe it's
> FpgaC.
>
> So is there a web site for this or something?

http://sourceforge.net/projects/fpgac

The first beta is generally usable on most of the TMCC platforms, as I
tried not to break much. The beta-2 cut broke a few things in
VHDL/Altera for the new extensions, none of which are difficult to fix,
and will hopefully get some attention this fall.

It's mostly usable for older Xilinx devices, where it's possible to
hijack an XNF to EDIF conversion using earlier Xilinx releases. I
target most Virtex, Virtex-E, Virtex-2, and Virtex-Pro parts using
xnf2edf from ISE 4.2, then use ISE 6.2i to build with, all with command
line tools under Linux. Use xnf2edf.exe under wine, and native Linux
ISE 6.2i. Some things currently work amazingly well, and others are
mediocre at best, but there are clear ways to fix those.

See what's checked into SVN ... it's a bit more current. Fixes, help,
comments always welcome. There are several of us doing development
around time demands for work, families, etc ... but I mostly carry it.
I have several major changes, including a new symbolic boolean solver,
in my personal development tree that will be checked in soon. The LUT-4
based truth table design is fast, but produces relatively poor
technology mappings .... and it doesn't scale, given the 2^N nature of
truth tables and the QM sum-of-products back end.

It's certainly not a product yet, but I can see it reaching that by
year end, with the Beta-4 release goals.

Have fun!


Article: 107261
Subject: Re: FPGA -> SATA?
From: Austin Lesea <austin@xilinx.com>
Date: Fri, 25 Aug 2006 14:28:38 -0700
Martin,

No, and No.  Sorry, even V5 does not have the frequency tracking agility
to track the SATA spread spectrum clock.  And because of that, we have
no IP for it, either.

The ASSP vendors are very protective of their business:  they
continue to make their little applications as tough to do as possible,
to keep out the 'big bad FPGA vendors' who seem to be eating up all
their businesses.  (Hey, we are just trying to make our customers happy!)

Too bad:  when an industry is spending its time being defensive, it has
already lost - any time spent not innovating means you are doomed to
failure.

Austin

Martin E. wrote:
> We are designing with a V2P30 right now for migration to an equivalent V5 
> Q1'07.  The SATA solution won't be needed until early next year.  Would V5 
> work then?
> 
> Also, is SATA IP commercially available?
> 
> I guess an alternative might be to go PCI X/e and then use an off-the shelf 
> SATA controller that talks to PCI.  The problem is that I need lots of 
> drives in parallel (I do mean LOTS) for this application.  It'd be easier to 
> hang them right off an FPGA with a PHY (which seem to be impossible to get).
> 
> Thanks,
> 
> -Martin
> 
> 
> "Austin Lesea" <austin@xilinx.com> wrote in message 
> news:44EF5DD5.5040502@xilinx.com...
>> Martin,
>>
>> SATA worked, but not when it used the spread spectrum clocking.  There
>> was also some out of band signaling issues, where you needed a
>> transistor and a couple of resistors.
>>
>> So, it could be a point solution for a known drive that did not have
>> spread spectrum, but it was not able to deal with the the broad spectrum
>> of SATA product.
>>
>> Austin
>>
>> Antti wrote:
>>> Martin E. wrote:
>>>
>>>> I am looking for a way to read/write to a SATA drive from an FPGA.  I've
>>>> looked around.  Nothing seems to fit the bill.  Any ideas worth 
>>>> considering?
>>>>
>>>> Thanks,
>>>>
>>>> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>>>> Martin
>>> Hi Martin,
>>>
>>> good question :)
>>>
>>> ML300 and digilent XUP V2Pro both have SATA connectors on
>>> them but can not actually be used for SATA as of compliance issues.
>>> (OOB and CDR lock range mainly)
>>>
>>> ASFAIK those issues are no longer present with V4FX that
>>> should be fully SATA compliant without external workarounds.
>>> So you can just get the ML410 and start working :)
>>>
>>> sure you would still need the IP core from some vendor though
>>>
>>> Antti
>>>
> 
> 

Article: 107262
Subject: Re: Style of coding complex logic (particularly state machines)
From: "rickman" <gnuarm@gmail.com>
Date: 25 Aug 2006 14:32:00 -0700
David Ashley wrote:
> mikegurche@yahoo.com wrote:
> > One interesting example in FSM design is the look-ahead output buffer
> > discussed in section 10.7.2 of "RTL Hardware Design Using VHDL"
> > (http://academic.csuohio.edu/chu_p/), the book mentioned in the
> > previous thread.  It is a clever scheme to obtain a buffered Moore
> > output without the one-clock delay penalty.  The code follows the block
> > diagram and uses four processes, one for state register, one for output
> > buffer, one for next-state logic and one for look-ahead output logic.
> > Although it is somewhat lengthy, it is easy to understand.   I believe
> > the circuit can be described by using one clocked process with proper
> > mix of signals and variables and reduce the code length by 3 quarters,
> > but I feel it will be difficult to relate the code with the actual
> > circuit diagram and vice versa.
>
> Combining the input and registered state this way allows for
> a non registered path from input to output. Is this ok? Or is
> there an assumption that the device connected to the output
> is itself latching on the clock edge?

I have not seen the reference, but I do FSMs one of two ways.  If I
need to truly optimize things for speed or size or both, I separate my
logic from the register; otherwise I use a single clocked process for
both.  I always register my outputs just like the state and, in
essence, use lookahead for that.  But this happens in the same logic,
so it is very easy to see.

I define the state diagram as a pseudo Mealy machine.  By pseudo Mealy
machine I mean that you define your outputs on the transitions rather
than on the states, with the realization that the output is only
reflected when the state changes.  Given a cur_state value, the
transitions in the diagram and the code both indicate the next_state
and the next_output.  The coding matches the diagram, so coding is
easier (see the sketch below).
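
A rough sketch of that style (the state, input and output names are
invented here, not taken from an actual design): the next state and the
registered output are both assigned on each transition inside one
clocked process, so the output changes on the same clock edge as the
state.

  -- assumes: TYPE state_type IS (idle, busy);
  --          SIGNAL cur_state : state_type;
  --          SIGNAL busy_out  : STD_LOGIC;
  fsm : PROCESS(clk)
  BEGIN
    IF clk'EVENT AND clk = '1' THEN
      CASE cur_state IS
        WHEN idle =>
          IF start = '1' THEN     -- transition idle -> busy
            cur_state <= busy;
            busy_out  <= '1';     -- output defined on the transition
          END IF;
        WHEN busy =>
          IF done = '1' THEN      -- transition busy -> idle
            cur_state <= idle;
            busy_out  <= '0';
          END IF;
        WHEN OTHERS =>
          cur_state <= idle;
          busy_out  <= '0';
      END CASE;
    END IF;
  END PROCESS fsm;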


Article: 107263
Subject: Re: fastest FPGA
From: Jim Granville <no.spam@designtools.maps.co.nz>
Date: Sat, 26 Aug 2006 09:39:40 +1200
hypermodest wrote:
> Hello.
> I've a task to make attempt to crack some cryptographical hash-function
> by using brute-force attack. So I wish to implement it in FPGA.
> How can I get fastest FPGA at the modern market?
> Altera Nios II dev kit stratix 2 edt (EP2S60) is the right choice?
> By the way, are these devices (EP2S60) can be overclocked? If yes, how?


Just as another reference point, on the speed/parallel processing 
question, this news from NEC : ( sounds like a real device )

http://www.eet.com/news/design/showArticle.jhtml;jsessionid=50HDYO5YS5YXCQSNDLRSKH0CJUNN2JVN?articleID=192300291

Claims this :

"Imapcar has 128 processing elements, each with embedded memory. The 128 
parallel processing elements use the SIMD (single instruction stream 
multiple data stream) method. Each element processes four instructions 
per cycle. Thus, total performance was 100 Gops running at 100 MHz, 
enabling real-time image recognition at 30 frames per second."

and this:
http://www.necel.com/news/en/archive/0608/2501.html

Can't see mention of how much embedded memory ?

-jg


Article: 107264
Subject: Re: ISERDES strange simulation behaviour
From: "=?iso-8859-1?B?R2FMYUt0SWtVc5k=?=" <taileb.mehdi@gmail.com>
Date: 25 Aug 2006 14:53:22 -0700

GaLaKtIkUs™ wrote:
> Antti wrote:
> > GaLaKtIkUs™ wrote:
> >
> > > In the Virtex-4 user guide (ug070.pdf p.365 table 8-4) it is clearly
> > > indicated that for INTERFACE_TYPE=NETWORKING and DATA_RATE=SDR the
> > > latency should be 2 CLKDIV clock periods.
> > > I instantiated an ISERDES of DATA_WIDTH=6 but I see that valid
> > > output appears on the next CLKDIV rising edge.
> > > Any explanations?
> > >
> > > Thanks in advance!
> >
> > advice: dont belive the simulator, its not always correct.
> > place the iserdes and chipscope ILA into dummy toplevel, load some FPGA
> > and look what happens in real silicon.
> >
> > Antti
>
> Unfortunately the tests on the board using Chipscope gave the same
> results as in simulation.
> I looked for information on this issue on Xilinx's site but I didn't
> find anything.
> So I assume that the issue is that I didn't understand the table 8-4 in
> the Virtex-4 UserGuide.
> If you can help you're welcome (I can send you
> simulation/implementation files I used).
> I'm going to make the same simulations/tests on-board as described a
> few posts higher but for wordlengths>6 i.e where 2 ISERDES are needed.
>
> Cheers

The same thing happens for wordlength=8: the latency is 1 CLKDIV period
in NETWORKING mode.
Can anybody please confirm or refute the information in table 8-4 of
the Virtex-4 User Guide?

Cheers


Article: 107265
Subject: Re: Style of coding complex logic (particularly state machines)
From: mikegurche@yahoo.com
Date: 25 Aug 2006 15:00:04 -0700
backhus wrote:
> > In my original post I had no intention to reach a common consensus. I
> > wanted to see practical code examples which demonstrate the various
> > techniques and discuss their relative merits and disadvantages.
> >
> > Kind regards,
> > Eli
>
> Hi Eli,
> Ok, that's something different.
> Earns some contribution from my side :-)
>
> My example uses 3 Processes.
> The first one is the simple state Register.
> the second is the combinatorial branch selection,
> The third creates the registered outputs.
>
> Recognize that the third process uses NextState for the case selection.
> Advantage: Outputs change exactly at the same time as the states do.
> Disadvantage: The branch logic is connected to the output logic, causing
>   longer delays.
> Workaround: If a one clock delay of the outputs doesn't matter, Current
> State can be used instead.
>
> The only critical part I see is the second process. Because it's
> combinatorical some synthesis tools might generate latches here, when
> the designer writes no proper code. But we all should know how to write
> latch free code, don't we? ;-)
>
> The structure is very regular, which makes it a useful template for
> autogenerated code.
>
> Have a nice synthesis
>     Eilert
>
> ENTITY Example_Regout_FSM IS
>    PORT (Clock : IN STD_LOGIC;
>          Reset : IN STD_LOGIC;
>          A : IN STD_LOGIC;
>          B : IN STD_LOGIC;
>          Y : OUT STD_LOGIC;
>          Z : OUT STD_LOGIC);
> END Example_Regout_FSM;
>
>
> ARCHITECTURE RTL_3_Process_Model_undelayed OF Example_Regout_FSM IS
>    TYPE State_type IS (Start, Middle, Stop);
>    SIGNAL CurrentState : State_Type;
>    SIGNAL NextState : State_Type;
>
> BEGIN
>
>    FSM_sync : PROCESS(Clock, Reset)
>      BEGIN -- CurrentState register
>        IF Reset = '1' THEN
>          CurrentState <= Start;
>        ELSIF Clock'EVENT AND Clock = '1' THEN
>          CurrentState <= NextState;
>        END IF;
>    END PROCESS FSM_sync;
>
>    FSM_comb : PROCESS(A, B, CurrentState)
>      BEGIN -- CurrentState Logic
>        CASE CurrentState IS
>          WHEN Start =>
>            IF (A NOR B) = '1' THEN
>              NextState <= Middle;
>            END IF;
>          WHEN Middle =>
>            IF (A AND B) = '1' THEN
>              NextState <= Stop;
>            END IF;
>          WHEN Stop =>
>            IF (A XOR B) = '1' THEN
>              NextState <= Start;
>            END IF;
>          WHEN OTHERS => NextState <= Start;
>        END CASE;
>    END PROCESS FSM_comb;
>
>    FSM_regout : PROCESS(Clock, Reset)
>      BEGIN -- Output Logic
>        IF Reset = '1' THEN
>          Y <= '0';
>          Z <= '0';
>        ELSIF Clock'EVENT AND Clock = '1' THEN
>          Y <= '0';  -- Default Value assignments
>          Z <= '0';
>        CASE NextState IS
>          WHEN Start => NULL;
>          WHEN Middle => Y <= '1';
>                         Z <= '1';
>          WHEN Stop => Z <= '1';
>          WHEN OTHERS => NULL;
>        END CASE;
>      END IF;
>    END PROCESS FSM_regout;
> END RTL_3_Process_Model_undelayed;

Love the enemy :)
(I hope the code is right)

ENTITY Example_Regout_FSM IS
   PORT (Clock : IN STD_LOGIC;
         Reset : IN STD_LOGIC;
         A : IN STD_LOGIC;
         B : IN STD_LOGIC;
         Y : OUT STD_LOGIC;
         Z : OUT STD_LOGIC);
END Example_Regout_FSM;

ARCHITECTURE RTL_1_Process_Model_undelayed OF Example_Regout_FSM IS
   TYPE State_type IS (Start, Middle, Stop);
   SIGNAL CurrentState: State_Type;
BEGIN
   FSM_one_for_all: PROCESS(Clock, Reset)
     VARIABLE NextState: State_Type;
   BEGIN
       IF Reset = '1' THEN
         CurrentState <= Start;
         Y <= '0';
         Z <= '0';
       ELSIF Clock'EVENT AND Clock = '1' THEN
         -- variable used to represent the output of the next-state logic
         CASE CurrentState IS
           WHEN Start =>
             IF (A NOR B) = '1' THEN
               NextState := Middle;
             END IF;
           WHEN Middle =>
             IF (A AND B) = '1' THEN
               NextState := Stop;
             END IF;
           WHEN Stop =>
             IF (A XOR B) = '1' THEN
               NextState := Start;
             END IF;
          WHEN OTHERS => NextState := Start;
         END CASE;

         -- to register
         CurrentState <= NextState;

         -- to the buffered output logic
         Y <= '0';  -- Default Value assignments
         Z <= '0';
         CASE NextState IS
           WHEN Start => NULL;
           WHEN Middle => Y <= '1';
                          Z <= '1';
           WHEN Stop => Z <= '1';
           WHEN OTHERS => NULL;
         END CASE;
       END IF;
   END PROCESS FSM_one_for_all;
END  RTL_1_Process_Model_undelayed;

Peace

Mike G.


Article: 107266
Subject: Re: fastest FPGA
From: fpga_toys@yahoo.com
Date: 25 Aug 2006 15:35:12 -0700

Jim Granville wrote:
> "Imapcar has 128 processing elements, each with embedded memory. The 128
> parallel processing elements use the SIMD (single instruction stream
> multiple data stream) method. Each element processes four instructions
> per cycle. Thus, total performance was 100 Gops running at 100 MHz,
> enabling real-time image recognition at 30 frames per second."

It will be interesting to see if that ends up in NEC's next generation
super computers too. If the chip has reasonable on-board memory, and
lots of off-board bandwidth, it's surely a monster :)

I'm still waiting to see how the Cell processor matures into other
product lines besides gaming.


Article: 107267
Subject: Re: RocketIO over cable
From: Ed McGettigan <ed.mcgettigan@xilinx.com>
Date: Fri, 25 Aug 2006 15:41:06 -0700
vt2001cpe wrote:
> Thank you everyone for your quick and informative responses! In the
> event of a large number of bit errors, is it possible for a comma
> character to be incorrectly introduced? What would the effect be? Will
> the CRC fail and put the state machine into an error state, or will
> data continue to transmit, unaware?

It is possible to generate more than a single bit error in the same
8b10b word, but with a well designed link this is very unlikely to
occur.  In any case, your transmitter is completely unaware that an
error actually occurred and will happily continue transmitting. It's
your receiver logic that needs to be able to handle any errors that
happen.

The worst case that could happen is that the error generates a comma
character that is out of alignment with the previous alignment.  This
would cause a constant stream of errors until a correct comma character
is received and the link realigns to the correct 8b10b word. All of
this is fairly standard stuff and has been around for decades in
many different protocols.

Since it doesn't look like you have a particular protocol in mind, I
would suggest that you take a look at the lightweight Aurora protocol:
http://www.xilinx.com/aurora  Even if you decide not to use it, you
should be able to learn a lot from it.

Ed McGettigan
--
Xilinx Inc.



Article: 107268
Subject: Re: FPGA -> SATA?
From: Jim Granville <no.spam@designtools.maps.co.nz>
Date: Sat, 26 Aug 2006 10:47:51 +1200
Austin Lesea wrote:
> Martin,
> 
> No, and No.  Sorry, even V5 does not have the frequency tracking agility
> to track the SATA spread spectrum clock.  And because of that, we have
> no IP for it, either.
> 
> The ASSP vendors are very protective about their business:  they
> continue to make their little applications as tough to do as possible,
> to keep out the 'big bad FPGA vendors' who seem to be eating up all
> their businesses.  (Hey, we are just trying to make our customers happy!)
> 
> Too bad:  when an industry is spending time being defensive, they have
> already lost - any time spent not innovating means you are doomed to
> failure.

That probably depends on where you are standing.

  Could be that the FPGA sector needs to innovate, and include
sufficient agility to track a SATA spread-spectrum clock?

  Sounds more like an issue of who decides the market is large enough to
bother with, than any perceived FPGA-vs-ASSP battle?

-jg


Article: 107269
Subject: Re: FPGA -> SATA?
From: fpga_toys@yahoo.com
Date: 25 Aug 2006 16:14:29 -0700

Austin Lesea wrote:
> The ASSP vendors are very protective about their business:  they
> continue to make their little applications as tough to do as possible,
> to keep out the 'big bad FPGA vendors' who seem to be eating up all
> their businesses.  (Hey, we are just trying to make our customers happy!)

And like Xilinx isn't equally protective and prolific with FPGA
patents?

> Too bad:  when an industry is spending time being defensive, they have
> already lost - any time spent not innovating means you are doomed to
> failure.

Maybe Xilinx just needs to join the ASSP group, license some
technology, and quit bitching.


Article: 107270
Subject: Re: fastest FPGA
From: Jim Granville <no.spam@designtools.maps.co.nz>
Date: Sat, 26 Aug 2006 12:03:55 +1200
fpga_toys@yahoo.com wrote:
> Jim Granville wrote:
> 
>>"Imapcar has 128 processing elements, each with embedded memory. The 128
>>parallel processing elements use the SIMD (single instruction stream
>>multiple data stream) method. Each element processes four instructions
>>per cycle. Thus, total performance was 100 Gops running at 100 MHz,
>>enabling real-time image recognition at 30 frames per second."
> 
> 
> It will be interesting to see if that ends up in NEC's next generation
> super computers too. If the chip has reasonable one board memory, and
> lots of off board bandwidth, it's surely a monster :)
> 
> I'm still waiting to see how the Cell processor matures into other
> product lines besides gaming.

Yes, it seems a very good idea, in the specialised research niches, to
watch closely the chip output of the large-revenue areas, like gaming,
and now automotive vision - after all, their R&D spend makes the FPGAs
look like toys...

-jg


Article: 107271
Subject: Re: fastest FPGA
From: fpga_toys@yahoo.com
Date: 25 Aug 2006 17:09:00 -0700

Jim Granville wrote:
> fpga_toys@yahoo.com wrote:
>  after all, their R&D spend make the FPGAs look like toys..

Yep ... fpga_toys :)

And they are really successful companies, capable of eating Xilinx with
chump change and an afterthought.


Article: 107272
Subject: Re: fastest FPGA
From: "Peter Alfke" <alfke@sbcglobal.net>
Date: 25 Aug 2006 17:35:26 -0700

I almost threw up when I read John Bass explain how he uses a
deliberately dumb-sounding name plus naive questions to light some fire
and create controversial flames.

Really destroys my enthusiasm for this newsgroup.
It's like a rich guy pretending to be homeless, and then hitting you in
the groin while you are reaching for your wallet. Scum is the word
describing that kind of behavior...

I'll try to get over this, and will fly to Madrid for the European FPL
conference.
Hopefully more civilized discussions there.
BTW, I'll have an ML501 board with me, just to prove a point or two.

Peter Alfke


Article: 107273
Subject: Re: fastest FPGA
From: fpga_toys@yahoo.com
Date: 25 Aug 2006 17:51:15 -0700

Peter Alfke wrote:
> I almost threw up when I read John Bass explain how he uses a
> deliberately dumb-sounding name plus naive questions to light some fire
> and create controversial flames.

That's alright, Peter ... when assholes assert shit based on their
reputations, it's a pretty sad state of affairs on this list. You and
Austin seem to get pretty upset when your ammonium nitrate is called
what it is.

You can keep up the crap about Xilinx not having problems with real
customer designs, but the truth is there are clearly some design
segments in which Xilinx FPGAs crap out. You can hide behind poorly
specified chips, and claim it's the developers who are doing something
wrong when valid designs trash the chips' power rails. But frankly there
isn't anything in the Xilinx literature or data sheets to warn
developers that high-density designs are at high risk.

You can get as pissy as you want about it ... and continue to show
absolutely poor social skills in handling customers, and shitting on
the little people on this list.

Upset at being trapped into acting like assholes in your postings to
Totally_Lost .... consider that this is your behavior, not mine ... take
responsibility for it.


Article: 107274
Subject: Re: fastest FPGA
From: David Ashley <dash@nowhere.net.dont.email.me>
Date: 26 Aug 2006 02:51:58 +0200
Peter Alfke wrote:
> I almost threw up when I read John Bass explain how he uses a
> deliberately dumb-sounding name plus naive questions to light some fire
> and create controversial flames.
> 
> Really destroys my enthusiasm for this newsgroup.
> It's like a rich guy pretending to be homeless, and then hitting you in
> the groin while you are reaching for your wallet. Scum is the word
> describing that kind of behavior...
> 
> I'll try to get over this, and will fly to Madrid for the European FPL
> conference.
> Hopefully more civilized discussions there.
> BTW, I'll have an ML501 board with me, just to prove a point or two.
> 
> Peter Alfke
> 

Peter,

Don't be offended, I don't think he meant it in any destructive way.
I for one appreciate your presence + that of other xilinx/altera/etc.
individuals. On the internet and usenet in particular one has to
keep some distance as well as have a thick skin.

In the little bit I've been here I've realized these groups are a
very valuable resource, and questions I've posted and received
answers to have gotten me over a roadblock. It's worthwhile
spending the time and contributing, even from a business
perspective, IMO. Stick around!

I'm off on vacation through Sept 1, in case people post and I
don't follow up.

-Dave
-- 
David Ashley                http://www.xdr.com/dash
Embedded linux, device drivers, system architecture


