From: Tamas K Papp on
On Fri, 16 Jul 2010 13:07:48 -0700, Francogrex wrote:

> On Jul 14, 11:48 am, Tamas K Papp <tkp...(a)gmail.com> wrote:
>> Hi,
>>
>> I am experimenting with calling JAGS (an automated Gibbs sampler, not
>> unlike BUGS) from CL.
>
> I can't help with the coding but if you succeed in calling BUGS or JAGS
> from CL please keep me informed. How do you intend to supply the data
> and send the command for simulations (because I see you are only sending
> the model). Maybe a look at R libraries like BRUGS and R2winbugs can
> help with your project. Another possibility is to use the CL library
> RCL, connect to R and from it run BRUGS (or JAGS).

For the record: I gave up on using JAGS for the moment. It takes ages
for the data I am working on, as even the simplest models generate a
graph with 200000+ nodes.

I am getting much better results using manually coded Gibbs samplers.
I decided to code these very flexibly, so that they can be reused in
different contexts. Using LLA as a backend, block updates are very
fast in my models. Hacking up something like Umacs in CL was trivial
to do (about 200 LOC, as opposed to Umacs's 6000 LOC in R -- CL really
amazes me sometimes :-).
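The CL code itself isn't shown here, but the Umacs-style idea -- a generic driver that threads a state through a set of per-parameter updater functions, written once and reused across models -- can be sketched in a few lines. (Python here purely for illustration; the names `gibbs` and `updaters` are mine, not from the original code.)

```python
import random

def gibbs(state, updaters, n_iter):
    """Generic Gibbs driver: `state` is a dict of parameter values and
    `updaters` maps each parameter name to a function that draws a new
    value from its full conditional given the current state."""
    chain = []
    for _ in range(n_iter):
        for name, update in updaters.items():
            state[name] = update(state)
        chain.append(dict(state))  # snapshot the state after a full scan
    return chain

# Toy usage: bivariate standard normal with correlation rho, where the
# full conditionals are x | y ~ N(rho*y, 1 - rho^2) and symmetrically for y.
rho = 0.5
updaters = {
    "x": lambda s: random.gauss(rho * s["y"], (1 - rho**2) ** 0.5),
    "y": lambda s: random.gauss(rho * s["x"], (1 - rho**2) ** 0.5),
}
chain = gibbs({"x": 0.0, "y": 0.0}, updaters, 1000)
```

The point is the separation: the driver is written once, and a new model only supplies its updater functions -- which is what makes a ~200 LOC implementation plausible.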

But if I were to proceed with JAGS, I would just dump the data into a
file in R's format, which is quite easy to do. The
CL<->RCL<->R<->rjags<->jags path you are suggesting would be a
nightmare. I decided not to use JAGS in the first place because
automated Gibbs samplers are murky black boxes, with little
possibility of profiling or examining what's going on inside, and I
found that support for JAGS is scarce (there is no mailing list, and
the original author didn't reply to my e-mails, but then again, this
is the season when many people go on vacation). Putting three other
layers in between for no good reason is the perfect recipe for disaster.
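Dumping the data really is the easy part: JAGS reads data files written in R's dump() syntax, i.e. lines like `N <- c(12, 7, 30)`. A minimal writer (a Python sketch for illustration; `r_dump` is a hypothetical helper name, and it only handles flat numeric vectors, not matrices or factors) could look like:

```python
def r_dump(data, path):
    """Write a dict mapping variable names to lists of numbers in R's
    dump() format, one `name <- c(...)` assignment per line."""
    with open(path, "w") as f:
        for name, values in data.items():
            f.write("%s <- c(%s)\n" % (name, ", ".join(map(str, values))))

# Example: two data vectors, written to a file JAGS-style tools can read.
r_dump({"N": [12, 7, 30], "E": [1.5, 0.8, 2.0]}, "data.R")
```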

Tamas
From: Francogrex on
On Jul 17, 10:51 am, Tamas K Papp <tkp...(a)gmail.com> wrote:
> I am getting much better results using manually coded Gibbs samplers.

I did code manually as well in R (using both Gibbs and the Metropolis-
Hastings algorithm). It's ok for relatively simple models.
Statistically, mine was, if I remember well:

N[i] ~ Poisson(L[i]*E[i])
L[i] ~ p*Gamma(alpha1,beta1) + (1-p)*Gamma(alpha2,beta2)

The Ns and Es were my data and the rest unknowns (the alphas, betas,
and p were either determined by maximum likelihood, making it an
empirical Bayesian model, or by putting some priors on them).
For more complex models I think it will not be that easy to code
manually, as you need to pay attention to several factors, including
convergence, autocorrelations, etc.
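For what it's worth, that mixture model becomes conjugate once a per-observation component indicator z[i] is introduced, so one Gibbs scan is short. A sketch (Python for illustration only; the hyperparameters alpha1, beta1, alpha2, beta2, p are held fixed, as in the empirical-Bayes variant, and all function names are mine):

```python
import math
import random

def gamma_logpdf(x, a, b):
    """Log density of Gamma(shape=a, rate=b) at x > 0."""
    return a * math.log(b) + (a - 1) * math.log(x) - b * x - math.lgamma(a)

def gibbs_scan(L, z, N, E, p, a1, b1, a2, b2):
    """One Gibbs scan over all observations.
    z[i] | L[i] is Bernoulli with weights proportional to the two
    Gamma mixture densities at L[i]; L[i] | N[i], z[i] is
    Gamma(a_z + N[i], b_z + E[i]) by Poisson-Gamma conjugacy."""
    for i in range(len(N)):
        w1 = math.log(p) + gamma_logpdf(L[i], a1, b1)
        w2 = math.log(1 - p) + gamma_logpdf(L[i], a2, b2)
        z[i] = 1 if random.random() < 1.0 / (1.0 + math.exp(w2 - w1)) else 2
        a, b = (a1, b1) if z[i] == 1 else (a2, b2)
        # random.gammavariate takes (shape, scale), so scale = 1/rate
        L[i] = random.gammavariate(a + N[i], 1.0 / (b + E[i]))
    return L, z

# Toy run with made-up data and fixed hyperparameters.
N, E = [12, 7, 30], [1.5, 0.8, 2.0]
L, z = [1.0, 1.0, 1.0], [1, 1, 1]
for _ in range(100):
    L, z = gibbs_scan(L, z, N, E, 0.5, 2.0, 1.0, 10.0, 2.0)
```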
From: Tamas K Papp on
On Sat, 17 Jul 2010 13:01:06 -0700, Francogrex wrote:

> by putting some priors on them). For more complex models I think it will
> not be that easy to code manually as you need to pay attention to
> several factors, including convergence and autocorrelations etc.

Those issues are orthogonal to JAGS/OpenBUGS. Automated Gibbs
sampling neither guarantees nor checks for convergence. In fact,
understanding the problem domain and sampling in blocks that conform
to the structure of the problem frequently helps to get better mixing.
Conversely, unless the automated sampler is optimized for a specific
application domain (e.g. GeoBUGS), it may end up with inferior mixing
compared to hand-coded samplers. Note that I am not bashing
JAGS/OpenBUGS here; for simple models they give a quick and dirty
solution, but there is a price to pay.
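The mixing point is easy to see on a toy case: for a bivariate normal with correlation close to 1, single-site Gibbs moves in tiny steps, while a block update that draws the pair jointly mixes in one step. (An illustrative Python sketch of the general idea, not any particular BUGS model; all names are mine.)

```python
import random

random.seed(1)  # fixed seed so the comparison is reproducible

rho = 0.99
sd = (1 - rho**2) ** 0.5  # conditional standard deviation

def single_site(x, y):
    # update x | y, then y | x: each move is O(sd), so the chain crawls
    x = random.gauss(rho * y, sd)
    y = random.gauss(rho * x, sd)
    return x, y

def block(x, y):
    # draw (x, y) jointly: x from its N(0, 1) marginal, then y | x
    x = random.gauss(0.0, 1.0)
    y = random.gauss(rho * x, sd)
    return x, y

def lag1_autocorr(xs):
    n = len(xs)
    m = sum(xs) / n
    var = sum((v - m) ** 2 for v in xs) / n
    return sum((xs[i] - m) * (xs[i + 1] - m) for i in range(n - 1)) / (n * var)

results = {}
for step in (single_site, block):
    x, y, xs = 0.0, 0.0, []
    for _ in range(2000):
        x, y = step(x, y)
        xs.append(x)
    results[step.__name__] = lag1_autocorr(xs)
```

With rho = 0.99 the single-site chain's lag-1 autocorrelation for x is near rho^2, while the block sampler's is near zero -- the same draws, structured to match the problem.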

Before knowing Lisp, I would have opted for automated samplers in most
cases, as coding these models manually involves a lot of boilerplate
code in most languages. But now that I know a little CL, I can
abstract out the repeated parts, so it is becoming much easier. BTW,
when I have a bit of free time, I plan to study HBC [1] in detail; it
seems to be a nice approach, maybe worth reimplementing in CL.

Best,

Tamas

[1] http://www.cs.utah.edu/~hal/HBC/