From: Fabrice on
Hi,

I came upon this paper by Jonathan Poritz
(http://www.poritz.net/jonathan/papers/weak.pdf) which describes a
potential weakness in the hash chains used in the Trusted Computing
specs, and proposes to replace the hash chains with a cryptographic
accumulator, of which he gives an example.
(That paper is easy to understand and I would recommend it to anybody,
even without a PhD.)

I kept googling for more on cryptographic accumulators and found a
couple of papers on them (which are a lot harder to read; a PhD may be
required):
http://www.cs.nyu.edu/~nicolosi/papers/accumulators.pdf
http://www.cs.jhu.edu/~goodrich/cgc/pubs/accum.pdf


My problem here is that those papers seem to use cryptographic
accumulators to solve a very different problem from the one that
Trusted Computing and the Poritz paper are addressing.


Namely, the problem in TC (and in the Poritz paper) is to verify
whether *all* of the elements that were used to produce the
accumulator value (or hash chain value) belong to a known set of
trusted elements.
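For concreteness, the hash chain in question works like the TPM's
"extend" operation: each new measurement is folded into a running
digest. A minimal sketch in Python (SHA-256 here is just my stand-in
for whatever hash the spec actually mandates):

    import hashlib

    def extend(pcr, measurement):
        # One step of the chain: fold the next measurement into the
        # running digest; the result depends on order as well as content.
        return hashlib.sha256(pcr + measurement).digest()

    pcr = bytes(32)  # the register starts at all zeros
    for component in [b"bootloader", b"kernel", b"initrd"]:
        pcr = extend(pcr, hashlib.sha256(component).digest())

    # The verifier's problem: given the final value and a log of the
    # claimed measurements, decide whether every element folded in
    # above belongs to a known trusted set.
    print(pcr.hex())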

The other papers, on the other hand, seem to use the accumulator to
check whether one specific element is part of the set of elements
used to generate the accumulator value.
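If I read them right, the single-element test those papers care about
looks something like this toy RSA accumulator (my own illustration,
not taken from the papers; the parameters are tiny and insecure, and
real constructions map elements to primes and use a modulus of unknown
factorization):

    N = 3233                # 61 * 53 -- for illustration only
    g = 2
    elements = [3, 5, 7]    # elements must be (mapped to) primes

    acc = g
    for e in elements:
        acc = pow(acc, e, N)

    # Witness that 5 is in the set: accumulate everything except 5.
    wit = g
    for e in elements:
        if e != 5:
            wit = pow(wit, e, N)

    # Single-element membership check: raising the witness to the
    # element must recover the accumulator value.
    assert pow(wit, 5, N) == acc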

So, am I looking at two different tools here, or is it really the
same tool being used for two very different purposes?

Thanks

From: Fabrice on
Bump...

Really, does nobody have an answer to contribute?

On Jan 26, 3:34 pm, Fabrice <fabrice.gaut...(a)gmail.com> wrote:
> [snip]
>
> So, am I looking at two different tools here, or is it really the
> same tool being used for two very different purposes?
>
> Thanks

From: Kristian Gjøsteen on
Fabrice <fabrice.gautier(a)gmail.com> wrote:
>> So, am I looking at two different tools here, or is it really the
>> same tool being used for two very different purposes?

I've looked very briefly at the paper. As far as I can tell, the author
wants to use an accumulator as a hash chain where order does not matter.

I don't know if that is sensible or not.

--
Kristian Gjøsteen
From: Ilmari Karonen on
On 2010-01-26, Fabrice <fabrice.gautier(a)gmail.com> wrote:
>
> I came upon this paper by Jonathan Poritz
> (http://www.poritz.net/jonathan/papers/weak.pdf) which describes a
> potential weakness in the hash chains used in the Trusted Computing
> specs, and proposes to replace the hash chains with a cryptographic
> accumulator, of which he gives an example.

I haven't read the entire paper yet, but I noticed something funny
about section 2.2. He defines the binary operation a*b := H(a || b),
where H is a hash function and || is concatenation, on the space X of
possible outputs of H, and claims that (X,*) should be a quasigroup --
that is, that for all a in X, the maps x -> a*x and x -> x*a should
both be permutations of X -- or else "H is surely a rather poor hash
function."

This seems like an odd conclusion; in particular, it would seem
extremely unlikely for such a property to hold for a random oracle,
which is surely the best hash function one could hope for.

Also, assume that H is an n-bit hash, with or without this property,
and define the k-bit hash H', k < n, as H with its output truncated to
some k out of the n output bits of H. It would seem unlikely for H'
to retain this quasigroup property even if H did have it; and indeed,
if it did retain it for all choices of k < n and all choices of k out
of n bits, that would seem to force some significantly non-random
structure on the outputs of H.
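To make this concrete, here is a quick brute-force check (my own
sketch, with SHA-256 truncated to k = 8 bits playing the role of H'):

    import hashlib

    K = 8  # truncated output size in bits, small enough to enumerate

    def h_trunc(data):
        # SHA-256 truncated to its first K bits, playing the role of H'.
        return hashlib.sha256(data).digest()[0] >> (8 - K)

    # Fix a left operand a and enumerate the map x -> a*x = H'(a || x).
    # If (X,*) were a quasigroup, this map would be a permutation and
    # we would see all 2^K values.
    a = 0x5A
    images = {h_trunc(bytes([a, x])) for x in range(2 ** K)}

    # A random function on 2^K points covers only about 63% of them,
    # so expect roughly 160 distinct images here rather than 256.
    print(len(images), "distinct images out of", 2 ** K)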

Even if we considered only traditional Merkle-Damgård hashes with
output size equal to block size, the quasigroup property would be
equivalent to the compression function f of the hash being "doubly
bijective" -- that is, to both x -> f(a,x) and x -> f(x,b) being
permutations for all a and b (assuming that the finalization function
is bijective, which I think should usually be the case). I don't
think that any of the usual constructions of compression functions
guarantee even one of these properties, much less both.
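For instance, with a Davies-Meyer compression function
f(h,m) = E_m(h) XOR h, the fixed-message map h -> f(h,m) is a
permutation XORed with the identity, which for a typical
(random-looking) block cipher behaves like a random function rather
than a permutation. A toy check (my own sketch; a seeded shuffle
stands in for an 8-bit block cipher):

    import random

    def block_cipher(key):
        # Toy 8-bit "block cipher": a seeded random permutation per key.
        perm = list(range(256))
        random.Random(key).shuffle(perm)
        return perm

    def davies_meyer(h, m):
        return block_cipher(m)[h] ^ h   # f(h,m) = E_m(h) XOR h

    m = 42
    images = {davies_meyer(h, m) for h in range(256)}
    # Expect roughly 63% coverage, i.e. well under 256 distinct values,
    # so x -> f(x,b) is not a permutation for this construction.
    print(len(images), "distinct outputs of f(., 42) out of 256")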

Mind you, not having read the whole paper, I have no idea if this
issue in any way affects the main results. It's just a detail that
jumped out at me while I was reading.

--
Ilmari Karonen
To reply by e-mail, please replace ".invalid" with ".net" in address.