On Mon, Apr 4, 2016 at 4:02 PM, Buck Calabro <kc2hiz@xxxxxxxxx> wrote:
> I think the context, the framework that Schneier is working with when he
> says that 'obscurity is insecurity' is Kerckhoff's principle, which can
> be paraphrased as 'the system should remain secure even if the enemy has
> a copy of the algorithm.'
Subtle point: That security principle is actually named after a guy
named Auguste Kerckhoffs. The last 's' is part of his name. So
depending on your school of grammar, it would either be "Kerckhoffs's
principle" (which I personally consider correct) or "Kerckhoffs'
principle" (probably less defensible to prescriptivists, but this
construction is common and may well be gaining popularity).
The Wikipedia article is a great read:
https://en.wikipedia.org/wiki/Kerckhoffs's_principle
One of the gray areas in the whole "security through obscurity"
kerfuffle is whether you should at least *try* to keep your methods
secret. I don't think you will find many people who seriously believe
that making obscurity a *linchpin* of security, especially digital
security, is a good idea. That's right out the window. You may be
able to find more people who believe the Earth is flat.
> Sure, it's fine if you keep the exact
> algorithm you choose to use a secret as long as that algorithm has been
> tested and vetted in the open by experts.
The above is also probably uncontroversial. It assumes there are
several equally strong, or at least several "strong enough" choices,
whose strength has been verified in the open by the top experts. In
that case, I suppose you won't find much complaint about the practice
of choosing *one* of them and then not telling anyone which one you
chose.[1]
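As a toy sketch of what that might look like (my own illustration, in
Python with the "cryptography" package; the CIPHER_CHOICE setting and
the encrypt() helper are made up for this example): both ciphers below
are publicly vetted AEADs, and which one gets used is read from a
private configuration value. Keeping that choice quiet costs nothing,
but it isn't what the security rests on.

    # Hypothetical sketch: pick one of two openly vetted AEAD ciphers
    # based on a private configuration value.  The secrecy of the
    # choice is a bonus at best; the strength comes from the ciphers.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import (
        AESGCM, ChaCha20Poly1305)

    CIPHER_CHOICE = os.environ.get("CIPHER_CHOICE", "aes")  # private

    def encrypt(key: bytes, plaintext: bytes, aad: bytes = b"") -> bytes:
        # key must be 32 bytes so it works with either cipher
        nonce = os.urandom(12)  # 96-bit nonce, as both AEADs expect
        aead = (AESGCM(key) if CIPHER_CHOICE == "aes"
                else ChaCha20Poly1305(key))
        return nonce + aead.encrypt(nonce, plaintext, aad)

Either branch is fine on its own; an attacker who learns which branch
you took has gained essentially nothing.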
Where I think you find meaningful differences of opinion is on the
question of whether there is a cryptographic expert (or team of
experts) skilled enough that it is *more secure* for them to develop
an algorithm in secret and use that, with the idea that the
"obscurity" is an *extra* hurdle for would-be attackers, on top of
the intrinsic strength of the algorithm itself.
I believe that is what Nathan was getting at when he said:
> But there are cases where security is enhanced by experts
> developing unknown algorithms.
Even if that is true, some would argue that those kinds of secrets
can be overcome by theft, espionage, bribery, physical force, etc.;
and that those hurdles are relatively minor (for malicious parties
with sufficient resources and determination) compared to the hurdle
posed by the intrinsic strength of a well-designed algorithm.
And pushing that latter view to its logical conclusion, it then makes
sense to pursue the best possible *intrinsic* algorithmic strength. I
believe all the top experts would be more confident that they had
achieved the strongest possible algorithm by exposing its development
to as many eyes as possible rather than as few.
[1]One big takeaway from my reading is that it is quite easy to
misapply strong hashes or to implement theoretically strong
algorithms ineffectively. If you yourself are not an expert (and if
you don't know whether you are, then you're definitely not), do not
try to roll your own. You may think "I'll just take these
*universally acknowledged* strong building blocks, combine and
rearrange them in my own secret way that no one else knows about,
and that will necessarily be more secure than any of those building
blocks used alone". Uh, no. Chances are your homegrown Frankencipher
will actually be *worse* than any of the single (proven) components
within it. Please don't.
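To make the "misapply strong hashes" part concrete, here is a small
Python sketch (my own illustration, nothing from the thread): bare
SHA-256 is as strong a hash as ever, but used directly for password
storage it is a misapplication, since it is fast and unsalted. A
vetted construction like PBKDF2 uses the very same primitive
correctly. The iteration count below is illustrative only.

    import hashlib, hmac, os

    password = b"correct horse battery staple"

    # Misapplication: strong primitive, wrong job.  Unsalted and fast,
    # so identical passwords collide and brute force is cheap.
    bad = hashlib.sha256(password).hexdigest()

    # The same primitive inside a vetted construction: salted and
    # deliberately slow.  (Iteration count is illustrative only.)
    salt = os.urandom(16)
    good = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000)

    def verify(candidate: bytes) -> bool:
        # Constant-time comparison -- another easy thing to get wrong.
        return hmac.compare_digest(
            hashlib.pbkdf2_hmac("sha256", candidate, salt, 600_000),
            good)

The failure mode lives in the application, not the primitive.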
John Y.