Why new Secure Internet solutions are technically Hard

Information Security is both very hard and very easy at the same time.

Internet Nasties are not merely a nuisance, or worse: they prevent the new, useful Applications and Networks like e-Commerce, i-EDI, e-Health, e-Banking, e-Government and other business/commercial transaction systems.

Perfect Security isn't possible: ask any bank.

Defenders need to be 100.00% correct, every minute of every day.
Attackers need just one weakness for a moment to get in.

Not all compromises/breaches are equal: they range from nothing of consequence up to attackers being in full control without the system owners being aware of it.

All 'Security Systems' can only be "good enough" for their role, which depends on many factors.
How long do you need to keep your secrets? Minutes or Decades?

Building a system isn't an end-point:
"Information Security is a journey, not a destination" (Schneier)
Security has two aspects:
  • creation and
  • operation or patrolling.
Not only do you have to build it safe, you have to work to keep it safe and have real-time Intrusion Alarms being monitored: just like in the real world.

Best efforts are needed to provide the mechanisms to keep Information Confidential + Correct, but equal attention has to be paid to detecting and tracing breaches and exploits.

The usual acronyms used for Information Security are "CIA" and "AAA":
  • Confidentiality (secrets stay secret)
  • Integrity (data isn't changed, added or deleted unless authorised)
  • Availability (defeated by "denial of service": can't get to your data == no system)
  • Authentication (proof of Identity)
  • Authorisation (what the ID can do) and
  • Accounting/Auditing (what was done)
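As a rough illustration, the "AAA" trio can be sketched as guard functions around every request. This is a minimal sketch, not a real access-control system: the user table, permission map, salt and audit log are all invented for the example.

```python
# Minimal AAA sketch: authenticate, authorise, then account for each action.
# The user table, permission map, salt and audit log are invented for illustration.
import hashlib
import hmac

USERS = {"alice": hashlib.sha256(b"salt" + b"correct horse").hexdigest()}
PERMISSIONS = {"alice": {"read"}}   # Authorisation: what each identity may do
AUDIT_LOG = []                      # Accounting: append-only record of what was done

def authenticate(user: str, password: str) -> bool:
    """Authentication: proof of identity via a salted password hash."""
    digest = hashlib.sha256(b"salt" + password.encode()).hexdigest()
    # constant-time comparison avoids leaking information through timing
    return user in USERS and hmac.compare_digest(USERS[user], digest)

def authorise(user: str, action: str) -> bool:
    """Authorisation: is this action permitted for this identity?"""
    return action in PERMISSIONS.get(user, set())

def request(user: str, password: str, action: str) -> bool:
    """Run all three A's for one request; every outcome is logged."""
    ok = authenticate(user, password) and authorise(user, action)
    AUDIT_LOG.append((user, action, "allowed" if ok else "denied"))
    return ok
```

Note that the audit entry is written whether the request succeeds or fails: detecting and tracing attempts matters as much as blocking them.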

If this were all there were to it, then the Internet wouldn't be riddled with malware, keyloggers, worms, SPAM, etc.

There's more...

First, all systems have to be "hardened":
 be difficult to break into.
Second is Administrative Control:
 every system and application change can only be made by a few competent, diligent people authorised to do so.

Once unauthorised system changes can occur, the system is untrustworthy and can't be used to store, handle or transmit confidential information.
User-controlled systems, not just poor systems and Apps, are the source of the woeful state of the Internet.

This is why all efforts to create general Internet Security solutions are doomed before they start.

There are thousands of Secure Internets running in Government, Defence and Finance/Banking.
They use a simple technique:
  • strong security boundaries, or "air gaps".
Within these networks, with their 'military grade' encrypted links, there is no call for "secure protocols". They run secure webservers with plain HTTP, not https/SSL.

Once you have hardened systems, nailed down links, controlled systems and strong identification + authentication of users, there's no need for complex protocols or multiple levels of encryption.

Under these conditions, even Windows is good enough and doesn't need virus/malware scanning to keep it clean.
A serious and determined hacker always targets the weakest points, so stronger systems are used for the most secret or sensitive networks.

The approach is called "defence in depth", as opposed to "hard shell, soft centre".
Getting through the first/outer defence shouldn't give attackers the keys of the kingdom.

The first commercial computers were built and sold in 1950; by 1968 there was enough concern about failing projects that NATO sponsored the first conference on "Software Engineering".
  • 1969 saw the birth of both the Internet and Unix (and 'C', the universal programming language)
  • 1970 saw the enumeration and formalisation of Computer Security principles.
  • 1977 saw the invention of usable "Public Key" encryption systems.
By 1980, everything that was needed to prevent and control all the evils running around unchecked on the Internet was known and being practised somewhere.

The problem isn't one of Knowledge, but of implementation.

The specific problem is endemic in I.T.:
  • "reinventing the wheel"
Unless you've a PhD and a decade or two of "industry" experience, you really don't want to be reinventing 'Computer Security'. If you do need to invent something new, you need to go through a public and extended examination by experts in the field.
Just as the US NIST did when selecting the AES (Advanced Encryption Standard) through an open, public competition.

Even the encryption algorithm dreamed up behind closed doors for GSM phones (A5/1) by ETSI, the European telephone standards body, was quickly cracked when exposed to the professionals in the field...

"Security by obscurity" has been proven repeatedly to be a fool's paradise.
The algorithms and processes need to be public, transparent and well examined. Operational security comes down to "key management" and diligence.
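The point can be made concrete with a fully public algorithm from the Python standard library. HMAC-SHA-256 is documented down to the last bit, yet tampering is still detected, because the security rests entirely on the secret key. The key and messages below are invented for the sketch; managing that key is the real operational job.

```python
# The algorithm (HMAC-SHA-256) is completely public; only the key is secret.
import hashlib
import hmac
import secrets

key = secrets.token_bytes(32)   # "key management" is where the real work lies

def sign(message: bytes) -> bytes:
    """Produce an authentication tag for the message under the secret key."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    """Accept the message only if the tag matches; comparison is constant-time."""
    return hmac.compare_digest(sign(message), tag)

tag = sign(b"transfer 100 to alice")
```

Anyone may read the algorithm's specification; without the key, a forged message such as `b"transfer 999 to mallory"` still fails verification.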

So what might solutions look like?
  • PKI isn't "the answer":
    Single point of attack - too much trust in an external entity without "skin in the game".
    Hijacked connections and wrongly granted certificates are easy exploits.
  • Nor are VPNs "the answer":
    They don't identify people and are useless without hardened/controlled systems everywhere.
There is a whole raft of systems out there on the Wild Wild Web that are both (very) high-value targets and have to be cheap and effective: servers run by ISPs and hosting providers.
They do get compromised occasionally, but not in large numbers or for long. Nothing like the 'botnets' of millions, yes, millions, of Windows machines.

Simple tools that work and are used by those that care...
  • SSH: 1995 (SSH-2 Internet-Draft, 1997)
  • PGP: 1991
  • POSIX (~1990) [Linux, Unix, Solaris, AIX, OS/X]  and SELinux (2003) for the truly paranoid.
PGP, "Pretty Good Privacy", is much stronger than it might sound. (Computer Nerds often indulge in puns and understatement.) The US Government regarded it as so good that its creator, Phil Zimmermann, was investigated for exporting Military "munitions". It took around three years for the case to be dropped.

Eugene Kaspersky, the principal of the security company Kaspersky, advocates a global "Internet passport" for everyone:
"Everyone should and must have an identification, or internet passport," he was quoted as saying. "The internet was designed not for public use, but for American scientists and the US military. Then it was introduced to the public and it was wrong...to introduce it in the same way."
Elsewhere he notes that smartphones partly solve the Identity issue. Each mobile phone has a unique identifier: an IMEI. He suggests that this can be associated with an individual, in the same way that phone numbers are. Whilst a start, there is a basic flaw: who's using the device?

Any Global system smacking of "Big Brother" just isn't going to happen. Revolutions have been caused by less.

There is also a more subtle conflation between two different needs/worlds:
  • when you need trusted, secure systems and
  • when the freewheeling, anarchic Wild Wild Web is just right.

But the notions behind Kaspersky's proposal are good and fundamental to any Secure Internet solution:
  • registered and identified devices, like mobile phones
  • strong user identification and forced authentication
  • off-line processes associating people (and how to find them) with digital identities
  • explicit registration or authorisation of a device and user onto a service
    with the inverse: revocation of compromised, fake/forged/stolen/lost identities.
  • administrative control of all devices on a network.
All of this is already done globally on mobile phone networks.
It's a model that's known to work, scales to immense numbers, is locally administered and is socially acceptable.
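The registration-and-revocation half of that model is simple enough to sketch. The identifiers and the two-dictionary "registry" below are invented for illustration (a real mobile network keeps equivalent state in its subscriber and equipment registers); the point is only that explicit registration and its inverse are cheap, well-understood operations.

```python
# Sketch of a device/user registry with explicit registration and revocation,
# loosely modelled on how mobile networks track handsets by IMEI.
# All identifiers and data structures here are invented for illustration.

registry = {}    # device_id -> user_id: registered and identified devices
revoked = set()  # compromised, fake/forged/stolen/lost identities

def register(device_id: str, user_id: str) -> None:
    """Explicit registration of a device and user onto the service."""
    registry[device_id] = user_id

def revoke(device_id: str) -> None:
    """The inverse: cancel a compromised or stolen identity."""
    revoked.add(device_id)

def is_trusted(device_id: str) -> bool:
    """A device is trusted only if registered and not revoked."""
    return device_id in registry and device_id not in revoked

# Example: register an IMEI-style identifier, then revoke it after a theft.
register("356938035643809", "alice")
ok_before = is_trusted("356938035643809")
revoke("356938035643809")
ok_after = is_trusted("356938035643809")
```

The revocation set never shrinks, mirroring the requirement that a blacklisted identity stays repudiated even if someone tries to re-register it.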
