Trust in Cryptography – Part 1: Snake oil

By Scram Software, 10 October 2017

When purchasing a cryptography product, consumers are forced to trust the claims of the manufacturer or salesperson. It’s well beyond the technical ability of most users (and even experienced software developers, system administrators and other I.T. professionals) to verify claims about security.

Why is this the case? Because correctly encrypted data looks random, and it’s virtually impossible to tell whether something has been encrypted securely or insecurely just by looking at it – “good” random looks just the same as “bad” random. Can you tell if these two sets of numbers come from good or bad encryption?

e1 59 4d 68 7c cd 1c 3c  áYMh|Í.<
29 d8 99 03 26 fe 65 2d  )ؙ.&þe-
31 b0 cc 35 cf 76 d4 a5  1°Ì5ÏvÔ¥
dc a5 bf 73 2e 2d 50 9b  Ü¥¿s.-P›
8e 69 46 c1 bb 78 3f e6  ŽiFÁ»x?æ
12 d8 a7 88 8c 78 a1 44  .اˆŒx¡D
8b 61 d2 66 7f 5b b9 0c  ‹aÒf[¹.
59 ef d0 34 80 13 39 3d  YïÐ4€.9=
e7 24 a6 c5 26 88 59 ad  ç$¦Å&ˆY­
65 0f 36 9c 2b 3d 54 f4  e.6œ+=Tô
20 12 91 77 ca 0d 0e 78  ..‘wÊ..x
87 8c 6e a2 22 f3 64 e7  ‡Œn¢"ódç
0d 2d 93 bd c3 b0 ef 81  .-“½Ã°ï
7b 89 90 d1 d6 08 e5 6b  {‰ÑÖ.åk
bf 0d 17 2a 07 81 4d 59  ¿..*.MY
63 01 e9 76 a8 66 9c 1e  c.év¨fœ.
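To see why “good” random is indistinguishable from “bad” random to the naked eye, consider this small Python sketch. It compares bytes from the operating system’s cryptographically strong generator with bytes from a trivially predictable linear congruential generator (the LCG and its constants are purely illustrative); a naive statistic such as the mean byte value cannot tell them apart.

```python
import os

def lcg_bytes(seed, n):
    """Weak linear congruential generator -- predictable, NOT secure."""
    out = bytearray()
    state = seed
    for _ in range(n):
        state = (1103515245 * state + 12345) % 2**31
        out.append(state & 0xFF)  # keep only the low byte
    return bytes(out)

strong = os.urandom(1024)   # operating-system CSPRNG
weak = lcg_bytes(42, 1024)  # fully predictable from the seed

# A naive "eyeball" statistic: the mean byte value. Both hover around
# 127.5, so this simple check cannot separate strong from weak.
print(sum(strong) / len(strong))
print(sum(weak) / len(weak))
```

Of course, a cryptanalyst armed with the generator’s design could predict every byte of the weak stream – which is exactly why looking at the output alone proves nothing.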

Instead, verification of manufacturer claims requires careful analysis of the system’s design and implementation (termed a “peer-review” or “security audit”), together with thorough testing.

This is a very different situation from the automobile industry, for example. If a car maker claims that its car has a 0–100 km/h time of 5.0 s, it is possible for the average consumer to test and replicate the claim, even if they need to go to a race track, use high-octane petrol and enlist a highly skilled driver.

This lack of verifiability has led to three broad problems in the industry, which we will discuss in this multi-part article series:

  1. Snake oil – a product with questionable or unverified qualities or benefits, which can also include unsubstantiated claims, fraudulent goods or plain “quackery”
  2. Unintended security vulnerabilities due to flawed design or engineering mistakes, which have resulted in exploits in common protocols such as Wi-Fi and Bluetooth. These will be discussed in part 2 of this series.
  3. Intentionally weakened or “back-doored” products – such as the infamous incidents with IBM Lotus Notes and Dual_EC_DRBG, discussed in part 3 of this series.

By reading this series and applying its recommendations, you can protect yourself by choosing the right solution.

We’ll start with the snake oil problem.

Snake Oil: What it is, and how to identify it.

The original snake oil

Real snake oil was introduced to America by the Chinese labourers who toiled to build the country’s railroads in the 1800s. It was used in Traditional Chinese Medicine (TCM) as a skin application to ease painful joints. The sale of medicines and elixirs became a mini-industry in those times, and competing medicine salesmen denounced the TCM snake oil as a fake cure while proclaiming their own products to be the real thing. The term “snake oil” thus became synonymous with, at best, ineffective products, and came to be commonly understood as referring to fake goods sold by con men.

In the world of cryptography, snake oil refers to false claims or fancy technical lingo used to sell a product. While there are a handful of examples of outright fake cryptography solutions, the hidden risk of snake oil stems primarily from well-meaning computer engineers, combined with over-zealous marketing departments, who oversell the capabilities of their products without fully understanding their true security limitations.

The real danger of snake oil: misplaced trust & false confidence

For products that are innocuous fakes and do no consequential damage, your loss is simply the money you paid for the product.

However, in cryptography, the consequences can be far more severe. Placing your trust in the wrong system can lead to far greater consequential losses if your secrets are exposed.

Psychologically, this is natural. After all, when we trust a person, we tend to share more intimate information with them than with someone we don’t trust. The same goes for a cryptosystem: the more we trust it, the more willing we are to put sensitive information into it, and the higher the losses if that system is compromised.

An encryption product or methodology is only good until its first failure. In ancient cities that built sturdy walls to repel unwelcome individuals or enemies, the inhabitants felt safe until a resolute attacker breached the defenses and sacked the city. Likewise, the Titanic was deemed unsinkable until it met its first iceberg. Your encrypted data or communications are safe until your encryption is cracked: there are examples of encryption schemes that have remained secure for decades, and other encryption schemes that were cracked in just a few years. Obviously you want to avoid the latter outcome at pretty much all costs. But how?

True, tried and tested

Despite the marketing claims that a security product vendor may make, the accepted approach in cryptography is to only use well-studied encryption techniques, known as ciphers.

The longer an approach has been public, and the longer it has been subjected to analysis and testing by the cryptographic community, the more secure it can be regarded. Old, tried and trusted approaches are therefore good – though this is no guarantee that they will never be cracked. However, when hundreds or thousands of the world’s top academics and experts have tried, and failed, to break a cipher, this provides far stronger evidence of security than anything a manufacturer may claim in its marketing.

An example of this is the AES (Advanced Encryption Standard) which has been rigorously studied for nearly two decades. Academics from around the world have worked to find weaknesses and exploits and, to date, only a small number of theoretical attacks (those which cannot be practically implemented) have been found, with these attacks having negligible tangible effect on the security of AES.

In contrast, a new product with fancy marketing can claim to be secure, but unless the underlying encryption methodology has been well studied and verified, the marketing claims may as well just be quackery.

Later in this article we investigate some of the warning signs that should alert you to potentially weak or bogus offerings, including the use of proprietary algorithms or “new” approaches.

What you can do to avoid snake oil

Unless you are a very experienced cryptographer or cryptanalyst (code breaker), it is highly unlikely that you will have the necessary background and training to assess a cryptography product. We recommend general background reading on the fundamentals of cryptography, together with analysis from experts.

Two of the foremost established experts are Phil Zimmermann, who created Pretty Good Privacy (PGP) for email encryption in 1991, and the eminent cryptology practitioner and lecturer Bruce Schneier. Both experts often illustrate the dangers of snake oil and write about the security/encryption industry in general. “Snake oil” is such an established term that it even has its own Wikipedia entry, which makes an excellent starting point for technical and non-technical readers alike.

We will summarise their essays here in this article while adding our own industry observations.

What is available out there?

There are many cryptography products on the market worldwide. Bruce Schneier and his colleagues Kathleen Seidel and Saranya Vijayakumar conducted a survey of the market in February 2016 that repeated similar research carried out by others in 1999. The objective was to approximate the number of available hardware and software encryption products, both free and commercial. They found 865 solutions from 587 different organizations worldwide.

That piece of research went on to summarise findings in related areas, such as product quality in general, and any implications for U.S. policy governing such products. As a potential buyer of state-of-the-art cryptography solutions, you don’t need to know all of the technical, business and legal implications of investing in the different cryptography products on the market. Instead, the ability to filter the bad options from the good ones lies largely in the way a product is presented to buyers. If you can smell snake oil in the marketing presentation, you can avoid low-quality solutions. Knowing the potential pitfalls, and raising your own awareness of the vagaries of the encryption market, reduces the risk of succumbing to the charms of the snake oil vendors.

At the same time, you’ll also have to accept that some unfamiliar next-generation technologies, described in dense technical language, are legitimate solutions and not snake oil. For example, consider Intel’s Software Guard Extensions (SGX) technology, introduced with the “Skylake” processor generation, where the encryption engine and keys are stored in the CPU and are inaccessible to other code, even code running at higher privilege levels.

Knowing the warning signs

Here is a list of warning signs that might indicate that a product is snake oil.

The checkbox claim: “We use AES-256 and therefore our crypto is military grade.”
This is perhaps the most prevalent warning sign. Yes, AES-256 is undoubtedly a well-studied cipher, and so well trusted that the US Government mandates its use when encrypting Top Secret classified information. However, just because a given product uses AES-256 does not guarantee security! There are many ways in which software developers can incorrectly use AES-256, and any such mistake will result in a fundamentally flawed system that provides almost no security. It still takes highly specialised expertise to integrate AES-256 into an overall secure cryptosystem – expertise which we estimate is limited to a few thousand people in the world, compared to an estimated 18.2 million software developers. Another way to understand the flaw in this logic is to look at the sport of tennis: just because a player uses a Wilson tennis racquet does not mean he or she will play as well as Roger Federer. The racquet has to be used in precisely the correct way and cannot make up for a talentless player.
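One classic way to misuse even a strong cipher is ECB mode: encrypting each block independently with the same key, so that identical plaintext blocks produce identical ciphertext blocks and the ciphertext leaks the structure of the plaintext. The sketch below demonstrates this with a toy 16-byte Feistel cipher (a hypothetical stand-in for AES, so the example runs with only the standard library – it is NOT secure and not real AES, but real AES used in ECB mode leaks in exactly the same way):

```python
import hashlib

def _round(half, key, i):
    # Round function: hash of (key, round number, half-block)
    return hashlib.sha256(key + bytes([i]) + half).digest()[:8]

def toy_encrypt_block(block, key, rounds=8):
    """Toy 16-byte Feistel block cipher -- illustrative only, NOT secure."""
    left, right = block[:8], block[8:]
    for i in range(rounds):
        left, right = right, bytes(a ^ b for a, b in zip(left, _round(right, key, i)))
    return left + right

key = b"sixteen byte key"
plaintext = b"ATTACK AT DAWN!!" * 4  # four identical 16-byte blocks

# "ECB mode": each block encrypted independently with the same key.
blocks = [plaintext[i:i+16] for i in range(0, len(plaintext), 16)]
ciphertext = b"".join(toy_encrypt_block(b, key) for b in blocks)

# Identical plaintext blocks yield identical ciphertext blocks,
# so the ciphertext reveals the repetition in the plaintext.
ct_blocks = [ciphertext[i:i+16] for i in range(0, len(ciphertext), 16)]
print(len(set(ct_blocks)))  # 1: all four ciphertext blocks are the same
```

A product can truthfully say “we use AES-256” and still make exactly this kind of mistake – which is why the checkbox claim alone proves nothing.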
Country of origin
Another widespread but easily refuted claim is that the country of origin of a security product matters. This claim understandably plays to a consumer’s preconceptions. We have seen “Made in Germany” used as a marketing tool to imply quality – obviously leveraging the reputations of automobile manufacturers like Mercedes-Benz, BMW and Audi. There is also some suspicion about cryptosystems originating from the U.S.A., no doubt caused by historical events in which encryption has been intentionally weakened (as we explain in Part 3 of this article series). However, the blanket rule that country of origin determines the trustworthiness of a system ignores the fact that good and bad crypto can come from any country; it is the vendor’s expertise and transparency that matter, not where it is based.
Inflated claims
Some vendor sites make many claims which must be critically considered. “Buyer beware” is the watchword – yet most technical buyers have limited ability to test such claims. The veracity of these claims can, however, be weighed against the reputation of the vendor. What expertise and resources does the vendor have? A long-established security firm may be more credible than a small software house, for example, but that does not guarantee that a specific product is secure. We will discuss this in Part 3 of this article series.
New approach
In the user guide for PGP, published in 1991, Phil Zimmermann describes how software engineers can fool themselves into believing that they have invented the ultimate unbreakable algorithm – and openly admits that he once fooled himself. Any product that a vendor describes as unbreakable is best avoided until enough time has passed for it to be tested by the market.
Apart from the theoretical one-time pad (see below), every cipher can in principle be cracked, even if doing so would realistically take trillions of years of brute-force effort. In practice there is no such thing as “unbreakable”, and if a product claims to be “unbreakable”, the vendor simply does not understand security. Your task is to understand what it would take to crack the encryption, and then decide whether the likelihood of a breach is acceptably low for your own specific requirements.
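To put “trillions of years” in perspective, here is a back-of-the-envelope calculation for exhausting a 128-bit key space. The attacker’s speed of one trillion key trials per second is an assumed, generous figure, not a measured one:

```python
# Rough brute-force estimate for a 128-bit key, assuming an attacker
# who can test 10**12 keys per second (a generous, hypothetical rate).
keys = 2 ** 128
keys_per_second = 10 ** 12
seconds_per_year = 60 * 60 * 24 * 365

years = keys / (keys_per_second * seconds_per_year)
print(f"{years:.2e} years")  # on the order of 10**19 years
```

Even at that speed, the search takes around ten billion billion years – vastly longer than the age of the universe. “Breakable in principle” and “breakable in practice” are very different things, which is exactly the judgement a buyer must make.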
Proprietary/secret algorithms
It is only when algorithms are made public that they can be assessed by the cryptographic community at large. Any supplier claiming that their algorithms are secure because they are secret or “patent pending”, for example, is being disingenuous or at best naive. Algorithms must be in the public domain so that any weaknesses or mistakes can be detected and corrected. A product whose algorithms have not been subjected to the rigor of public scrutiny has not passed even the first basic peer test of its capabilities.
Mentioning “One-time pad encryption”
While a true one-time pad (OTP) is unbreakable, it is completely impractical for commercial use: a brand-new pad of genuinely random numbers, at least as long as the message itself, is required for every single encryption, and it must be delivered to the recipient over a secure channel. Products that advertise “one-time pad encryption” almost invariably generate the pad with a pseudo-random generator instead – which makes them stream ciphers, not one-time pads, and makes the unbreakability claim false.
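A minimal sketch shows both why the OTP works and why the “one-time” part is non-negotiable – reusing a pad immediately leaks the XOR of the two plaintexts to anyone who captures both ciphertexts:

```python
import os

def otp(data, pad):
    """XOR data with a pad of equal or greater length (same op decrypts)."""
    assert len(pad) >= len(data), "one-time pad must be as long as the message"
    return bytes(a ^ b for a, b in zip(data, pad))

msg = b"meet me at noon"
pad = os.urandom(len(msg))  # a fresh, truly random pad for this message

ct = otp(msg, pad)
assert otp(ct, pad) == msg  # applying the same pad again decrypts

# Why "one-time": reusing the pad for a second message leaks information.
msg2 = b"attack at dawn!"
ct2 = otp(msg2, pad)        # pad reuse -- a fatal mistake
leak = bytes(a ^ b for a, b in zip(ct, ct2))
print(leak == otp(msg, msg2))  # True: ct XOR ct2 == msg XOR msg2
```

Because the pad must be as long as all the data you will ever encrypt, and must be exchanged securely in advance, the OTP solves the key-distribution problem by assuming it away – which is precisely why a commercial product invoking it deserves suspicion.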
Glowing reviews or endorsements
The old adage is true – “Never mind what they say, watch what they do”. Review sites, for example, usually generate revenue from click-throughs to products they review and perhaps endorse. Take this brief snippet from a review site: “None of the encryption programs we’ve reviewed have back doors, according to each company, so you can choose encryption software from our lineup with confidence.” Really?

Phil Zimmermann, on the other hand, received a backhanded endorsement from the US security establishment when he published PGP in 1991. PGP generated such a wave of alarm within the US Government that Zimmermann was subject to criminal investigation for almost two years for allowing such a powerful tool to be exported outside US borders. He himself reckoned that, as no other cryptographer had been subjected to that level of investigation, this represented a ringing endorsement of PGP.

The world of cryptography has often been compared to the pharmaceutical industry, especially from the customer’s perspective. Messages like “Trust us, we know what we are doing,” are intended to reassure the unwary, while deflecting true investigation and hard questions. Blinding us with science is another variant of this approach. Essentially, if you or I have great difficulty in distilling the true meaning from a torrent of technobabble, then it is likely the author is purposely concealing something. Of course, mentioning the pharmaceutical industry brings us back to the theme of this article – snake oil.

While this list of warning signs is not exhaustive by any means, it does present a baseline for potential buyers to identify legitimate encryption technologies from the low-quality alternatives tainted with snake oil. Avoid the snake oil products, and work with legitimate vendors to understand how their products can deliver the results they promise.

The next article in this series will explore unintended security vulnerabilities due to design flaws. You may be surprised at which everyday technologies are actually insecure, despite using encryption.
