Thursday 8 October 2015

Backdoor (computing), cryptosystem and algorithm

The threat of backdoors surfaced when multiuser and networked operating systems became widely adopted. Petersen and Turn discussed computer subversion in a paper published in the proceedings of the 1967 AFIPS Conference.[4] They noted a class of active infiltration attacks that use "trapdoor" entry points into the system to bypass security facilities and permit direct access to data. The use of the word trapdoor here clearly coincides with more recent definitions of a backdoor. However, since the advent of public key cryptography the term trapdoor has acquired a different meaning (see trapdoor function), and thus the term "backdoor" is now preferred. More generally, such security breaches were discussed at length in a RAND Corporation task force report published under ARPA sponsorship by J.P. Anderson and D.J. Edwards in 1970.[5]
A backdoor in a login system might take the form of a hard-coded username and password combination which gives access to the system. A famous example of this sort of backdoor was used as a plot device in the 1983 film WarGames, in which the architect of the "WOPR" computer system had inserted a hard-coded password (his dead son's name) which gave the user access to the system, and to undocumented parts of the system (in particular, a video game-like simulation mode and direct interaction with the artificial intelligence).
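To make the pattern concrete, here is a minimal, self-contained C sketch of such a hard-coded credential check. Everything in it is illustrative: the "joshua" credential is a nod to the WarGames plot, and the database lookup is a stub.

    #include <stdio.h>
    #include <string.h>

    /* Stub for the legitimate check against the real user database. */
    static int lookup_and_verify(const char *user, const char *pass)
    {
        (void)user; (void)pass;
        return 0;  /* deny by default in this sketch */
    }

    /* Hypothetical login routine containing a hard-coded backdoor. */
    static int check_login(const char *user, const char *pass)
    {
        /* The backdoor: a username/password pair baked into the binary. */
        if (strcmp(user, "joshua") == 0 && strcmp(pass, "joshua") == 0)
            return 1;  /* access granted, bypassing normal authentication */

        return lookup_and_verify(user, pass);
    }

    int main(void)
    {
        printf("backdoor login: %s\n", check_login("joshua", "joshua") ? "granted" : "denied");
        printf("ordinary login: %s\n", check_login("alice", "secret") ? "granted" : "denied");
        return 0;
    }

Because the credential lives only in the compiled binary, no audit of user accounts or configuration will reveal it; only source review or reverse engineering would.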
Although backdoors in systems using proprietary software (software whose source code is not publicly available) are not widely acknowledged, they are nevertheless frequently exposed. Programmers have even succeeded in secretly installing large amounts of benign code as Easter eggs in programs, although such cases may involve official forbearance, if not actual permission.
Many computer worms, such as Sobig and Mydoom, install a backdoor on the affected computer (generally a PC on broadband running Microsoft Windows and Microsoft Outlook). Such backdoors appear to be installed so that spammers can send junk e-mail from the infected machines. Others, such as the Sony/BMG rootkit distributed silently on millions of music CDs through late 2005, are intended as DRM measures—and, in that case, as data gathering agents, since both surreptitious programs they installed routinely contacted central servers.
A sophisticated attempt to plant a backdoor in the Linux kernel, exposed in November 2003, added a small and subtle code change by subverting the revision control system.[6] In this case, a two-line change appeared to check the root access permissions of a caller to the sys_wait4 function, but because it used the assignment operator = instead of the equality check ==, it actually granted root permissions to the caller. The difference is easily overlooked, and could even be interpreted as an accidental typographical error rather than an intentional attack.[7]
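The trick is easiest to appreciate in code. Below is a self-contained C analog; the original kernel lines, as described in the reports cited above, are reproduced in the comment. Note how the single = both assigns uid 0 (root) and evaluates to false, so the apparent error branch never fires.

    #include <stdio.h>

    /* The 2003 change was reported to look like this:
     *
     *     if ((options == (__WCLONE|__WALL)) && (current->uid = 0))
     *             retval = -EINVAL;
     *
     * "current->uid = 0" assigns root's uid instead of comparing it, and
     * the assignment's value (0) is false, so no error is ever returned. */

    struct task { unsigned int uid; };

    int main(void)
    {
        struct task caller = { .uid = 1000 };  /* an ordinary user */
        int options = 0x60;                    /* the attacker's magic flag combination */

        /* Looks like a permissions check; actually grants uid 0. */
        if (options == 0x60 && (caller.uid = 0))
            printf("rejected\n");              /* never reached */

        printf("caller uid is now %u\n", caller.uid);  /* prints 0: root */
        return 0;
    }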
The mystery of why RSA would use a flawed, NSA-championed algorithm as the default random number generator for several of its encryption products appears to be solved, and the answer is utterly banal, if true: the NSA paid it to.
Reuters reports that RSA received $10m from the NSA in exchange for making the agency-backed Dual Elliptic Curve Deterministic Random Bit Generator (Dual EC DRBG) its preferred random number algorithm, according to newly disclosed documents provided by whistleblower Edward Snowden.
If that figure sounds small, that's because it is. Tech giant EMC acquired RSA for $2.1bn in 2006 – around the same time as the backroom NSA deal – so it seems odd that RSA would kowtow to the g-men so cheaply.
But according to Reuters, at the time, things weren't looking so good for the division of RSA that was responsible for its BSafe encryption libraries. In 2005, those tools brought in a mere $27.5m of RSA's $310m in annual revenue, or just 8.9 per cent.
By accepting $10m from the NSA, as Reuters claims, the BSafe division managed to increase its contribution to RSA's bottom line by more than a third.
It wasn't long after RSA switched to Dual EC DRBG as its default, however, that security experts began to question whether this new algorithm was really all it was cracked up to be. In 2007, a pair of Microsoft researchers observed that the code contained flaws that had the potential to open "a perfect backdoor" in any encryption that made use of it [PDF].
Such concerns remained largely within the province of security experts until earlier this year, when documents leaked by Snowden confirmed the existence of NSA-created backdoors in encryption based on RSA's technology.
In late September, RSA itself even warned its customers that they should choose a different cryptographically secure random number generator while it reviews its own products for potential vulnerabilities. OpenSSL, the software library used by countless applications to perform encryption and decryption, has also written off Dual EC DRBG. How the NSA came to be involved with the algorithm is discussed in detail here by computer security expert Bruce Schneier.
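For the curious, "choosing a different generator" is often a small code change. As a minimal sketch (not RSA's BSafe API, which is not shown here), this C program draws random bytes from OpenSSL's RAND_bytes, which does not use Dual EC DRBG by default; compile with -lcrypto:

    #include <stdio.h>
    #include <openssl/rand.h>

    /* Minimal sketch: obtain key material from OpenSSL's default CSPRNG
     * rather than relying on a vendor's default DRBG choice. */
    int main(void)
    {
        unsigned char key[32];

        if (RAND_bytes(key, sizeof key) != 1) {
            fprintf(stderr, "RNG failure\n");
            return 1;
        }

        for (size_t i = 0; i < sizeof key; i++)
            printf("%02x", key[i]);
        printf("\n");
        return 0;
    }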
For its part, however, RSA maintains that it never conspired with the NSA to compromise the security of its products, and that if the government knew how to break RSA's encryption, it never let on.
"RSA always acts in the best interest of its customers and under no circumstances does RSA design or enable any back doors in our products," the company wrote in a canned statement. "Decisions about the features and functionality of RSA products are our own." ®=
In January 2014, a backdoor was discovered in certain Samsung Android products, such as the Galaxy devices. The proprietary Samsung Android versions are fitted with a backdoor that provides remote access to the data stored on the device. In particular, the Samsung Android software that handles communications with the modem, using the Samsung IPC protocol, implements a class of requests known as remote file server (RFS) commands, which allow the backdoor operator to perform I/O operations on the device's storage remotely via the modem. As the modem is running Samsung proprietary software, it is likely that it offers over-the-air remote control that could then be used to issue the RFS commands and thus to access the file system on the device.[8]

Harder-to-detect backdoors involve modifying object code rather than source code – object code is much harder to inspect, as it is designed to be machine-readable, not human-readable. These backdoors can be inserted either directly in the on-disk object code, or at some point during compilation, assembly, linking, or loading – in the latter case the backdoor never appears on disk, only in memory. Object code backdoors are difficult to detect by inspection of the object code itself, but are easily detected by simply checking for changes (differences), notably in length or in checksum, and in some cases can be detected or analyzed by disassembling the object code. Further, object code backdoors can be removed (assuming source code is available) by simply recompiling from source.
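A minimal C sketch of the kind of integrity check described above: comparing a binary's length and a simple additive checksum against values recorded from a trusted copy. The file path and expected values are placeholders; a real deployment would use a cryptographic hash such as SHA-256.

    #include <stdio.h>

    int main(void)
    {
        const char *path = "/usr/bin/login";            /* binary under suspicion */
        const long expected_len = 123456;               /* placeholder known-good size */
        const unsigned long expected_sum = 0xDEADBEEF;  /* placeholder known-good checksum */

        FILE *f = fopen(path, "rb");
        if (!f) { perror(path); return 1; }

        long len = 0;
        unsigned long sum = 0;
        int c;
        while ((c = fgetc(f)) != EOF) {
            sum += (unsigned char)c;   /* naive additive checksum, for illustration */
            len++;
        }
        fclose(f);

        if (len != expected_len || sum != expected_sum)
            printf("MISMATCH: %s may have been modified\n", path);
        else
            printf("%s matches recorded length and checksum\n", path);
        return 0;
    }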
Thus for such backdoors to avoid detection, all extant copies of a binary must be subverted, and any validation checksums must also be compromised, and source must be unavailable, to prevent recompilation. Alternatively, these other tools (length checks, diff, checksumming, disassemblers) can themselves be compromised to conceal the backdoor, for example detecting that the subverted binary is being checksummed and returning the expected value, not the actual value. To conceal these further subversions, the tools must also conceal the changes in themselves – for example, a subverted checksummer must also detect if it is checksumming itself (or other subverted tools) and return false values. This leads to extensive changes in the system and tools being needed to conceal a single change.
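The counter-subversion described above can also be sketched. Here is a toy C model of a dishonest checksummer: when asked to sum a file on its "protected" list (the backdoored binary, or the tool itself), it reports a recorded clean value instead of computing the real one. All paths and values are invented for illustration.

    #include <stdio.h>
    #include <string.h>

    struct lie { const char *path; unsigned long fake_sum; };

    /* Files whose modifications must be concealed. */
    static const struct lie lies[] = {
        { "/usr/bin/login", 0x1234ABCDUL },  /* the backdoored binary */
        { "/usr/bin/sum",   0x5678EF01UL },  /* the subverted checksummer itself */
    };

    static unsigned long real_sum(const char *path)
    {
        FILE *f = fopen(path, "rb");
        unsigned long sum = 0;
        int c;
        if (!f) return 0;
        while ((c = fgetc(f)) != EOF)
            sum += (unsigned char)c;
        fclose(f);
        return sum;
    }

    unsigned long checksum(const char *path)
    {
        for (size_t i = 0; i < sizeof lies / sizeof lies[0]; i++)
            if (strcmp(path, lies[i].path) == 0)
                return lies[i].fake_sum;   /* conceal the modification */
        return real_sum(path);             /* honest answer for everything else */
    }

    int main(int argc, char **argv)
    {
        if (argc > 1)
            printf("%s: %08lx\n", argv[1], checksum(argv[1]));
        return 0;
    }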
Because object code can be regenerated by recompiling (reassembling, relinking) the original source code, making a persistent object code backdoor (without modifying source code) requires subverting the compiler itself – so that when it detects that it is compiling the program under attack it inserts the backdoor – or alternatively the assembler, linker, or loader. As this requires subverting the compiler, this in turn can be fixed by recompiling the compiler, removing the backdoor insertion code. This defense can in turn be subverted by putting a source meta-backdoor in the compiler, so that when it detects that it is compiling itself it then inserts this meta-backdoor generator, together with the original backdoor generator for the original program under attack. After this is done, the source meta-backdoor can be removed, and the compiler recompiled from original source with the compromised compiler executable: the backdoor has been bootstrapped. This attack dates to Karger & Schell (1974), and was popularized in Thompson (1984), entitled "Reflections on Trusting Trust"; it is hence colloquially known as the "Trusting Trust" attack. See compiler backdoors, below, for details. Analogous attacks can target lower levels of the system, such as the operating system, and can be inserted during the system booting process; these are also mentioned in Karger & Schell (1974), and now exist in the form of boot sector viruses.[9]
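To make the first stage of the mechanism concrete, here is a toy C sketch of a compiler (reduced to a source-to-source pass) that recognizes the program under attack by a marker in its source and splices in a backdoor. Thompson's real attack adds the second, self-recognizing trigger described above, so the injection logic survives even after the compiler's own source is cleaned. Everything here is invented for illustration.

    #include <stdio.h>
    #include <string.h>

    /* The payload: a credential check injected into the victim program. */
    static const char *backdoor =
        "    if (strcmp(user, \"maintenance\") == 0) return 1; /* injected */\n";

    static void compile_line(const char *line, FILE *out)
    {
        fputs(line, out);
        /* Trigger: a line marking the start of the login check. */
        if (strstr(line, "int check_login("))
            fputs(backdoor, out);   /* splice the backdoor right after it */
    }

    int main(void)
    {
        /* Stand-in for the clean source file being compiled. */
        const char *source[] = {
            "int check_login(const char *user, const char *pass) {\n",
            "    return verify(user, pass);\n",
            "}\n",
        };
        for (size_t i = 0; i < sizeof source / sizeof source[0]; i++)
            compile_line(source[i], stdout);
        return 0;
    }

The point of the full attack is that nothing like this appears in any source file an auditor can read: the trigger and payload live only in the compiler binary, which regenerates them each time it compiles itself.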

Earlier this week a number of organizations, companies, and individuals wrote a letter to the President of the United States in which they expressed their worries about the suggestion from US officials that companies should refrain from providing products with strong encryption unless ‘those companies also weaken their security in order to maintain the capability to decrypt their customers’ data at the government’s request’.
Let me be clear: I strongly support this letter.
The arguments made in the letter are not new. Encryption is one of the enablers of the Internet economy that protects users from all sorts of harm. Building in encryption "backdoors" would actually decrease the trust in the Internet and therefore its utility.
Of course, there are also cases in which encryption is put to bad use. The most common examples are the use of encryption by terrorists and criminals. The issues that law enforcement faces with the use of encryption by bad actors are not trivial. But I do not think that creating backdoors will eliminate any of those concerns.
One of my theses about the success of the Internet is that the technology on which it's been built is highly democratized. The sharing of knowledge and implementation through Open Source and Open Standards brings the agility and innovation that make the Internet flourish: the Internet technology is based on highly democratized knowledge. Highly democratized knowledge also means that bad actors can get their hands on the technology and wreak havoc.
The same goes for encryption. While it is not easy to build an implementation of an encryption system, the mathematical theory on which encryption is built is public knowledge, and there are quite a few open-source reference implementations of encryption technology available. As a result, even if vendors are forced to build backdoors, the bad guys will still be able to use unbreakable encryption. I am pretty sure that any law or regulation that forces a backdoor into a product will not apply to the ransomware that some unfortunate victim may find infecting their computer.
As an engineer, I don't like building vulnerabilities into systems, and that is essentially what a backdoor is. Anyone thinking that those backdoor vulnerabilities will not be found, no matter how well protected they are, seems to deny the curiosity of smart technologists. Curious technologists, security researchers, hackers, whatever you want to call them, find vulnerabilities in software on a daily basis. (The Logjam vulnerability is the example of the day. Ironically, according to the researchers, that weakness is partly a result of earlier attempts to restrict the utility of encryption.) And while a lot of these vulnerabilities are responsibly disclosed, we must assume that curiosity and clue are also bestowed on some of the bad actors. Inserting backdoors is simply a path to leaving us all unprotected.
At the Internet Society we aspire to pervasive implementation of end-to-end encryption. We realize that aspiration comes with a set of difficult technical, economic, and policy questions. Technical questions have to do with the ability to manage traffic, cache content, and implement bona fide security policies. Economic questions have to do with transit costs absent the ability to cache, and perhaps with data monetization. Policy questions focus on balancing law enforcement agencies' ability to do their work against the security of individuals around the globe. The answers will not be easy, especially since there doesn't seem to be a middle ground with encryption; it's either all or nothing, and both choices may lead to lives threatened.
Let me end on a more positive note: even though encryption technology is highly democratized, it is not easy to build and implement. There are numerous pitfalls that can all lead to potential exploits by bad actors. How can you have the highest certainty that you have the most secure implementation of the encryption box you are going to be relying on for, say, your e-commerce application or the storage of your customers' credit cards?
Open and peer-reviewed standards, designs, and implementations provide high assurance that such vulnerabilities do not exist. We support our aspiration for pervasive end-to-end encryption by supporting the CrypTech project. The project sets out to build a trustable piece of hardware that can be used to store keys and perform encryption for all sorts of applications that rely on encryption, e.g. any e-commerce application. I would ask you to visit and support that project. Similarly, efforts to make TLS usage more common and to deploy additional layers of trust through technologies such as the DANE protocol are critical for encryption to be available for all.
We shouldn't shy away from these difficult challenges, but strong encryption will continue to be a reality. I believe that open development, wide deployment, and usage of strong encryption make the Internet more trustworthy and are critical to realizing the opportunities and full potential of the Internet.

In recent months top American and British political leaders have been arguing that there should be no encrypted communication system that they cannot unlock whenever they deem it necessary to do so. Officials like the director of the National Security Agency, Michael Rogers, and Prime Minister David Cameron have said that unless technology companies grant them the technical equivalent of a back door to snoop on encrypted communications, the world’s bad guys will “go dark” and become untraceable.
Now, 13 prominent encryption and information security experts have responded with an important report that explains in plain English why what Mr. Rogers and Mr. Cameron are asking for would be terrible for the Internet.
To start, giving governments back-door access to encrypted technologies like email servers, video chats, online banking services and so on would make those systems much more vulnerable to hacking. Furthermore, giving encryption keys to governments would increase the risk of those keys being stolen by criminals and spies from other countries.
There is yet another big problem: How should technology companies decide which governments they should give back-door access to? If the United States and Britain have access to, say, all of Google’s encrypted servers, the governments of China, Russia and many other nations will surely demand similar privileges. Or should Western tech companies simply stop doing business in some foreign countries?
This is hardly a new debate. In the 1990s, the Clinton administration proposed requiring the tech industry to use the Clipper chip, a device that would help the government decrypt communications. Businesses, technical experts and civil liberties groups defeated that effort by showing that hackers and criminals could easily exploit that system.
Not having such an invasive back door into Internet-based communications systems has hardly hurt the government’s ability to conduct surveillance. In fact, Edward Snowden revealed that American and British agencies have had extensive access to our communications for years. If anybody has been kept in the dark, it is ordinary citizens.
“The Congress is forced now to struggle with that, and they’re going to have to listen to these various arguments about protection and safety on the one hand, and the preservation of privacy and confidentiality on the other,” Cerf said, as reported by The Hill.
The Obama administration has been trying to force companies like Google and Apple to create defects in encryption so the FBI and other government agencies can gain access to people’s information; this despite mounting criticism over the plan – a criticism that’s shared by Cerf.







