
2.3 Remarks on maintenance and testing

The product focus of software development accounts for how software maintenance changes in the software products market. In traditional tailored software development there is usually just one release, and fixes, additional features and other changes are provided as part of maintenance. In market-driven development this work is done by making new releases of the same product194. Even though COTS software vendors separate corrective maintenance – including patches and workarounds, which are often provided to licensees at no cost beyond a subscription fee – from other forms of software maintenance, the changes needed to smooth out poorly implemented but operable software functions become the basis of new releases, for which vendors charge additional, often highly profitable licensing fees.

In other words, maintenance takes the form of versioning or support services, for which the customer has to pay separately.

Most of what was once maintenance in traditional software development now forms the basis of a product’s next release and thus serves to generate additional revenue for the vendor over a number of years195.

195 Sawyer, A market-based perspective on information systems development, p. 101.

196 Schneier, Computer Security: Will We Ever Learn?, under the heading “No one is paying attention because no one has to”, paragraph 5. Similarly in Harju, Kustannustehokas ohjelmiston luotettavuuden suunnittelu ja arviointi, p. 89.

197 Schneider, Trust in Cyberspace, p. 89-90.

198 Kaner, The Impossibility of Complete Testing, p. 1.

The combination of release-oriented development and patching and, especially, the use of new releases as an important form of maintenance lead to a reliance on customer feedback as a significant, or even primary, quality assurance mechanism in market-driven development. This is reasonable economic behaviour, as one of the most respected and vocal advocates of information security, Bruce Schneier, so effectively puts it in his famous quote: “…90% to 95% of all bugs are harmless. They're never discovered by users, and they don't affect performance. It's much cheaper to release buggy software and fix the 5% to 10% of bugs people find and complain about.”196 This has implications for quality and security. Press coverage is not guaranteed to be accurate and may not convey the implications of the problem being reported. Problems that concern only a small user community do not get fixed. Feedback from customers and the press, by its very nature, occurs only after a product has been distributed. Reliance on market forces to select what gets tested and what gets fixed is haphazard at best and is surely not equivalent to performing a methodical search for vulnerabilities prior to distribution.197
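Schneier’s point can be restated as a simple cost comparison. The following minimal sketch (a back-of-the-envelope model in Python; all bug counts and per-bug costs are illustrative assumptions, not figures from the sources cited here) shows why fixing only the bugs customers complain about can look cheaper to the vendor:

    # Back-of-the-envelope model of the "cheaper to release buggy software"
    # argument. All numbers are illustrative assumptions.

    TOTAL_BUGS = 1000        # bugs present at release (assumed)
    HARMLESS_SHARE = 0.92    # ~90-95% are never noticed by users (Schneier's range)
    COST_FIX_BEFORE = 200.0  # vendor's cost to find and fix one bug pre-release (assumed)
    COST_FIX_AFTER = 800.0   # vendor's cost to patch one reported bug post-release (assumed)

    # Strategy A: methodical search - find and fix (nearly) everything before shipping.
    cost_methodical = TOTAL_BUGS * COST_FIX_BEFORE

    # Strategy B: ship, then fix only the 5-10% of bugs users find and complain about.
    reported_bugs = TOTAL_BUGS * (1 - HARMLESS_SHARE)
    cost_reactive = reported_bugs * COST_FIX_AFTER

    print(f"methodical pre-release fixing: {cost_methodical:>10,.0f}")
    print(f"reactive post-release fixing:  {cost_reactive:>10,.0f}")
    # Even at four times the per-bug cost, fixing only reported bugs is far
    # cheaper for the vendor; the remaining cost falls on the customers.

Note that the model counts only the vendor’s costs; the customers’ losses from the bugs that remain fall outside it, which is precisely the externalisation of costs discussed at the end of this section.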

The development goal of achieving software that is ‘good enough’ – not perfect (flawless) – for the specific situation and customer needs also applies to testing. Many bugs are detected during testing, but not all errors can be found. As Cem Kaner (among others) explains, it is impossible to fully test a program: testing can only show the presence of errors in a program; it cannot show their absence198.


199 This has been emphasised by Mary Jean Harrold in Testing: A Roadmap, p. 63, while mapping the role of testing in the future of software engineering.

200 Harrold, Testing: A Roadmap, p. 63; Harju and Koskela, Kustannustehokas ohjelmiston luotettavuuden suunnittelu ja arviointi, p. 10.

201 Similarly in Pipkin, Information Security, p. 75.

202 See, e.g., Pipkin, Information Security, p. 74; Viega and McGraw, Building Secure Software, p. 39.

Additionally, testing cannot show that the software has certain qualities. Despite these limitations, testing is widely used in practice to create confidence in the quality of software199. This is why software is shipped with bugs even after the verification and validation stage. The bugs that remain, however, do not prevent the software from being ‘good enough’.
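The impossibility argument can be made concrete. In the following hypothetical Python sketch (the function and its tests are constructed for this section, not examples taken from Kaner), a plausible finite test suite passes even though the program is wrong; only an input nobody thought to test reveals the error:

    def days_in_month(month: int, year: int) -> int:
        """Return the number of days in a month. Contains a deliberate bug."""
        if month == 2:
            # BUG: ignores the century rule - 1900 was not a leap year.
            return 29 if year % 4 == 0 else 28
        return 30 if month in (4, 6, 9, 11) else 31

    # A plausible finite test suite: every case passes...
    assert days_in_month(1, 2001) == 31
    assert days_in_month(4, 2001) == 30
    assert days_in_month(2, 2001) == 28
    assert days_in_month(2, 2004) == 29   # ordinary leap year: correct

    # ...yet the function is wrong, and only a specific untested input shows it:
    print(days_in_month(2, 1900))  # prints 29; the correct answer is 28

Exhaustive testing is no way out: if month and year were 32-bit integers, exhaustive testing of even this trivial function would require 2^64 executions, and realistic programs have input spaces that are larger by many orders of magnitude.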

The verification and validation stage can be one of the most time-consuming, expensive and challenging phases of the software life cycle. It has been estimated that about 50% of the development costs of a software product are caused by testing and debugging200. In market-driven development (a very competitive market), the design team is often under tremendous pressure to complete this phase.

Market pressures contribute to reducing the time spent on testing before software is released to users. More sophisticated testing and debugging procedures would delay the introduction of a new product to the market as well as add costs, and would thus decrease the probability of commercial success. In sum, software is being released and deployed without adequate testing201.

There are special problems in testing for security. As security practitioners tend to point out, testing procedures must be changed to focus on security issues (e.g. testing for unexpected input, probing a system like an attacker, or otherwise looking for exploitable weaknesses) in order to find vulnerabilities202. Functional testing (treating the component as a black box and testing its interfaces) does not find security flaws. Unlike almost all other design criteria, security is independent of functionality.
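A concrete form of such security-focused testing is feeding a program large volumes of unexpected, malformed input and watching for crashes or other anomalous behaviour (‘fuzzing’). The following Python sketch is a deliberately minimal illustration of the idea rather than a description of any particular tool; parse_record is a hypothetical stand-in for the code under test:

    import random
    import string

    def parse_record(data: str) -> dict:
        # Hypothetical stand-in for the code under test.
        name, _, value = data.partition("=")
        return {"name": name, "value": int(value)}  # crashes on non-numeric input

    def fuzz(target, iterations: int = 1000) -> int:
        # Throw random, malformed input at the target; count inputs it cannot handle.
        failures = 0
        for _ in range(iterations):
            payload = "".join(random.choice(string.printable)
                              for _ in range(random.randint(0, 64)))
            try:
                target(payload)
            except Exception:  # a crash on hostile input is a potential vulnerability
                failures += 1
        return failures

    print(f"{fuzz(parse_record)} of 1000 random inputs crashed the parser")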

Functional testing is good at finding random flaws that, when they occur, cause the program to behave oddly. Security flaws have much less spectacular effects; they are usually invisible unless they fall into the wrong hands.


203 This argument is repeated at least by security practitioners such as Bruce Schneier (Secrets & Lies, p. 335-336), Donald Pipkin (Information Security, p. 70) and John Viega and Gary McGraw (Building Secure Software, p. 42).

204 In market-driven development the trend has long been to enlist the user community to help in finding errors by making early releases (beta versions) available to interested users and by freely distributing incremental updates (e.g. patches) to the software. Beta testing has traditionally been part of pre-release testing (before release to the wider customer base, not just to interested beta testers). Recently, however, software has increasingly been released to the public as a ‘final’ version, with an implicit assumption that the end-user, and not just the willing beta tester, will act as the ultimate ‘beta’ tester. Rather than implementing a full quality assurance program, the vendor relies extensively on users to report vulnerabilities. Proprietary software developers – as contrasted to open source software (OSS) development projects – especially those with the COTS business model and the market-driven development practices of interest in this study, are increasingly turning to their customers for help in the debugging task (removing bugs from the software). Proprietary software developers understand just as well as open source developers the value and potential of users in testing; the principle of ‘release early and often’ makes sense in both settings. As noted earlier, proprietary software has been seen as moving towards a plausible promise option in terms of the business model and thus closer to the OSS development method. This means that when the product is first made available to users, it is not finalised in terms of functionality or quality; the product is gradually improved in both respects in subsequent releases, partly on the basis of feedback from the public.

Security testing is not about randomly using the software and seeing if it works, but about deliberately searching for problems that compromise security203. The costs of testing, together with the time needed, increase when security is concerned, which is why software rarely ends up being ‘good enough’ in terms of security even after testing.
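The difference between the two kinds of testing is easy to demonstrate. In the following hypothetical Python sketch (the function, the paths and the flaw are constructed for illustration, not taken from the cited sources), a functional black-box test of the interface passes, yet the code contains a classic path traversal vulnerability that only a deliberately hostile, attacker-style probe would expose:

    import os

    BASE_DIR = "/var/app/userfiles"  # illustrative path

    def read_user_file(filename: str) -> str:
        # FLAW: the user-supplied name is joined to the base directory unchecked.
        return open(os.path.join(BASE_DIR, filename)).read()

    # Functional test: does the interface do what the specification says? It does.
    #     read_user_file("notes.txt")  -> contents of /var/app/userfiles/notes.txt
    # Security probe: deliberately hostile input outside any functional specification.
    #     read_user_file("../../../etc/passwd")  -> contents of /etc/passwd

    def read_user_file_fixed(filename: str) -> str:
        # Hardened version: reject any path that escapes the base directory.
        base = os.path.realpath(BASE_DIR)
        full = os.path.realpath(os.path.join(base, filename))
        if not full.startswith(base + os.sep):
            raise ValueError("path traversal attempt")
        return open(full).read()

The functional tests stay green in both versions; only a probe with input an ordinary user would never supply separates them.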

The method used in market-driven development – enlisting the user community to help in finding errors by making early releases (beta versions) available to interested users and by freely distributing incremental updates (e.g. patches) to the software – does not enhance security, since no amount of beta testing will uncover all security flaws204. This is mainly because sophisticated, security-specific testing is needed to find vulnerabilities in the first place.


205 John Viega and Gary McGraw raise the problems of red teaming for security in Building Secure Software, p. 42-43. Unfortunately, this method is still sometimes praised as an efficient way of discovering vulnerabilities. For example, while noting correctly that faster and often less sophisticated testing procedures allow a shorter time-to-market, and thus a competitive advantage, the Finnish Ministry of Transport and Communications, in a 2001 report on the need for a national information security strategy, also stated that end-users are able to find even information security vulnerabilities quite fast (MINTC, Kansallisen tietoturvastrategian tarve Suomessa [Does Finland need a national information security strategy?], p. 7).

206 Computer Science and Telecommunications Board (CSTB), Cybersecurity Today and Tomorrow: Pay Now or Pay Later, p. 14. The CSTB is a division of the U.S. National Research Council.

207 Opinion of the Economic and Social Committee on the Commission Communication on Network and Information Security: Proposal for a European Policy Approach (COM(2001)298 final), Official Journal C 048, 21.02.2002, p. 33-41, paragraph 3.2.1.3.13.

208 Harju, Kustannustehokas ohjelmiston luotettavuuden suunnittelu ja arviointi, p. 51.

Knowledge of such testing methods and the skills to conduct them are not typically widespread in the broader user community. Of the parties searching for vulnerabilities, hackers and professional tiger teams may have the skills and the motivation (at least up to a point), but they are in no way able to do it quickly and efficiently for every software product205. As the Computer Science and Telecommunications Board pointed out in 2002, software vendors should “[s]trengthen software development processes and conduct more rigorous testing of software and systems for security flaws, doing so before releasing products rather than use customers as implicit beta testers to shake out security flaws”206. This has also been emphasised at the European policy level by the Economic and Social Committee in its opinion on the Commission Communication on Network and Information Security, COM(2001)298 final207.

A further problem in using customers (either end-users or software developers using components) as testers is that they typically have access neither to the component’s source code nor to specific documentation of the production process, especially where COTS is concerned208. COTS vendors seeking to protect their intellectual property usually sell components as binaries, without source code or design documentation209.


209 The legitimate reason for this is that developers want to keep the source code forms of their products and other human-readable documentation as trade secrets, as, e.g., Pamela Samuelson and Suzanne Scotchmer point out from the U.S. perspective in The Law & Economics of Reverse Engineering, p. 1608, similarly to Alfred Meijboom’s notion from the European perspective in Legal Rights to Source Code, p. 107. The procurement policies of the customers of safety-critical systems (utilities, government, etc.) have traditionally required software vendors to disclose enough details to evaluate their processes and products for safety. However, these policies are not compatible with current component vendors, who are faced with the risk of IPR loss, as pointed out by Devanbu and Stubblebine in Software Engineering for Security, p. 233. Not only are software components delivered in “black boxes” as executable objects without source code or design documentation, but de-compilation back to source code is usually also forbidden in licenses (note that this is forbidden in copyright law, but not in trade secret law). Often source code can be licensed, but the cost may make the practice prohibitive, as Jeffrey Voas pointed out in 1998 in Certifying Off-the-Shelf Software Components, p. 53. The open source and free software movements offer the source code to users, and commercial vendors too are, at least to some degree, starting to show their source code to trusted partners (e.g. governments), partly for security purposes.

210 It needs to be pointed out that code evaluation is a necessary but not a sufficient means of assessing security, as emphasised by John Viega and Gary McGraw in Building Secure Software, p. 115. Security-related vulnerabilities can be found even without a look at any code (source or binary) – in the worst cases, symptoms of a security problem are noticed during the course of normal use (Viega and McGraw, Building Secure Software, p. 70-73).

211 Such methods were reported by Jeffrey Voas as early as 1998 in Certifying Off-the-Shelf Software Components, p. 53-59.

The lack of availability of component source code limits the testing that the component user can perform (white-box techniques cannot be used in evaluating components)210. Yet even though the absence of source code makes some traditional security analysis impossible for the component user or other customer, there are ways for the user to verify and determine the quality and security of COTS components that do not require extensive disclosure of the source code or the accompanying design documentation. Some approaches treat the component as a black box and employ extensive testing to ensure that the system functions as desired; no additional effort or disclosure of intellectual property rights (IPR) is required from the COTS vendor211 (a sketch of such a harness follows below). Grey-box


212 Devanbu and Stubblebine, Software Engineering for Security, p. 234.

213 Reifer, Boehm and Gangadharan, in Estimating the Cost of Security for COTS Software, p. 180, note that current models for predicting the effort involved in integrating COTS software products into applications do not include security as a cost driver. While providing means for estimating the costs of security for COTS software, they also (ibid., p. 183-184) estimate, on the basis of their analyses, the percentage increases in both the effort and the duration of the assessment activity (the process by which COTS components are selected for use; 12-20 per cent to effort and 5-10 per cent to duration), of tailoring (the activities undertaken to prepare the selected COTS packages for use; 8-18 per cent to effort and 5-10 per cent to duration) and of glue code development (the development and testing of the connector software, which integrates the COTS components into the larger application; 0-75 per cent to effort and 0-33 per cent to duration).

214 Voas, Certifying Off-the-Shelf Software Components, p. 55.

215 Whether these problems can be solved by technological means, or whether they are more a matter of simple economic decisions made both by the developers of the components and by their users, thus possibly requiring some sort of regulatory intervention, is a matter of dispute.
verification systems use interactive cryptographic techniques or rely on tamper-resistant hardware to help the vendor provide evidence of the quality and security of the component (disclosing enough details of the verification practice to convince a sceptical component user) without disclosing so much information that its IPR would be endangered212. There are also various sets of criteria against which components can be evaluated and verified (e.g. ITSEC, TCSEC a.k.a. the Orange Book, and the Common Criteria).
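In practice, a black-box approach of the kind mentioned above means exercising the component purely through its published interface and comparing the observed behaviour against the specification. The following Python sketch is a hypothetical illustration of such a harness (the library name, its checksum interface and the expected values are assumptions, not a real vendor product); nothing is required from the vendor beyond the binary and its interface documentation:

    import ctypes

    # Hypothetical COTS component delivered only as a binary shared library.
    lib = ctypes.CDLL("./libvendor_component.so")  # assumed file name
    lib.checksum.argtypes = [ctypes.c_char_p, ctypes.c_size_t]
    lib.checksum.restype = ctypes.c_uint32

    # Black-box conformance tests: specification-derived input/output pairs only
    # (expected values are placeholders standing in for the vendor's documentation).
    SPEC_CASES = [
        (b"", 0x00000000),
        (b"abc", 0x352441C2),
    ]

    for payload, expected in SPEC_CASES:
        observed = lib.checksum(payload, len(payload))
        status = "ok" if observed == expected else "MISMATCH"
        print(f"checksum({payload!r}) = {observed:#010x} ({status})")

    # No source code, design documents or vendor cooperation are needed - but the
    # harness can only ever check the behaviours someone thought to specify and test.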

Even though the additional testing effort required by black-box approaches contributes towards the overall quality of the component user’s entire system, their use is limited because the additional testing is likely to be time-consuming and expensive213. A further limitation of black-box approaches is that they do not reveal unknown, malicious functionality214 (see the illustration below). Grey-box approaches have appeared only very recently and require considerable additional research.
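Voas’s point about malicious functionality follows directly from this limitation: a black-box harness observes behaviour only on the inputs and occasions it happens to try. A contrived Python illustration (entirely hypothetical, not an example from the cited source):

    import datetime

    def vendor_sort(items: list) -> list:
        # Behaves exactly to specification on every ordinary test run...
        if datetime.date.today() == datetime.date(2031, 4, 1):
            # ...but carries a hidden, date-triggered payload that no
            # pre-deployment black-box test run will ever observe.
            raise RuntimeError("payload triggered")
        return sorted(items)

    # Every specification-based black-box test passes on any day but the trigger date:
    assert vendor_sort([3, 1, 2]) == [1, 2, 3]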

However, with additional research into the ways in which component users can test systems, efficient techniques and tools are likely to emerge that will help such users test their applications more effectively215.



216 This argument is made in one of the first ever textbooks on secure software development, Viega and McGraw, Building Secure Software, p. 17.

217 Schneider, Trust in Cyberspace, p. 89-90; Viega and McGraw, Building Secure Software, p. 17; CSTB, Cybersecurity Today and Tomorrow: Pay Now or Pay Later, p. 13.

218 This argument has been made in the relatively recently emerged discussion on the economics of information security, e.g. by Ross Anderson in Cryptology and Competition Policy – Issues with ‘Trusted Computing’, p. 14, and in Anderson, Why Information Security is Hard, p. 3. The argument was, however, made by Andrew Odlyzko already in 1998 in Smart and stupid networks, p. 38-46.

The use of customer feedback in place of other quality control mechanisms allows a software producer to externalise the costs associated with product testing218. Customers, in turn, have to invest time and money in finding and possibly reporting errors and in installing patches, and they also have to bear the costs of failures. The tactic of using customers as serious (perhaps involuntary) testers is, at best, a dubious one from the point of view of security216. It is surely not equivalent to performing a methodical search for vulnerabilities prior to distribution217.