Sunday, May 29, 2005

Identifying Best Practices

Highlights from CrossTalk June 2005, Identifying Your Organisation's Best Practices.

The desire to identify best practices is driven by business goals and objectives: customer satisfaction, reduced time to market, decreased cost, better product quality, fewer delivered defects, and improved performance.

The organisational strategies to achieve these goals are often centered on quick-fix approaches, which are usually not effective:
  • cost reduction: outsource software development to an offshore provider
  • time to market: deliver fewer features
  • defect minimization: often simply ignored
The key to successful performance management is performance measurement: provide quantitative evidence that those goals and objectives have been achieved.

A basic measurement model from Practical Software and Systems Measurement suggests that an organisation follows three steps:
  • identify the needs of the organisation
  • select measures
  • integrate the measurement into the software development process
The basic measurement model includes the collection and analysis of both quantitative and qualitative elements:
  • quantitative elements: size, effort, duration, defects
  • qualitative elements: data points used to evaluate levels of competency regarding process, methods, skills, automation, technology, and management practices.
The qualitative data identify the attributes that contribute to high or low yields of performance. The baseline values are compiled from a selection of measured projects. Results vary significantly. The qualitative data give the opportunity to determine why certain projects have outperformed others.
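The quantitative side of this model can be sketched in a few lines of Python. The project data, the choice of function points for size, and the helper names below are all hypothetical; the point is only to show how a baseline is compiled from measured projects so that outliers become visible:

```python
from statistics import mean

# Hypothetical project measurements: size (function points),
# effort (person-months), and defects found after delivery.
projects = {
    "A": {"size": 400, "effort": 20, "defects": 12},
    "B": {"size": 300, "effort": 30, "defects": 45},
    "C": {"size": 500, "effort": 22, "defects": 10},
}

def productivity(p):
    # Size delivered per person-month.
    return p["size"] / p["effort"]

def defect_density(p):
    # Defects per 100 units of size.
    return 100 * p["defects"] / p["size"]

# The baseline is compiled from the selection of measured projects.
baseline = mean(productivity(p) for p in projects.values())

for name, p in sorted(projects.items()):
    ratio = productivity(p) / baseline
    print(f"{name}: productivity {productivity(p):.1f} "
          f"({ratio:.2f}x baseline), "
          f"defect density {defect_density(p):.1f}")
```

The numbers only tell you *that* project B underperforms; the qualitative data (process, skills, tooling) are what tell you *why*.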

The article concludes with three case studies with some remarkable results: projects with productivity three times higher than the industry average and five times fewer defects. Impact of CMM level 3 practices: productivity increased by 132%, time to market reduced by 50%, cost reduced by 40%, and defect density reduced by 75%.

A challenge with this approach is the reliability and accuracy of the metrics. If people or organisations feel threatened by the measurement system, they will most probably not report the correct figures. It is essential to use the measurements only for improvement activities and not for evaluating and punishing individuals and teams, and to communicate this clearly. Measurements and CMM are a means to reach goals, not goals in themselves.

Friday, May 20, 2005

Software Architecture Review

Good Software Architecture:
  • is critically important for successful software development
  • is the key framework for all technical decisions
  • provides the foundation for reuse
  • is essential for fast time-to-market solutions
Too many projects either fail or require significant rework late in their schedules. Typically, a project of 100,000 source lines of code can save about US$ 1 million by identifying and resolving problems early.

Software Architecture reviews:

- identify project problems before they become costly to fix:
  • avoid costly rework near the end of the project
  • avoid endangering the shipping date
  • favour architecturally sound improvements over ugly fixes hacked in late in the project
  • provide timely information so that better informed decisions can be made
- improve the organisation's quality:
  • help identify best practices and socialise them across the organisation
  • leverage experienced people by using their expertise to help others
  • promote cross-product knowledge and learning
  • identify knowledge gaps in areas where errors frequently occur
- address completeness & consistency

Essential activities for software architecture reviews are having a review meeting, preparing the review using a checklist, and reporting/tracking/closing issues.
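The report/track/close part of that workflow can be sketched as a small data model. Everything here (the `Issue` and `ArchitectureReview` names, the fields, the sample checklist) is a hypothetical illustration, not a prescribed tool:

```python
from dataclasses import dataclass, field

@dataclass
class Issue:
    description: str
    severity: str          # e.g. "major" / "minor"
    status: str = "open"   # open -> closed

@dataclass
class ArchitectureReview:
    project: str
    checklist: list        # used to prepare the review meeting
    issues: list = field(default_factory=list)

    def report(self, description, severity):
        # Issues found in the review meeting are recorded...
        issue = Issue(description, severity)
        self.issues.append(issue)
        return issue

    def close(self, issue):
        # ...and tracked until they are resolved.
        issue.status = "closed"

    def open_issues(self):
        return [i for i in self.issues if i.status == "open"]

review = ArchitectureReview(
    project="billing",
    checklist=["error handling", "performance", "interfaces"],
)
issue = review.report("no recovery strategy for DB failover", "major")
print(f"{len(review.open_issues())} open issue(s)")
review.close(issue)
```

The essential point is that an issue is not just mentioned in a meeting: it gets an owner, a severity, and a lifecycle.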

Architecture reviews are an invaluable part of a company's quality improvement process.

Read the complete paper:
"Architecture Reviews: Practice and Experience"
by J.F. Maranzano, S.A. Rozsypal, G.H. Zimmerman, G.W. Warnken, P.E. Wirth, and D.M. Weiss
IEEE Software, March/April 2005

Friday, May 13, 2005

Testing Object-Oriented Systems

What is software testing?

According to Robert V. Binder [1]
The design and implementation of a special kind of software system: one that exercises another software system with the intent of finding bugs. This system automatically applies and evaluates the tests.

Manual testing, of course, still plays a role. But testing is mainly about the development of an automated system to implement an application-specific test design.

Test design:
1. Analyze the responsibilities of the (sub)system under test.
2. Design test cases based on this external view.
3. Add test cases based on code analysis, suspicions, and heuristics.
4. Develop the expected result for each test case, and choose an approach to evaluate the pass/no pass status of each test case.
Test automation
Test execution:
- minimal operational implementation under test
- execute the test suite; record pass/no pass for each test
- run a coverage tool and evaluate the reported coverage
- if needed, develop additional tests to exercise uncovered code
- stop testing when the coverage goal is met and all tests pass

Test design is best seen as gray-box hybrid testing, with a primary focus on
responsibility-based testing: specification-oriented, behavioral, functional (black box),
and a secondary focus on
implementation-based testing: structural, statement- or branch-coverage based (white box).

This is to be done at different scopes (bottom-up): method, class, cluster, subsystem, system. Skipping testing at smaller scopes is a common response to schedule pressure. Unfortunately, testing at broader scope is then often hampered by inoperable parts. A good remedy is test-driven development, where the tests are written before the functionality is implemented.

Integration Test: when testing at a particular scope is complete you integrate with other tested elements into the next scope level: a complete system or subsystem of software units. Exercise the interfaces among the units to demonstrate that the units are collectively operable.
A major goal is to stabilize the system so that responsibility-based testing may proceed smoothly.

Integration testing is a process, rather than an event or phase. It begins early, takes place at all scopes, and is repeated in each development increment. The process of integration test design often reveals errors, omissions, and ambiguities in the requirements and architecture. Make testability a design goal!
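Exercising the interfaces among collectively operable units can be illustrated with a small sketch. The two classes below are hypothetical units assumed to be already tested in isolation; the integration test drives one through the other and checks that failures propagate across the interface as agreed:

```python
# Hypothetical unit 1: already unit-tested in isolation.
class Inventory:
    def __init__(self, stock):
        self._stock = dict(stock)

    def reserve(self, item, qty):
        if self._stock.get(item, 0) < qty:
            raise ValueError(f"insufficient stock for {item}")
        self._stock[item] -= qty

# Hypothetical unit 2: depends on Inventory through its interface.
class OrderService:
    def __init__(self, inventory):
        self._inventory = inventory

    def place_order(self, item, qty):
        # The interface under integration test: OrderService -> Inventory.
        self._inventory.reserve(item, qty)
        return f"order confirmed: {qty} x {item}"

# Integration test: exercise the units together, not in isolation.
inventory = Inventory({"widget": 5})
service = OrderService(inventory)
assert service.place_order("widget", 3) == "order confirmed: 3 x widget"
try:
    service.place_order("widget", 3)  # only 2 widgets remain
    raise AssertionError("expected the stock failure to propagate")
except ValueError:
    pass  # the failure crosses the interface as specified
```

Each test passing at this scope stabilizes the combined units, so responsibility-based testing at the next scope can proceed on a working foundation.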

System Test: functional testing, performance testing, stress + load testing

Typically, unit and integration testing are fault-directed, while system testing is conformance-directed.

[1] Read more on object-oriented testing & patterns in the book: Testing Object-Oriented Systems: Models, Patterns, and Tools, by Robert V Binder (2000), which provides an in-depth coverage of testing. An authoritative guide to designing and automating test suites for OO applications.

Friday, May 06, 2005

Secure Software

From John D. McGregor's article on Secure Software in the Journal of Object Technology:

Poorly written software will have more security vulnerabilities than well written software. Incorporate security as a quality consideration early in the development life cycle.

McGraw’s trinity of trouble [McGraw 04]:
Three problems that contribute to the increase in security problems:
  • Ubiquitous network connections
  • Easily extensible systems
  • Increasingly complex systems
Qualities of software products that reduce the probability of security problems:
Correct: the ability of a software product to satisfy its functional requirements. If the program is not correct then it becomes difficult to know whether the program’s failure to meet expectations is due to a security breach or just built-in incorrectness.
Robust: percentage of time that a product can continue to function in the face of unusual conditions. Robustness is achieved by allowing for “other” cases at every opportunity. That is, the design should anticipate that not all cases are covered by the specification.
Reliable: percentage of the operating time that the product performs requested functions correctly. Quality assurance activities such as conducting active design reviews, establishing and checking compliance with design and coding standards, and testing the product code contribute to the reliability of the resulting product.

Extensible: when software is designed to be extensible, holes are created that are vulnerable to attack. The technique for making extension points secure will vary with the binding time (design time or execution time).
Complex: in a complex system, security vulnerabilities are more likely to exist and to be hidden from the usual testing. Decompose complexity away; the key is to start small and grow as the product comes together.
Consistent error handling: provide a consistent error handling scheme. The point to be made here is that the error handling needs to be visible at the appropriate design level.
Robust data structures: you can’t overflow a hardware buffer. Why should it be different with a software buffer?
Misuse and Abuse case: describe misuse and abuse cases as an approach to helping stakeholders think about possible scenarios that need to be defended against [Hope04]
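The robust-data-structures and consistent-error-handling points can be combined in one sketch. The `SafeBuffer` and `SafeBufferError` names are hypothetical; the idea is a fixed-capacity buffer that checks bounds on every access and reports all failures through a single, visible error type:

```python
class SafeBufferError(Exception):
    """One consistent error type for every buffer failure."""

class SafeBuffer:
    def __init__(self, capacity):
        if capacity <= 0:
            raise SafeBufferError("capacity must be positive")
        self._capacity = capacity
        self._data = []

    def write(self, byte):
        # Bounds check on every write: overflow is impossible by design.
        if len(self._data) >= self._capacity:
            raise SafeBufferError("buffer full")
        self._data.append(byte)

    def read(self, index):
        # Reads are bounds-checked too; no out-of-range access.
        if not 0 <= index < len(self._data):
            raise SafeBufferError("index out of range")
        return self._data[index]

buf = SafeBuffer(2)
buf.write(0x41)
buf.write(0x42)
try:
    buf.write(0x43)          # a third write would overflow
except SafeBufferError as e:
    print(f"rejected: {e}")  # callers handle one error type, consistently
```

Because every failure surfaces as `SafeBufferError`, the error handling stays visible at one design level instead of being scattered across ad-hoc return codes.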
Plan of action: attribute-driven design approach (ADD) [Bass00]:
  • A clear definition of the quality attribute
  • A framework for reasoning about the quality
  • A set of architectural tactics that enhance the quality
The secure quality attribute has to be as carefully engineered as every other quality upon which our strategic goals depend.

Read the complete article:
John McGregor: “Secure Software", in Journal of Object Technology, vol. 4, no. 4, May-June 2005, pp. 33-42

Recommended reading: Building Secure Software: How to Avoid Security Problems the Right Way, by John Viega and Gary McGraw (2002), good on both the theory and practice of secure software design, for both manager and programmer. The first chapters describe how vulnerabilities creep into software, while the later chapters explain secure coding techniques.

[Bass 00] Felix Bachmann, Len Bass, Gary Chastek, Patrick Donohoe, and F. Peruzzi. The Architecture Based Design Method (CMU/SEI-2000-TR-001, ADA37581). Pittsburgh, PA: Software Engineering Institute, Carnegie Mellon University, 2000.
[Hope 04] Paco Hope, Gary McGraw, and Annie I. Anton. Misuse and Abuse Cases: Getting Past the Positive. IEEE Security & Privacy, IEEE Computer Society, 2004.
[McGraw 04] Gary McGraw. Software Security. IEEE Security & Privacy, IEEE Computer Society, 2004.