
SPMN Focus Team Lessons Learned

CONTENTS

1. Systems Engineering
2. Safety and Security
3. Continuous Risk Management
4. Requirements Management
5. Planning and Tracking
6. Products Required for Delivery
7. Interface Management
8. Visibility
9. Cost Estimation
10. Schedule Compression
11. Rework
12. Reuse
13. Architecture
14. Quality
15. Hiring and Retaining Technical Staff
16. Approach to Achieving Higher SEI Rating
17. Integrated Product Teams
18. Configuration Management
19. Test
20. Metrics
21. Cost of Maintenance
22. Software Development Environment/Tool Utility
23. Contract/RFP Management
24. Commercial-off-the-Shelf (COTS) Products

Introduction

The SPMN Focus Team Initiative provided experts in technical and management practices for the development of large-scale software. This bulletin summarizes lessons learned from SPMN Focus Team visits with many different software-intensive development programs in all three Services. It describes problems uncovered over the last several months on a number of DoD software-intensive programs, such as: (i) not implementing best practices that have particularly high leverage for a particular project at a particular time in its development; (ii) not properly implementing a best practice; and (iii) following one or more worst practices.

The normal SPMN approach identified project-specific changes in practices with the highest leverage for short-term improvements in one or more of the following: user satisfaction, development or maintenance cost, quality, time-to-market, and cost and schedule predictability. Focus Team support also included follow-up assistance to the government project office and the contractor to help establish those changes to practices desired by their program managers.

Lessons Learned

1. Systems Engineering on Embedded Weapon System Programs

Systems engineering practices followed on many embedded weapon system development programs are less rigorous and less complete than software engineering practices. The result is that the software development part of system development begins with inconsistent, incomplete, needlessly risky, and highly volatile specifications. A partial list of systems engineering problems observed on more than one program follows.

Trade-off studies to reduce the risk of high-risk requirements, particularly high-risk software requirements, are not conducted or are very inadequate.

Software engineers have little or no participation in systems engineering.

Systems engineering processes and methods are selected indirectly by choosing a systems engineering CASE tool instead of selecting the processes and methods first based on the nature of the application, and then selecting the CASE tools that best implement the selected processes and methods.

There is no modeling and simulation of the system architecture to verify that the architecture will support system requirements for security, performance, safety, reliability, and fault tolerance.

No operational scenarios are developed as part of the system requirements that must be satisfied for system acceptance.

Interoperability with external systems and compliance with the JTA, ATA, etc., are not a central focus in the development of the system architecture.

The perspective for partitioning the system (e.g., data, states, objects, functions) is not selected in coordination with software engineering so as to minimize the complexity of tracing system requirements into the software architecture.

There is no systematic and rigorous approach to making requirements consistent.

Defining requirements that are primarily met through system architecture design (e.g., security, fault tolerance, performance) is delayed until an incremental release long after the system architecture has been defined.

"Evolutionary" design of system architecture greatly increases the risk of excessive rework.

2. Safety and Security

Safety requirements are inadequately flowed down to the software components of the system. Achieving the needed security with the massive, heterogeneous, computer networks that are central to the 21st Century U.S. military is a very difficult technical problem at the leading edge of current technology.

Software hazard analyses are often not done and are almost never integrated as part of the software engineering process.

There is no systematic hazard analysis on the safety impact of modifications to requirements, design, or code.

The configuration management processes do not partition libraries by software criticality.

Most contracts don't address safety requirements following the guidance of MIL-STD-882D.

Software development plans do not include any specific processes or methods for critical software.

There is no process defined for testing critical software and ensuring that criticality hazards are closed and remain closed.

Meeting security and safety requirements is not a focus of architecture design.

Testing is significantly cut back to overcome schedule slips in safety-critical weapon system development.

3. Continuous Risk Management

Programs do not establish or follow an effective risk management plan.

In many instances, programs do not track or manage risks.

The cultures of both the DoD program office and the contractors discourage identifying risks to project success, frequently while claiming a robust risk management program.

The effectiveness of the risk identification process is severely limited by lack of participation by people with knowledge of past problems from similar programs and lack of broad and structured participation by people throughout the program office and the development team.

Risks are not characterized by probability of materializing and impact if they do.

Risks with high probability and high impact are not resolved in tasks off the critical path that have sufficient slack for workarounds, nor is a mitigation approach explicitly defined.

Contractors and programs tend to consider only those risks over which they have control. Failure to identify and address all risks leads to significant risks not being identified, characterized, or mitigated.

Risks beyond a short time into the future and beyond a narrow range of high-level cost and schedule risks are not identified.

Risk identification is not updated at regular intervals.

Only management people are involved in risk identification.

Systems engineering is ineffective in mitigating high-risk items with engineering trade studies.

Software is not included in systems engineering trade studies.

Knowledge of how to integrate risk management into the program activity network is lacking.

Reserves of funds and schedule are not set aside for those risks that do materialize into problems.

There is no periodic reporting of risk status to the government program manager.

While the process of risk management is reasonably straightforward, the cultural impediments to implementation are significant. These impediments include management resistance to potential visibility, staff unwillingness to put themselves on report, and the inability of contracts and finance staff to recognize that risks are not yet problems but opportunities to avoid cost, schedule, quality, and user satisfaction issues. The cultural impediments must be overcome through a lengthy process of training and mentoring of management at all levels. Typically, this has taken months, not weeks. The rewards, however, far outweigh the implementation difficulty: once the impediments are overcome, programs become anticipatory rather than reactive.

Stop-light charts are used as a means to report the specific status of problems and risks within a program. There should be two forms of these charts presented at program reviews: a set of problem charts and a set of risk charts. The problem charts should treat problems as red if they are affecting the program and have not been successfully dealt with. Risk charts, on the other hand, should be treated somewhat differently: all risks are red unless successfully mitigated, adequately addressed, or retired within the program.

These risk stop-light charts should default to red unless the risk's effects are successfully avoided as described above.
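
To make the default-to-red convention concrete, here is a minimal Python sketch of how a program might record risks and derive stop-light status; the field names and figures are illustrative assumptions, not taken from any SPMN tool. It also characterizes each risk by probability times impact, as recommended above, so risks can be ranked by exposure.

    from dataclasses import dataclass

    # Hypothetical risk record; the fields are illustrative, not from an SPMN tool.
    @dataclass
    class Risk:
        name: str
        probability: float  # likelihood of materializing, 0.0 to 1.0
        impact: float       # cost impact if it materializes, in dollars
        status: str         # "open", "mitigated", or "retired"

        def exposure(self) -> float:
            # Characterize each risk by probability times impact so that
            # risks can be ranked and the biggest exposures resolved first.
            return self.probability * self.impact

        def stoplight(self) -> str:
            # Default to red: a risk stays red unless successfully mitigated,
            # adequately addressed, or retired within the program.
            return "green" if self.status in ("mitigated", "retired") else "red"

    risks = [
        Risk("COTS upgrade breaks external interface", 0.6, 400_000, "open"),
        Risk("Key technical staff turnover", 0.3, 250_000, "mitigated"),
    ]
    for r in sorted(risks, key=Risk.exposure, reverse=True):
        print(f"{r.stoplight():5}  exposure ${r.exposure():>9,.0f}  {r.name}")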

Initial Cost Estimates

Many programs that have been evaluated tend to estimate initially using a very optimistic method. Contractors desiring to win the contract may bid low, and many programs are naïve in estimating the Level of Effort for developing software. These initial estimates are often wrong because they are not based on a thorough analysis of requirements. The formula

Cost = Size x Complexity/Productivity

at the beginning of the program has three unknowns: size, complexity, and productivity. There are methods and estimating tools that can be used to determine the size of the software in function points given the system requirements. If the estimating organization does not have a database of actual productivity on past projects of comparable size and complexity, there are industry standards for the productivity in producing such systems. In new technologies (e.g., object-oriented development in component-based architectures), there may be insufficient data to develop a parametric costing model. In these instances, a systems engineering approach to estimation, which involves a bottom-up method of identifying tasks, establishing an activity network, and assigning resources, can provide an estimate of the Level of Effort for production. In any event, all initial cost estimates should be considered potentially high risk and should be reviewed at each program review.
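
As a worked illustration of the formula above, the following sketch (illustrative numbers only) computes an effort estimate from function points and shows why the estimate should be carried as a range until actual productivity data exists.

    # Illustrative numbers only; a real estimate would draw on the organization's
    # own productivity database or published industry benchmarks.
    size_fp = 2_500            # estimated size in function points
    complexity = 1.2           # relative complexity multiplier (1.0 = nominal)
    productivity_fp_pm = 10.0  # function points produced per person-month

    effort_pm = size_fp * complexity / productivity_fp_pm
    print(f"Point estimate: {effort_pm:.0f} person-months")

    # Size, complexity, and productivity are all unknowns at program start,
    # so carry the estimate as a range and revisit it at every program review.
    for p in (6.0, 10.0, 14.0):  # pessimistic, nominal, optimistic productivity
        print(f"productivity {p:4.0f} FP/PM -> {size_fp * complexity / p:5.0f} person-months")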

4. Requirements Management

In several programs, the system design is not documented or traced to system requirements. This deficiency reveals itself during acceptance testing conducted late in the development phase of the project. Requirements must be documented to ensure adequate system testing. The system design, and the requirements traceable to it, must be documented to assist in correcting deficiencies found both in system acceptance testing and during maintenance after the system is fielded. In many instances, the system is handed over to another program office and/or contractor after fielding. Documenting the system design and the requirements traceable to that design is vital to ensure cost-effective maintenance.

Many programs visited have not documented requirements and have not traced requirements down to system design and code. This is the root of most system design problems and failures to pass acceptance testing.

Excessively high requirements volatility is common. The DoD program office is often the source of this problem.

Real future users of the system do not participate in system requirements definition in a meaningful way.

Future users do not participate in system requirements definition following a structured approach oriented around the language of the user community.

User-interface prototypes, including screen navigation, are often not used as a structured way to help users define system requirements.

The "evolutionary development" life cycle model is frequently misused as an excuse to build a number of throwaway releases under the guise of a leading-edge technology for requirements definition. Programs do not understand how the "evolutionary development" life cycle model should be used in requirements definition.

Systems required to interoperate with external systems are being developed without directly working with each individual external system to define interface requirements and design. Instead, there is an unrealistic expectation that, if external interfaces are in compliance with the JTA, then interoperability will automatically be achieved.

Contractors, having their award fees based solely on the requirements of the current release, build architectures that are not capable of supporting future release requirements.

Defining business rules and identifying transactions are often overlooked in systems requirements definition. The importance of these in database design is not well understood.

Projects frequently do not verify requirements with structured peer reviews.

Projects do not use systematic methods to analyze consistency of requirements and correct defects in consistency.

Projects have difficulty keeping the requirements data in a requirements traceability database under configuration control and consistent with the output of other CASE tools.

Requirements traceability is often only done down to the CSCI level instead of the code unit level as it should be.

5. Planning and Tracking

Many programs visited have not conducted the planning necessary to ensure adequate system design, management control of the development, or management visibility as to cost and schedule. This lack of planning manifests itself in cost overruns, failed acceptance testing, slipped schedules, and the de-synchronization of planned integration of interfacing systems.

Many projects do not report earned value metrics.

The activity network used as the basis of earned value project cost and schedule status tracking is grossly inadequate in one or more of the following ways: a high percentage of tasks are level of effort; tasks have durations of many months and cost a significant percentage of total cost; tasks do not have unambiguous exit criteria; or the effort to develop the software part of the system is not isolated in software-only tasks, making it impossible to track the status of software development. (A sketch of earned value tracking against unambiguous exit criteria follows at the end of this section.)

External dependencies are not properly integrated into the task activity network, which results in little management visibility of the impact of slips in external milestones.

Risk management is not integrated into the task activity network with risk resolution tasks.

Projects believe that the new acquisition streamlining regulations mean that the contractor should not be required to develop such things as a Software Development Plan, Configuration Management Plan, and Risk Management Plan.

Projects that develop their final software with a sequence of incremental releases often do not prepare an incremental release build plan up front that explicitly defines what capabilities will be added in each release.
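
To illustrate the earned value deficiencies noted above, here is a minimal sketch of earned value tracking in which every task has a binary exit criterion, so credit is never taken from percent-complete guesses; all figures are hypothetical.

    # Every task carries a budget and a binary exit criterion: earned value is
    # credited only when a task is 100 percent complete, never from engineering
    # "guestimates" of percent complete. All figures are hypothetical.
    tasks = [
        # (budget in person-hours, scheduled to finish by now, exit criterion met)
        (120, True,  True),
        (200, True,  False),
        ( 80, True,  True),
        (160, False, False),
    ]
    acwp = 260  # actual cost of work performed to date, person-hours

    bcws = sum(b for b, scheduled, _ in tasks if scheduled)  # planned value
    bcwp = sum(b for b, _, done in tasks if done)            # earned value
    cpi = bcwp / acwp  # cost performance index; below 1.0 means overrunning
    spi = bcwp / bcws  # schedule performance index; below 1.0 means behind
    print(f"BCWS={bcws}  BCWP={bcwp}  ACWP={acwp}  CPI={cpi:.2f}  SPI={spi:.2f}")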

6. Products Required for Delivery by the Contract

The contract often does not require delivery to the DoD program office of some products the government paid the contractor to produce that are needed for post-delivery software support. When these products are not delivered, it is excessively costly to bring on a different contractor for software support. The products include:

Software development files that contain, among other things, the rationale for design decisions

Output of CASE tools that corresponds to the delivered product

Test drivers and test results from successful tests

Contractor-developed tools in the software development environment

7. Interface Management

Systems that have to interface with other systems must coordinate their schedules as well. The slippage of one project's schedule will have dire effects on other projects that have to interface with it. These schedules must also consider the schedules of the platform on which the system is to be deployed. A schedule slip may cause the loss of a window of opportunity for testing on the designated platform. This could result in the decision to pull the system off the platform and onto another platform that may be rife with problems regarding compatibility, crew training, etc.

Problems will surely arise when two or more systems that have not interfaced together before are integrated. The sooner these problems are identified and resolved, the greater the success of the program.

There often is a conflict between the use of COTS, the need for interoperability between systems, and the DoD interoperability standards.

Programs frequently do not identify all the external interfaces during initial systems requirements definition.

Two programs could be in compliance with the JTA, ATA, etc., yet not be able to interoperate. Programs do not seem to understand this.

The embryonic and evolving service and joint operational architectures are causing programs to move out on their own to develop what amounts to an operational architecture for their narrow part of the battlespace.

8. Visibility

Visibility of cost and schedule status of the software development part of embedded weapon system projects is often negligible.

The technical staff doing the work is not made aware of the big picture.

There is no visibility into the extent to which contractor technical staff positions are not currently filled and the new positions that need to be filled in the next six months.

Defects are not tracked until the beginning of testing.

Monthly or more frequent PM-level review of risk status is lacking.

An adequate set of metrics to give early warning of potential problems is not collected or reported to the PM at frequent intervals.

Even though metrics from past projects consistently show rework to consume 45 percent or more of total development cost, there is seldom any visibility into the amount of rework being done on a program.

Even though metrics from past programs consistently show that the cost of finding and fixing a defect increases very rapidly with the time between making and finding the defect, there is seldom any visibility of the time between making and finding defects on the current project.

Many organizations treat the reporting of risks as bad news and penalize programs as a result. This places software managers in the position of having to cover up risks even though those risks have not yet materialized into problems. Management needs time to deal with risks and to avoid the problems that can destroy a project. The needed time can only be gained through early identification and reporting of risk.

If software managers feel that the levels of management above them will penalize the program when risks are reported, they will not raise risks up the management chain, thereby forfeiting the time needed to deal effectively with them. Management should elicit and welcome the early warning and reporting of risks and reward programs that successfully avoid problems.

9. Cost Estimation

Several programs visited have not conducted independent size and cost estimates by reputable consultants. Not conducting these estimates almost always results in schedule compression in development that is unrecoverable. This increases the risk of delay in system development, reduced functionality, and failed acceptance testing as the program cuts corners in the development process in an effort to recoup some schedule loss.

In one program, the development methodology had been altered without the estimator knowing about it. Changing the schedule and development method invalidates the effort estimates.

Knowledge of industry metrics related to productivity and major consumers of cost as indicators of potential gross errors in cost estimation is lacking.

Knowledge on how to estimate the size of the software to be developed is lacking.

There is frequently no bottom-up engineering cost estimate to reconcile with a top-down cost estimate based on a cost model.

Rule-of-thumb sanity checks on the validity of a cost estimate are not used.

Estimates of the cost of code reuse, including the integration of COTS software, frequently turn out to be significantly lower than actual costs.

Cost is often based on a productivity higher by a large factor than any productivity the organization has ever achieved previously, on the premise that a new technology the contractor has never used before will deliver this higher productivity.

10. Schedule Compression

There is no analysis of schedule compression because there is generally no knowledge of this major reason for project failure.

Knowledge of ways to get out of high schedule compression risk is lacking.

11. Rework

Rework is off the radar screen as a potential killer of cost and schedule.

First inspections are informal code walkthroughs despite the fact that metrics consistently show (i) impact of requirements and design defects is much greater than the impact of code defects and (ii) the cost of finding and fixing a defect grows very rapidly with the time between making and finding the defect.

The amount of rework done on the project is not tracked.

There is frequently no knowledge of how to conduct formal structured peer reviews of the requirements.

There is frequently no knowledge of modeling and simulation methods for validating how an architecture will meet performance, security, reliability, and safety requirements.

12. Reuse

Many projects are using, or attempting to use, object orientation in the development of their systems. Programs are tempted to apply object-oriented development to legacy systems to increase functionality to meet projected requirements, in an effort to save development cost and time. This is a reuse technique employed to save development cost, but if the legacy system architecture does not support reuse, the success of such a technique is limited. Some projects have abandoned object orientation.

Many programs are looking to reuse to reduce development time and cost. SPMN has yet to see significant savings through the use of reuse technology because legacy system architecture does not support reuse.

The technical obstacles to successful reuse are not well understood.

Projects frequently find that the effort, and hence cost, to reuse code is much greater than anticipated.

There is little understanding on many programs of the role of architecture in making reuse successful.

Claims of COTS vendors that are contrary to reality tend to be believed.

Understanding of the costs and more stringent requirements of developing a software module for future reuse is lacking.

There is little understanding of the ways a COTS product may be designed to make it very costly to replace in the future with a better-value product, or of how to mitigate this.

There is more concern about compliance with the DII COE, JTA, and data standards than about using them to achieve interoperability and lower the cost of maintenance.

A number of projects have based their software cost estimate on massive reuse of source code in an object-oriented design when that source code was not developed to execute in an object-oriented design.

Projects frequently do not understand that the potential savings from reuse of source code decline very rapidly with the amount of modification that has to be made.
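
One way to see how quickly reuse savings decline is a COCOMO-style adaptation adjustment factor, which estimates the fraction of from-scratch effort needed to adapt reused code from the percentages of design modified, code modified, and integration effort required. The sketch below uses illustrative percentages only.

    def adaptation_factor(design_mod, code_mod, integration):
        # COCOMO-style adaptation adjustment factor: percentages (0-100) of
        # design modified, code modified, and integration effort required,
        # yielding the percentage of from-scratch effort needed to reuse code.
        return 0.4 * design_mod + 0.3 * code_mod + 0.3 * integration

    # Illustrative cases: savings erode rapidly as modification grows.
    for dm, cm, im in [(0, 0, 10), (10, 20, 30), (30, 50, 50), (50, 75, 100)]:
        aaf = adaptation_factor(dm, cm, im)
        print(f"design {dm:3}%  code {cm:3}%  integration {im:3}%"
              f" -> {aaf:5.1f}% of new-development effort")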

13. Architecture

Requirements that should be primarily met with architecture are not defined until after the architecture is designed (e.g., performance, reliability, security, and safety).

The crucial role that architecture plays in reuse is not understood.

The basic architectural concepts of the DII COE are frequently not understood, such as decoupling persistent data, application code, and hardware to the maximum extent possible.

Logical static architectures are incorrectly specified for dynamic systems. High rework can result from not understanding how to evolve an architecture in an evolutionary development life cycle.

Some real-time tactical systems have processor and bus designs that result in the systems operating at 80 to 95 percent of capacity under normal operating conditions. These systems will overload and fail in real tactical and casualty scenarios. (A queueing sketch at the end of this section illustrates why.)

There is poor understanding of object-oriented design: what it is, where it provides a payoff, the intellect and skills that are needed, and how to trace requirements from a top-down system design into a software object-oriented design.

The method used to develop an architecture is often the result of a CASE tool selection based on criteria such as staff experience instead of selecting the method that is best for the application and then finding the best CASE tool that enforces that method.

It is often not understood that Ada 95 and C++ are intended to facilitate coding an object-oriented architecture, with a number of features that significantly improve the understandability, modularity, and reuse of source code.
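
Regarding the 80 to 95 percent capacity problem noted above, a first-order M/M/1 queueing approximation shows why such designs fail under stress: mean response time grows as service time divided by (1 - utilization), so latency explodes as utilization approaches 100 percent. The numbers below are illustrative.

    # First-order M/M/1 queueing approximation: mean response time is roughly
    # service_time / (1 - utilization). A processor sized to run at 90 to 95
    # percent of capacity in the nominal case has no headroom, so the load
    # spikes of a real tactical or casualty scenario drive it past saturation.
    service_time_ms = 2.0
    for utilization in (0.50, 0.80, 0.90, 0.95, 0.99):
        response_ms = service_time_ms / (1.0 - utilization)
        print(f"utilization {utilization:4.0%} -> mean response {response_ms:7.1f} ms")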

14. Quality

Projects rarely have qualitative, let alone quantitative, quality goals.

Tactical real-time systems are not designed with consideration for safety as specified by MIL-STD-882D. In fact, many development contractors don't know that such guidance exists.

Often, the Software Development Plan does not incorporate an adequate formal structured peer review process to identify defects as soon as possible.

Although metrics consistently show that the negative impact of defects in requirements or architecture is much greater than the negative impact of defects in code, inspections are often limited to code.

Most projects do not model and simulate architecture for the purpose of finding architecture defects before detailed design is started.

There is no understanding of the relationships between quality, productivity, and rework.

Many projects lack standards and conventions for coding.

Many projects do not compute code complexity metrics or set threshold values for these metrics that trigger analysis of the acceptability of the quality. (A sketch follows at the end of this section.)

Most projects do not require quality standards designed to attack one of the major causes of excessive maintenance cost: poor understandability.

Projects do not set test coverage requirements for white-box testing of code units.

Projects seldom set explicit goals for defect removal efficiency and delivered defect density.
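
As a sketch of the complexity-threshold practice noted above, the following Python fragment approximates McCabe cyclomatic complexity from a parse tree and flags functions exceeding a project threshold; the counting rule is a simplification and the file name is hypothetical.

    import ast

    def cyclomatic_complexity(func: ast.FunctionDef) -> int:
        # Approximate McCabe complexity: 1 plus one per decision point.
        # (Counting boolean operators per group, not per operand, is a
        # simplification; a production tool would be more precise.)
        decisions = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp)
        return 1 + sum(isinstance(n, decisions) for n in ast.walk(func))

    THRESHOLD = 10  # illustrative trigger value; set it in project standards

    with open("module_under_review.py") as f:  # hypothetical file name
        tree = ast.parse(f.read())
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            cc = cyclomatic_complexity(node)
            if cc > THRESHOLD:
                print(f"line {node.lineno}: {node.name}() has complexity {cc}; "
                      "analyze for acceptability or refactor")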

15. Hiring and Retaining Technical Staff

One of the mistakes that bidders make is not including in their level-of-effort estimates sufficient time for the trial and error associated with integrating COTS. Trial and error takes time and should be factored into the schedule.

Contractors having several projects requiring similar skills will shift their key people from one project to another based on the priority among the projects. This "rob-Peter-to-pay-Paul" management technique wastes precious productivity because it takes a staff member time to get acquainted with the project he/she is being assigned to. This practice can also result in the robbed project having to stop production until the person comes back to the project. This is particularly true when a key/scarce skill is involved. Again, this results in the project running behind schedule and contributes to employee burnout as people have to work overtime for extended periods.

A popular management technique is to "matrix" the staff to several projects. Managers must weigh the pros and cons of this management technique, given the scenario provided in the previous bullet.

Organizations are not tracking what their employees do in a given day and are not ensuring that their activities are directed toward productive work. Personnel assigned to produce a product in support of the project are vectored off to support administrative duties that could be passed on to non-key personnel. Activities that should be reviewed for possible delegation include meeting attendance, correspondence preparation, documentation of procedures, and briefing preparation. This may require reversing the current management ploy of saving staffing costs by eliminating administrative personnel, but it may result in greatly increased productivity without hiring already scarce skills.

Even though programs are having a very hard time hiring and retaining staff, no one is quantitatively aware of the acute shortage of people in the U.S. with the skills needed to develop software.

The government does not require the contractor to report the time-phased profile of new people that will have to be hired to perform work on the contract, nor does the contractor voluntarily disclose this to the government.

Contracts are written with fixed labor rates that prevent the contractor from paying the salaries to the software technical staff that are needed to compete with commercial companies.

Software people who work for hardware companies are treated as second-class employees, with the result that the good software staff have little loyalty to the company.

Management drives the technical staff to work hours much in excess of 40 hours per week for extended periods to compensate for management mistakes and/or deliberate proposals/contracts with unrealistically low costs or high schedule compression, with the result that a high percentage of the staff quit the company before the project is completed.

A number of DoD contractors have experienced technical staff turnover rates in excess of 50 percent in one year.

Defense contractors are generally unwilling to pay people with critical software skills compensation that comes close to what these people can easily get with companies in the commercial sector, and most government contracts and contract awards will not allow the contractor to pay market rates to these people.

Defense contractors and government program offices frequently provide office space and equipment that is very inferior to that provided by companies in the commercial sector who are hiring people with high-demand software skills away from the Defense contractors.

Defense contractors frequently invest much less in training the software staff in the technical skills needed to perform on the current project than employers in the commercial sector.

16. Approach to Achieving Higher SEI Rating

The approach is focused on defining and documenting a large number of high-level processes that are not detailed enough to ensure that two groups following the same process will perform it in the same way.

The primary purpose is getting a higher SEI level evaluation because contractors believe this increases their chances of winning DoD source selections. Contractors do not consider whether a selected process has been consistently demonstrated to yield a significant improvement in one or more of development cost, maintenance cost, schedule, user satisfaction, or cost and schedule predictability.

Contractor SEPGs that define the processes called for in each SEI level are staffed by people with little hands-on experience in the development of large-scale software.

17. Integrated Product Teams

Each IPT goes its own way and feels free to ignore such things as the system specification.

Individuals do not have defined responsibilities and are not held accountable.

Individuals do not have the authority to meet their responsibilities.

Large numbers of IPTs want a person from an understaffed software group to participate in the IPT on a regular basis.

18. Configuration Management

The most common deficiency with configuration management observed by Focus Teams is insufficient breadth and change control of the developmental baseline. CM frequently focuses almost exclusively on deliverable documents and source code (the functional, allocated, and product baselines) and imposes inadequate discipline on the evolving intermediate products, COTS products, and software development environment. The result is that the development team works with unstable input with high defect density and frequently is unable to go back to earlier products to fix defects found later in the program.


For example, when a design error is found while writing code, the design error should be corrected by returning to the CASE tool that originally was used to develop the design. Proper CM of the developmental baseline would control the CASE tool output that contained the design to be fixed. This output file would be input to the CASE tool, the design corrected, and the impact of this fix propagated down through all other configuration-managed intermediate products back to the code where the design error was found.

Programs frequently don't understand the essential role software configuration management plays in a successful project.

Projects often don't recognize the need to manage all shared project information including non-deliverables.

The change control board structure used by software projects is often overly cumbersome. It does not adequately assess, prior to authorizing a change, the impacts of a proposed change or the risk and cost of making it.

Knowledge of how to integrate CASE tools into CM is lacking.

Programs that rely on code and/or software deliverables from other systems must become a part of those systems' CM process. The complexity of tracking such code effectively becomes too great if it is not tracked from its source, and oftentimes the baseline is lost. This results in extremely expensive, even cost-prohibitive, life cycle maintenance of the system.

19. Test

Some programs use a "big bang" integration test approach with a small number of builds for integration testing instead of the common best practice used in the commercial world of frequent integration builds and smoke tests.

Contractors and program offices do not perform negative testing as the system is being built. The first time the system is negatively tested is when it goes to acceptance testing by the end user. Additionally, the end user needs to be involved in development and testing early on. The system has to perform as the operator, not the technician who built it, will use it. Tests must be conducted against the system requirements. Adequate testing of the system as it is being developed, together with formal peer reviews, will minimize the cost and schedule slip associated with rework. On real-time control systems, dynamic testing must be conducted to stress the processors.

White-box testing of code units is often not done.

Requirements for white-box-test level of coverage are not often specified, nor is test coverage measured.

There is no two-way traceability between test cases and system/segment requirements, which leads to the need for excessive regression testing. (A sketch of a two-way traceability map follows at the end of this section.)

Undocumented patches to binary code are made during integration testing.

System testing of real-time systems is based on static requirements with no dynamic scenarios.

Cutting back on execution testing continues to be a common way to deal with a schedule slip, even with safety-critical weapon systems.

Build plans for a sequence of integration builds that optimize such items as the amount of test driver software to be developed are often not written.

Many programs do not understand and do not focus on the rigor of operational test during the engineering stages. As a result, the development and assurance processes focus on nominal conditions and do not consider or bulletproof the system against anomalous events, unexpected conditions, or various error conditions. They do not conduct "negative testing" and, as a result, many paths in the software are never executed during developmental testing. The first time those paths are ever run is during the unstructured free play characteristic of operational testing. Software-intensive programs should consider these anomalous paths from the beginning, bulletproof the system against them, and conduct traceable negative testing augmented by inspections of recovery procedures.

Many testing projects tend to ignore the need to test the design against requirements. Interim white-box tests stabilize the design before the capabilities of the system are demonstrated, from an external standpoint, against the requirements it was built for. All requirements-based tests should be preceded by white-box tests that ensure the validity of the design and software internals. Although this appears to take longer, it is in fact the shortest way to complete testing.
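
As a sketch of the two-way traceability noted above, the following fragment keeps forward and backward maps between test cases and requirements, exposing untested requirements and unjustified tests, and limiting regression testing to the test cases tied to a changed requirement; all identifiers are hypothetical.

    from collections import defaultdict

    # Hypothetical requirement and test-case identifiers, for illustration only.
    test_to_reqs = {
        "TC-001": ["SRS-101", "SRS-102"],
        "TC-002": ["SRS-102"],
        "TC-003": [],  # a test case justified by no requirement
    }
    all_reqs = ["SRS-101", "SRS-102", "SRS-103"]

    # Build the reverse map so traceability runs both ways.
    req_to_tests = defaultdict(list)
    for tc, reqs in test_to_reqs.items():
        for req in reqs:
            req_to_tests[req].append(tc)

    print("Requirements with no test case:",
          [r for r in all_reqs if r not in req_to_tests])          # ['SRS-103']
    print("Test cases tied to no requirement:",
          [tc for tc, reqs in test_to_reqs.items() if not reqs])   # ['TC-003']

    # When a requirement changes, the regression set is only its test cases,
    # not the entire suite:
    print("Regression set for SRS-102:", req_to_tests["SRS-102"])  # TC-001, TC-002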

20. Metrics

The threshold values for when metrics become indicators of a potential problem are not known.

When earned value cost/schedule status metrics are collected, the program office allows the contractor to collect these metrics in a way that greatly reduces their reliability as predictors of potential future cost and/or schedule problems. For example, a high percentage of tasks are level of effort with no measurable exit criteria. A high percentage of tasks have durations of many months with earned value credit given for engineering 'guestimates' of percentage complete.

Although many contractor organizations are defining processes for the purpose of getting a higher SEI level rating, few of these organizations are committed to collecting metrics, such as productivity, for the purpose of determining if newly introduced processes result in better performance in one or more of user satisfaction, cost, schedule, quality, and cost/schedule predictability.

Requirements volatility is rarely measured.

Quality metrics such as defect removal efficiency and source code cyclomatic complexity are rarely collected.
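
For reference, defect removal efficiency and delivered defect density are simple to compute once pre- and post-delivery defect counts are collected; the sketch below uses illustrative figures.

    # Defect removal efficiency (DRE): the fraction of total known defects
    # removed before delivery. All figures are illustrative.
    found_before_delivery = 470
    found_after_delivery = 30    # defects reported from the field
    size_kloc = 100.0            # delivered size, thousands of source lines

    dre = found_before_delivery / (found_before_delivery + found_after_delivery)
    delivered_density = found_after_delivery / size_kloc
    print(f"DRE = {dre:.1%}")                                       # 94.0%
    print(f"Delivered density = {delivered_density:.2f} defects/KLOC")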

21. Cost of Maintenance

Little attention is paid to the maintenance cost implications of actions during development.

There is little understanding of the high-leverage influences on the cost of maintenance that exist only during the development phase.

22. Software Development Environment/Tool Utility

Two programs did not properly analyze tool needs or map tool capabilities to those needs before using tools to manage the development process. This lack of forethought resulted in inadequate management of the system (software and hardware) being developed. The problem not only complicates the development process, it also complicates the maintenance of such systems after they are fielded, significantly increasing the cost of developing and maintaining a system. Furthermore, the efficiency of tool use is degraded when no forethought is given to how tools interface with each other in support of a project.

The following is an example of the process to determine if developmental tools are being properly used:

1. Draw a diagram showing every automated tool being used by the project, including software design, lower CASE, requirements traceability, configuration management, document production, etc. This diagram should show the output/input relationships between these tools, both from the point of view of what is automated and what is not.

2. Show what tool output is under CM and to what extent CM automation supports the CM developmental baseline control.

3. Show to what extent all this tool output, the paper documents produced by document creation tools, and the source code are data integrated so they are traceable (forwards and backwards) and kept consistent. Show how this data integration is accomplished.

4. Review the above-listed work and test it for validity through several mental scenarios.

Tools in the software development environment from different vendors are seldom data integrated in a way that automatically keeps the data output by all tools consistent.

CASE tool output that has passed task exit criteria is usually not effectively configuration managed.

Staff using a CASE tool for the first time usually experience a lower productivity than before the tool was used until the users understand the method enforced by the tool.

Many programs look at the technology of software development as an end rather than as a means to an end. It is often lost on projects that technology tools exist to assist in delivering a product rather than to be an end product of the project. The result is that technology is often applied that doesn't match the characteristics of the application, that requires significant training of the staff, or that is not supported adequately through tools.

Technology application should be backed up by current plans, by tradeoffs that evaluate delivery requirements against tools, techniques, and methodology, and by an evaluation of staff capability versus the technologies being used.

23. Contract/RFP Management

Several programs visited have entered into a time-and-materials contract with the vendor for the development of the system. There are no provisions to provide adequate visibility into the system design, productivity, cost, and quality of the system being built. Furthermore, the development plans are not deliverable items to the program office. This gives the contractor no incentive to deliver a quality product on time. If the product is unsatisfactory, the program office has no recourse but to task the contractor to do the work again, paying the contractor for the additional effort. Should the program office decide to hire another vendor to produce the system, that contractor has to start over from the beginning in design and development, because the prior contractor is not contractually obligated to provide system design or development documentation.

When the system is fielded, the program office is required to continue working with the development contractor because other vendors do not have the design or configuration documentation necessary to build and install upgrades to the system. SPMN has developed 16 Critical Software Practices for Performance-based Management that program offices can include in their contracts to require contractors to provide visibility into the production and development of a system and establish managerial disciplines necessary to manage the planning and development of a system.

Request for Proposal (RFP)

Several of the programs receiving Focus Team support are in the early phases of project development. This is an ideal time for SPMN involvement. SPMN was able to advise the program offices about the pitfalls that lie ahead in system design, requirements determination, and proposal evaluation. (SPMN provides questions for Section L of the RFP and has assisted programs in evaluating the bidders' responses. SPMN has also provided support in the development of Section M.)

On many programs, acquisition reform is producing results quite contrary to those intended. When a large development contract is awarded essentially sole source to one of the many winners of an earlier "contract vehicle" contracted as a time-and-materials, fixed-labor-rate, or delivery-order contract, there is little economic incentive for the development contractor to be highly productive and to deliver the product on schedule. In fact, revenue and profit increase with cost overruns and schedule slips.

Often the contractor behaves accordingly. The mandate in paragraph D.1.d. of DoD Directive 5000.1 is not met: "To ensure an equitable and sensible allocation of risk between government and industry, PMs and other acquisition managers shall develop a contracting approach appropriate to the type of system being acquired." The government assumes essentially all of the risk. This situation is exacerbated by a misconception in many program offices that the new acquisition regulations preclude the government program office from imposing a requirement for strong earned value cost and schedule control by the contractor.

24. Commercial-Off-the-Shelf (COTS) Products

The government is trying to leverage the use of COTS to reduce project schedule, cost, and development time. In reality, COTS products are often not held to the same quality standards as government-developed systems.

Warranties associated with COTS are voided when the system goes to the field.

COTS products do not appear to hold up as well as government-developed systems do.

COTS products are subject to modification without notification or documentation. This has resulted in the replacement part not functioning as expected in the system. Because the modification is unknown to the system repair technician, many hours are wasted troubleshooting the system. Because the military represents a very small percentage of the COTS market, COTS vendors are not likely to change their way of doing business to meet the military's needs.

