Software project management is a process of reasonably allocating and using various resources to achieve the established goals through a series of activities such as planning, organization and control. It is an effective measure to comprehensively evaluate the development work in the process of software development.

Contents

Software project management

Software sizing
  1. Line of code technique
  2. Function point technique

Workload estimation
  1. Static univariate model
     1. KLOC-oriented estimation models
     2. FP-oriented estimation models
  2. Dynamic multivariable model
  3. COCOMO2 model

Schedule planning
  1. Estimating development time
  2. Gantt chart
  3. Engineering network
  4. Estimating project progress
  5. Maneuver time
  6. Critical path

Personnel organization
  Personnel organization overview
  1. Democratic programmer group
  2. Chief programmer group
  3. Modern programmer group

Quality assurance
  1. Software quality
  2. Software quality assurance measures
     1. Necessity of technical review
     2. Walkthrough
     3. Inspection
     4. Program correctness proof

Software configuration management
  Overview of software configuration management
  1. Software configuration
     1. Software configuration items
     2. Baseline
  2. Software configuration management process
     1. Identify objects in the software configuration
     2. Version control
     3. Change control
     4. Configuration audit
     5. Status report

Capability maturity
  Capability Maturity Model overview
  CMM


Hello! I'm Little Grey Ape, a programmer who loves telling stories and sharing what he learns.

Today I want to talk with you about everything that happens in the software project management phase besides coding. May you all advance from technical specialist to product director!

 

 

Many people who are new to the software industry, or still studying it, feel that coding is the key step in software development. In fact, it is not. If we compare software development to building a bridge, coding is only the construction crew laying bricks; design, management, and maintenance make up far more of the work, and they are just as essential to the software process.

 

Software project management involves seven aspects: software sizing, workload estimation, schedule planning, personnel organization, quality assurance, software configuration management, and the capability maturity model.

 

Below, I will go through the tasks and estimation methods of each stage. (The article is long; feel free to work through it slowly.)

 

Software project management

 

1. What is software project management?

Management is the process of rationally allocating and using various resources, through a series of activities of planning, organizing, and controlling, in order to achieve established goals.

 

 

Software project management begins before any technical activity and continues throughout the software life cycle.

The software project management process begins with a set of project planning activities based on effort estimates and completion deadlines.

In order to estimate the project effort and deadline, you first need to size the software.

 

 

Software sizing

 

The common methods of software sizing are the line of code technique and the function point technique. Each has its own advantages and disadvantages, so let's analyze them in turn:

 

1. Line of code technique

Line of code technique is a relatively simple quantitative method to estimate software scale.

The number of source lines needed to implement each function is estimated from experience and from historical data on similar products developed in the past; when such historical data are available, the estimates are fairly accurate. Summing the source lines needed for each function then gives the number of source lines for the entire software.

 

Estimation method:

Estimates are made by several experienced software engineers.

Each engineer estimates the program's minimum size (a), maximum size (b), and most likely size (m).

After computing the averages ā, b̄, and m̄ of the three sizes over all the estimators, the estimated program size is obtained from the following formula:

L = (ā + 4m̄ + b̄) / 6

Unit: LOC or KLOC.
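As a sketch, the expected-value calculation above can be written out in a few lines; the three estimators and all their figures below are hypothetical illustration data:

```python
# Hypothetical data: three engineers each estimate the minimum (a),
# most likely (m), and maximum (b) size of the program, in LOC.
estimates = [
    # (a, m, b)
    (4000, 5000, 7000),
    (3500, 4800, 6500),
    (4500, 5200, 8000),
]

# Average each of the three sizes across the estimators.
n = len(estimates)
a_bar = sum(a for a, m, b in estimates) / n
m_bar = sum(m for a, m, b in estimates) / n
b_bar = sum(b for a, m, b in estimates) / n

# Weighted estimate: L = (a + 4m + b) / 6, weighting the most likely size.
loc = (a_bar + 4 * m_bar + b_bar) / 6
print(f"Estimated size: {loc:.0f} LOC ({loc / 1000:.2f} KLOC)")
```

Weighting the most likely size four times as heavily as the extremes is what keeps one pessimistic (or optimistic) estimator from dominating the result.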

 

Advantages of line of code technology:

  • Code is the “product” of all software development projects, and it is easy to count lines of code;
  • There are a lot of references and data.

Disadvantages of line of code technology:

  • The source program is only one component of the software configuration, so measuring software size by source lines alone is not entirely reasonable;
  • Different languages need different numbers of lines of code to implement the same software;
  • It is not applicable to non-procedural languages.

 

 

2. Function point technique

Function point technology estimates software scale based on the evaluation results of software information domain characteristics and software complexity.

This method measures software scale in units of function points (FP).

 

1. Information domain features

The function point technique defines five characteristics of an information domain:

  • Input items (Inp) : The number of items that a user enters into the software to provide application-oriented data to the software.
  • Output items (Out) : The number of items the software outputs to the user to provide application-oriented information to the user.
  • Query number (Inq) : A query is an online input that causes the software to produce some kind of immediate response in the form of an online output.
  • Master files (Maf): the number of logical master files.
  • External interface (Inf) : The total number of machine-readable interfaces through which information is transmitted to another system.

Each feature is assigned a function point according to its complexity, namely, information domain feature coefficients A1, A2, A3, A4 and A5, as shown in the following table.

 

 

2. Steps for estimating function points

(1) Calculate the unadjusted function points UFP

First, each feature in the product information domain is classified as simple, average, or complex, and each feature is assigned a function point based on its level.

Then the unadjusted function points are calculated with the following formula: UFP = A1×Inp + A2×Out + A3×Inq + A4×Maf + A5×Inf

where Ai (1 ≤ i ≤ 5) is the information domain feature coefficient, whose value is determined by the complexity level of the corresponding feature, as shown in Table 13.1.

(2) Calculate the technical complexity factor TCF

This step measures the influence of 14 technical factors on software size. The technical factors, denoted Fi (1 ≤ i ≤ 14), are listed in Table 13.2.

Each factor is assigned a value from 0 (nonexistent or has no impact on the software scale) to 5 (has a significant impact), depending on the characteristics of the software.

 

 

Then the combined degree of influence DI of the technical factors on software size is calculated with the following formula:

DI = F1 + F2 + … + F14

Technical complexity factor TCF is calculated by the following formula:

TCF = 0.65 + 0.01 × DI

Since DI is between 0 and 70, TCF is between 0.65 and 1.35.

(3) Calculate function points FP

FP = UFP × TCF
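Pulling the three steps together, here is a sketch in Python. The complexity weights below are the standard Albrecht values, standing in for the document's Table 13.1 (which is not reproduced here); all the counts and factor ratings are made-up illustration data:

```python
# Standard Albrecht complexity weights per information domain feature,
# as (simple, average, complex). The document's Table 13.1 may differ.
WEIGHTS = {
    "Inp": (3, 4, 6),
    "Out": (4, 5, 7),
    "Inq": (3, 4, 6),
    "Maf": (7, 10, 15),
    "Inf": (5, 7, 10),
}
LEVEL = {"simple": 0, "average": 1, "complex": 2}

def function_points(counts, levels, tech_factors):
    """counts: feature -> item count; levels: feature -> complexity level;
    tech_factors: 14 ratings, each 0 (no influence) .. 5 (strong influence)."""
    # Step 1: unadjusted function points, UFP = sum(Ai * count_i)
    ufp = sum(WEIGHTS[f][LEVEL[levels[f]]] * counts[f] for f in counts)
    # Step 2: degree of influence and technical complexity factor
    di = sum(tech_factors)        # 0 <= DI <= 70
    tcf = 0.65 + 0.01 * di        # 0.65 <= TCF <= 1.35
    # Step 3: FP = UFP * TCF
    return ufp * tcf

# Hypothetical product with average-complexity features throughout.
counts = {"Inp": 20, "Out": 10, "Inq": 5, "Maf": 4, "Inf": 2}
levels = {f: "average" for f in counts}
fp = function_points(counts, levels, tech_factors=[3] * 14)
print(f"FP = {fp:.1f}")
```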

 

Advantages of the function point technique:

It is independent of the programming language used, and is therefore more reasonable than the line of code technique.

Disadvantages of the function point technique:

Judging the complexity level of the information domain features and the degree of influence of the technical factors involves considerable subjectivity, so the method depends heavily on experience.

 

 

 

Workload estimation

 

Commonly used workload estimation models include the static univariate model, the dynamic multivariable model, and the COCOMO2 model.

It is important to note that the empirical data behind most estimation models come from limited samples of projects.

No estimation model applies to all types of software and all development environments.

With these two points clear, let's look at how each model works.

 

 

1. Static univariate model

 

The overall form is as follows:

E = A + B × (ev)^C

where A, B, and C are constants derived from empirical data, E is the effort in person-months, and ev is the estimation variable (KLOC or FP).

1. KLOC-oriented estimation models

(1) Walston-Felix model

E = 5.2 × (KLOC)^0.91

(2) Bailey-Basili model

E = 5.5 + 0.73 × (KLOC)^1.16

(3) Boehm's simple model

E = 3.2 × (KLOC)^1.05

(4) Doty model (applicable when KLOC > 9)

E = 5.288 × (KLOC)^1.047

2. FP-oriented estimation models

(1) Albrecht & Gaffney model

E = -13.39 + 0.0545 FP

(2) Matson, Barnett and Mellichamp model

E = 585.7 + 15.12 FP
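For illustration, the KLOC-oriented models above can be compared on one hypothetical 30 KLOC product. Note how widely the estimates diverge, which is exactly why no single model should be trusted outside the environment it was calibrated in:

```python
# Each model maps KLOC to effort E in person-months,
# using the coefficients listed above.
def walston_felix(kloc):  return 5.2 * kloc ** 0.91
def bailey_basili(kloc):  return 5.5 + 0.73 * kloc ** 1.16
def boehm_simple(kloc):   return 3.2 * kloc ** 1.05
def doty(kloc):
    assert kloc > 9       # the Doty model applies only when KLOC > 9
    return 5.288 * kloc ** 1.047

kloc = 30.0  # hypothetical product size
for model in (walston_felix, bailey_basili, boehm_simple, doty):
    print(f"{model.__name__:>14}: E = {model(kloc):6.1f} person-months")
```

The four answers range from roughly 43 to roughly 186 person-months for the same input, so in practice a model should be recalibrated against the organization's own historical data before being used.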

 

2. Dynamic multivariable model

Dynamic multivariable models, also known as software equations, treat workload as a function of software scale and development time.

The dynamic multivariable estimation model has the following form:

E = (LOC × B^0.333 / P)^3 × (1 / t^4)

where E is the effort in person-months or person-years; t is the project duration in months or years; B is a special technology factor that increases slowly with the need for testing, quality assurance, documentation, and management techniques. For smaller programs (KLOC = 5 to 15), B = 0.16; for programs over 70 KLOC, B = 0.39;

P is the productivity parameter, which reflects the impact of the following factors on workload:

  • Overall process maturity and management level;
  • The extent to which good software engineering practices are used;
  • The level of programming language used;
  • The state of the software environment;
  • Technology and experience of software project team;
  • The complexity of the application system.

When developing real-time embedded software, a typical value is P = 2000; for telecommunications systems and system software, P = 10000; for commercial applications, P = 28000. A productivity parameter suited to the current project can be derived from historical data.
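A quick sketch of the software equation with hypothetical inputs: a 33,000-line system-software product (so P = 10000, per the values above), a planned duration of 1.3 years, and B = 0.34, a value interpolated between the 0.16 and 0.39 endpoints purely as an assumption:

```python
# Software equation: E = (LOC * B**0.333 / P)**3 * (1 / t**4).
# With t in years, E comes out in person-years.
def software_equation(loc, t_years, b, p):
    return (loc * b ** 0.333 / p) ** 3 * (1.0 / t_years ** 4)

# Hypothetical project: 33 KLOC of system software, 1.3-year schedule.
# b=0.34 is an interpolated assumption, not a value from the text.
effort = software_equation(loc=33_000, t_years=1.3, b=0.34, p=10_000)
print(f"E = {effort:.2f} person-years")
```

Because t appears in the fourth power in the denominator, stretching the schedule even slightly reduces the predicted effort dramatically; that sensitivity to development time is the model's defining feature.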

 

3. COCOMO2 model

COCOMO stands for Constructive Cost model.

In 1981, Boehm first proposed the COCOMO model in his book Software Engineering Economics.

The COCOMO2 model, proposed by Boehm and colleagues in 1997, is a revised version of the original COCOMO model that reflects more than a decade of accumulated cost-estimation experience.

COCOMO2 provides effort estimation models at three levels; each successive level uses more detailed information about the software.

Estimation models of the three levels are as follows:

**Application composition model:** This model is mainly used to estimate prototyping effort; its name reflects the heavy use of existing components when building prototypes.

**Early design model:** This model applies to the architecture design phase.

**Post-architecture model:** This model applies to the development phase after the architecture design has been completed.

This model expresses the software development effort as a nonlinear function of KLOC:

E = A × (KLOC)^B × F1 × F2 × … × F17

where E is the development effort (in person-months), A is a model coefficient, KLOC is the estimated number of source code lines (in thousands), B is the model exponent, and Fi (i = 1 to 17) are the cost factors.

 

Each cost factor is assigned a value (called the workload factor) according to its importance and how much it affects the workload.

Compared with the original COCOMO model, the cost factors used by COCOMO2 model have the following changes.

  • Four new cost factors have been added: required reusability, the amount of documentation required, personnel continuity (i.e., staff stability), and multi-site development.
  • Two cost factors (computer switching time and use of modern programming practices) were omitted from the original model.
  • Some cost factors (analyst competence, platform experience, language and tool experience) have increased their impact on productivity (the ratio of maximum to minimum workload coefficients), while others (programmer competence) have decreased their impact.

 

To determine the value of the model exponent B in the effort equation, COCOMO2 uses a much more refined B grading model. It uses five grading factors Wi (1 ≤ i ≤ 5), each divided into six grades from very low (Wi = 5) to very high (Wi = 0), and B is then calculated with the following formula:

B = 1.01 + 0.01 × (W1 + W2 + W3 + W4 + W5)

Therefore, the value of B ranges from 1.01 to 1.26. Obviously, this classification model is more precise and flexible than the original COCOMO model.
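As a sketch, the post-architecture effort calculation might look like this; the coefficient A = 3.0, the grading values, and the nominal cost factors are all assumptions for illustration, not values given in the text:

```python
from math import prod

# Post-architecture COCOMO2 sketch: E = A * KLOC**B * product(Fi).
# a=3.0 is an assumed model coefficient; w holds the five grading
# factors Wi, each rated 0 (very high) .. 5 (very low).
def cocomo2_effort(kloc, w, cost_factors, a=3.0):
    assert len(w) == 5 and all(0 <= wi <= 5 for wi in w)
    b = 1.01 + 0.01 * sum(w)          # 1.01 <= B <= 1.26
    return a * kloc ** b * prod(cost_factors), b

# Hypothetical project: 25 KLOC, mid-range gradings, all 17 cost
# factors nominal (1.0, i.e. no net adjustment).
effort, b = cocomo2_effort(kloc=25, w=[3, 2, 2, 3, 2], cost_factors=[1.0] * 17)
print(f"B = {b:.2f}, E = {effort:.0f} person-months")
```

Because B is an exponent on KLOC, lowering the grading factors (a more mature, cohesive, experienced organization) flattens the diseconomy of scale for large products.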

 

The five grading factors used by COCOMO2 are described below:

**Precedentedness:** This grading factor indicates how novel the project is for the development organization.

**Development flexibility:** This grading factor reflects the extra effort needed to implement predetermined external interface requirements and to deliver the product early.

**Risk resolution:** This grading factor reflects the proportion of significant risks that have already been eliminated.

**Team cohesion:** This grading factor indicates how difficult it may be for the developers to collaborate with one another.

**Process maturity:** This grading factor reflects the project organization's process maturity as measured by the Capability Maturity Model.

 

 

Schedule planning

Schedule planning overview

Software project scheduling distributes the estimated project effort over the planned project duration by assigning the effort to specific software engineering tasks and specifying the start and end dates for completion of each task.

The schedule will evolve over time.

 

1. Estimating development time

The equations for estimating development time have a similar form across models, for example:

  • Walston-Felix model

T = 2.5 E^0.35

  • Original COCOMO model

T = 2.5 E^0.38

  • COCOMO2 model

T = 3.0 E^(0.33 + 0.2 × (B - 1.01))

  • Putnam model

T = 2.4 E^(1/3)

where E is the development effort (in person-months) and T is the development time (in months).
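The four time equations can be compared side by side; the effort E = 60 person-months and the COCOMO2 exponent B = 1.12 below are hypothetical inputs:

```python
# Development time T (months) as a function of effort E (person-months)
# for the four models above; b is the COCOMO2 model exponent.
def t_walston_felix(e):   return 2.5 * e ** 0.35
def t_cocomo(e):          return 2.5 * e ** 0.38
def t_cocomo2(e, b=1.12): return 3.0 * e ** (0.33 + 0.2 * (b - 1.01))
def t_putnam(e):          return 2.4 * e ** (1 / 3)

e = 60.0  # hypothetical effort estimate, person-months
for f in (t_walston_felix, t_cocomo, t_cocomo2, t_putnam):
    print(f"{f.__name__:>16}: T = {f(e):5.1f} months")
```

All four land in roughly the same 9-to-13-month band for this input, which is typical: the models disagree far more about effort than about calendar time.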

Experience tells us that individual productivity declines as development teams grow, so development time is not inversely proportional to the number of developers. There are two main reasons for this:

As the group gets larger, each person spends more time discussing problems and coordinating work with the other members, which increases communication overhead.

If new members are added during development, the team's overall productivity drops rather than rises for an initial period, because existing members must spend time bringing the newcomers up to speed.

Brooks's law: adding manpower to a late software project makes it later.

There is an optimal team size Popt at which total productivity is highest; experience suggests Popt = 5.5 people.

 

2. Gantt chart

The Gantt chart is a time-honored and widely used scheduling tool.

 

Example:

Old wooden house painting project (15 workers, 5 tools each)

 

 

Main advantages of the Gantt chart:

  • It vividly depicts the task decomposition and the start and end times of each subtask (job);
  • It is intuitive and simple, easy to master, and easy to draw.

Three major disadvantages of the Gantt chart:

  • It cannot explicitly show the dependencies among jobs;
  • It does not make the critical parts of the schedule clear, so it is hard to determine which parts should be the focus of attention and control;
  • It does not show the slack in the plan or its size, which often leads to wasted slack.

 

3. Engineering network

The engineering network is another commonly used graphical tool for scheduling. It can also depict the breakdown of tasks and the start and end times of each task.

It explicitly depicts the dependencies of each job on the other.

Engineering network is a powerful tool for system analysis and system design.

 

Symbol:

 

 

4. Estimating project progress

Information needed in an engineering network:

**Estimated duration of each job:** The length of an arrow is unrelated to the duration of the job it represents; the arrow shows only a dependency, and the number above it gives the job's duration.

**Earliest event time (EET):** The earliest time at which the event can occur.

**Latest event time (LET):** The latest time at which the event can occur without affecting the project completion time.

**Maneuver time:** The amount by which a job's actual start can be later than scheduled, or its actual duration longer than estimated, without affecting the project completion time.

 

 

Calculating the earliest event times:

The earliest time of an event is the earliest moment at which it can occur.

By convention, the earliest time of the first event in the network is defined as zero; the earliest times of the other events are computed from left to right, in the order in which events occur.

The earliest time EET is computed with three simple rules:

■ Consider all jobs entering the event;

■ For each such job, add its duration to the EET of its start event;

■ Take the maximum of these sums as the EET of the event.

 

Calculating the latest event times:

The latest time of an event is the latest moment at which it can occur without delaying the completion of the project.

By convention, the latest time of the final event (project completion) equals its earliest time. The latest times of the other events are computed from right to left, against the flow direction of the network.

The latest time LET is computed with three rules:

■ Consider all jobs leaving the event;

■ For each such job, subtract its duration from the LET of its end event;

■ Take the minimum of these differences as the LET of the event.

 

 

5. Maneuver time

Some jobs have a degree of leeway: their actual start can be later than scheduled, or their actual duration longer than estimated, without affecting the project completion time.

Maneuver time = (LET)end - (EET)start - duration (in the figure: the lower-right number minus the upper-left number, minus the duration)

By considering and exploiting maneuver time when drawing up the schedule, one can often arrange a plan that saves resources without affecting the final completion time.

 

 

6. Critical path

Events whose earliest and latest times are equal, connected by jobs whose maneuver time is 0, define the critical path, drawn with bold arrows in the figure.

Events on the critical path (critical events) must occur on time, and the actual duration of the jobs that make up the critical path (critical jobs) cannot exceed the estimated duration; otherwise, the project cannot end on time.
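The EET/LET rules and the slack formula above can be sketched on a small made-up network; jobs with zero maneuver time mark the critical path:

```python
# Hypothetical engineering network. Jobs are (start_event, end_event,
# duration); event 1 is the project start, event 5 its completion.
jobs = [(1, 2, 3), (1, 3, 6), (2, 4, 5), (3, 4, 2), (4, 5, 4), (3, 5, 9)]
events = sorted({e for j in jobs for e in j[:2]})

# Forward pass: EET of the first event is 0; for each later event,
# take the maximum of (EET of start event + duration) over entering jobs.
eet = {events[0]: 0}
for ev in events[1:]:
    eet[ev] = max(eet[s] + d for s, t, d in jobs if t == ev)

# Backward pass: LET of the final event equals its EET; for each earlier
# event, take the minimum of (LET of end event - duration) over leaving jobs.
let = {events[-1]: eet[events[-1]]}
for ev in reversed(events[:-1]):
    let[ev] = min(let[t] - d for s, t, d in jobs if s == ev)

# Maneuver time per job; zero-slack jobs form the critical path.
for s, t, d in jobs:
    slack = let[t] - eet[s] - d
    mark = "  <- critical" if slack == 0 else ""
    print(f"job {s}->{t}: slack = {slack}{mark}")
```

In this made-up network the zero-slack jobs are 1->3 and 3->5, so the critical path is 1 -> 3 -> 5 with a project duration of 15; every other job has 3 units of maneuver time.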

 

 

Personnel organization

Personnel organization overview

To succeed in software development, project team members must interact and communicate in a meaningful and effective way.

The manager should organize the project team reasonably, so that the project team can have high productivity and complete the work according to the scheduled schedule.

In addition to seeking better ways to organize, the goal of every manager is to build cohesive teams.

A highly cohesive group consists of a group of people who are so closely knit that the whole is greater than the sum of their parts.

 

1. Democratic programmer group

An important feature of the democratic programmer group is that its members are fully equal, enjoy full democracy, and make technical decisions by consensus. Communication among members is therefore peer-to-peer: if the group has n members, there are n(n-1)/2 possible communication channels.

The size of the programming team should not be too large, or they will spend more time communicating with each other than programming.

Democratic programmer groups are often organized informally, meaning that while there is a nominal leader, he performs the same tasks as the rest of the group.

 

Advantages of democratic programmer groups:

■ Team members have a positive attitude towards finding bugs, which helps to find bugs faster, leading to high-quality code;

■ Team members enjoy full democracy, high cohesion and strong academic atmosphere, which is conducive to overcoming technical difficulties;

■ The organizational approach is very successful when most of the group members are experienced.

Disadvantages of democratic programmer groups:

If most members of the team are not technically proficient or inexperienced, there will be no clear authority to guide the development of the project, and there will be a lack of necessary coordination among the team members, which may eventually lead to project failure.

 

2. Chief programmer group

Reasons for this form of organization:

Most software developers are inexperienced;

The programming process involves a great deal of clerical work, for example storing and updating large amounts of information;

Multichannel communication takes time and reduces programmer productivity.

 

Two important features of the chief programmer group:

1. Specialization. Each member of the group performs only the tasks for which they have been professionally trained.

2. Hierarchy. The chief programmer directs the team and takes overall responsibility.

A typical chief programmer group consists of a chief programmer, a backup programmer, a programming secretary, and one to three programmers.

 

 

Division of labor among the core members of the chief programmer group:

The chief programmer is both a capable manager and an experienced, highly skilled senior programmer, responsible for the architecture design and the detailed design of critical parts, and for guiding the other programmers in detailed design and coding.

The backup programmer should also be skilled and experienced; he assists the chief programmer and takes over when necessary (for example, when the chief programmer is ill, away, or leaves the company).

The programming secretary is responsible for all the project's clerical work, such as maintaining the project database and project documents, and compiling, linking, and running source programs and test cases.

 

Why the chief programmer group is unrealistic as organized above:

First, the chief programmer must combine a highly skilled senior programmer with a successful manager, and people who are both are in short supply.

Second, backup programmers are even harder to find.

Third, programming secretaries are also hard to find.

 

3. Modern programmer group

In practice, the role of the "chief programmer" should be shared by two people:

a technical leader, responsible for the team's technical activities, who takes part in all code reviews because he is responsible for every aspect of code quality; and

an administrative leader, responsible for all non-technical management decisions, who cannot take part in code reviews because his job is to appraise the programmers' performance. The administrative leader learns each member's technical ability and performance at regular status meetings.

 

 

Because a programmer group should not be too large, the programmers on a large software project are divided into several small groups. The diagram depicts the technical management structure; the non-technical management structure is similar.

 

 

Another way to combine the strengths of the democratic group and the chief programmer group is to decentralize decision-making where appropriate. This helps create unimpeded communication channels, gives full play to each programmer's enthusiasm and initiative, and lets the group pool its wisdom to overcome technical difficulties.

 

 

 

 

 

Quality assurance

1. Software quality

In a nutshell, software quality is “the degree to which software complies with explicitly and implicitly defined requirements.”

Software quality is the degree to which software complies with clearly stated functional and performance requirements, development standards clearly described in documentation, and implied characteristics that should be present in any professionally developed software product.

The definition highlights the following three main points:

■ Software requirements are the basis for measuring software quality; software that does not conform to the requirements is not high-quality software.

■ Specified development standards define a set of guidelines to guide software development, and failure to follow these guidelines will almost certainly result in poor software quality.

■ Often, there is an implicit set of requirements that are not explicitly described. If the software meets the explicitly described requirements, but not the implicit requirements, then the quality of the software remains questionable.

The factors that affect software quality are measures of software quality taken from a management viewpoint. These quality factors can be divided into three groups, reflecting three different perspectives a user takes on a software product: product operation, product revision, and product transition.

 

 

2. Software quality assurance measures

Software Quality Assurance (SQA) measures mainly include:

■ Non-execution-based testing (reviews), mainly used to assure the quality of the documents produced in the stages before coding;

■ Execution-based testing (software testing), which can only be carried out after code is written and is the last line of defense for software quality;

■ Program correctness proof, which uses mathematical methods to rigorously verify that a program conforms exactly to its specification.

 

1. Necessity of technical review

The significant advantage of formal technical reviews is that software errors can be found early, preventing them from being propagated to later stages of the software process.

Statistics show that 60-70% of errors detected in large software products are specification or design errors, and formal technical reviews are up to 75% effective in finding specification and design errors. By being able to detect and eliminate most of these errors, reviews can significantly reduce the cost of subsequent development and maintenance phases.

Formal technical review is a class of software quality assurance measures that includes walkthroughs and inspections as specific methods. A walkthrough has fewer steps than an inspection and is conducted less formally.

2. Walkthrough

A walkthrough group consists of four to six members. The group leader guides the members through the document, trying to find as many errors as possible.

The walkthrough group's task is only to mark errors, not to correct them; correcting the errors is the job of the group that wrote the document.

A walkthrough should last no more than two hours, and the time should be spent finding and marking errors, not correcting them.

There are two main ways of conducting a walkthrough:

■ Participant-driven

■ Document-driven

3. Inspection

■ Overview: One of the authors of the document presents it to the inspection team.

■ Preparation: The inspectors read the document carefully.

■ Inspection: The inspection team sweeps through the entire document.

■ Rework: The document's author resolves all errors and problems listed in the inspection report.

■ Follow-up: The team leader must make sure that every problem raised has been satisfactorily resolved.

Normally the inspection team consists of four people. The team leader is both the manager of the inspection team and its technical lead. The inspection process not only has more steps than a walkthrough; each step is also conducted formally.

4. Program correctness proof

In the early 1960s, people began to study techniques for proving program correctness and proposed many different methods.

Manual correctness proofs may have some value for evaluating small programs, but for large software they require too much work and, more importantly, the proofs themselves easily contain errors, so they are impractical. For practical use, automatic systems that can prove program correctness are needed.

Systems that prove the correctness of PASCAL and LISP programs have been developed and are being evaluated and improved; for now, they can handle only fairly small programs.

 

 

Software Configuration Management

Overview of software configuration management

Software configuration management is a set of activities that manage change throughout the life of software.

Specifically, this group of activities is used to: ① identify changes; ② control changes; ③ ensure that changes are properly implemented; ④ report changes to others who need to know.

The goal of software configuration management is to make change easier to accommodate and to carry out correctly, and to reduce the effort required when changes have to be made.

 

1. Software configuration

1. Software configuration items

The output information of the software process can be divided into three categories:

■ Computer programs (source code and executable program);

■ Documentation describing computer programs (for technical personnel or users);

■ Data (contained within or outside the program)

These items make up all the information generated during the software process and are collectively referred to as software configurations, and these items are software configuration items.

2. The baseline

Baselines are a software configuration management concept that helps us control change without seriously impeding reasonable change.

The IEEE defines a baseline as a specification or product that has been formally reviewed and agreed upon, that thereafter serves as the basis for further development, and that can be changed only through a formal change control process.

In short, a baseline is a software configuration item that has been formally reviewed.

 

Software configuration management process

Specifically, software configuration management comprises five tasks: identification, version control, change control, configuration auditing, and status reporting.

1. Identify objects in the software configuration

To control and manage software configuration items, you must name each configuration item individually and then organize them in an object-oriented way. Two types of objects can be identified:

■ Basic objects are “text units” created by software engineers during analysis, design, coding, or testing.

■ An aggregation object is a collection of base objects and other aggregation objects.

2. Version control

Version control combines procedures and tools to manage different versions of configuration objects created during software engineering.

With the help of version control technology, users can specify the configuration of the software system by selecting the appropriate version.

3. Change control

Typical change control processes are as follows:

■ First assess the technical gains and losses of the change, possible side effects, overall impact on other configuration objects and system functionality, and estimated modification costs.

■ Generate an engineering Change order for each approved change. The objects to be modified are “extracted” from the project database, modified, and the appropriate SQA activities applied.

■ Finally, the modified object is “committed” to the database and the next version of the software is created with the appropriate version control mechanism.

4. Configuration audit

To verify that a change has been properly implemented, measures are usually taken from the following two aspects:

■ Formal technical review, focusing on the technical correctness of the modified configuration object.

■ Software configuration audits complement formal technical reviews by assessing characteristics of configuration objects that are not normally considered in the review process.

5. Status report

A configuration status report is written to answer the following questions:

■ What happened?

■ Who did it?

■ When did it happen?

■ What other things will it affect?
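A status report is essentially a summary of change events against these four questions. A minimal sketch, assuming each event is recorded as a dict with `what`, `who`, `when`, and `affects` fields (a schema invented for this example):

```python
def status_report(events):
    """Render change events as one report line each, answering
    what happened, who did it, when, and what else is affected."""
    lines = []
    for e in events:
        affected = ", ".join(e["affects"]) or "none"
        lines.append(f"{e['when']}: {e['who']} {e['what']}; affects: {affected}")
    return "\n".join(lines)
```

In practice such a report would be generated from the change-control records themselves, so that everyone on the project learns of each change automatically.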

 

Capability maturity

Capability Maturity Model overview

The Capability Maturity Model (CMM), established by the Software Engineering Institute of Carnegie Mellon University in the late 1980s with the support of the U.S. Department of Defense, is used to evaluate the capability maturity of software processes in software organizations.

Definition of CMM: the CMM is a description of the stages through which a software organization evolves as it defines, implements, measures, controls, and improves its software processes.

A variety of CMM based models constitute a CMM family:

  • SW-CMM: Capability Maturity Model for Software.
  • P-CMM: People Capability Maturity Model.
  • SA-CMM: Software Acquisition Capability Maturity Model.
  • IPD-CMM: Integrated Product Development Capability Maturity Model.
  • SE-CMM: Systems Engineering Capability Maturity Model.
  • SSE-CMM: Systems Security Engineering Capability Maturity Model.

The basic idea of the CMM is that problems with software development stem from the way the software process is managed, so adopting new software technologies does not automatically improve productivity and quality. The CMM helps software development organizations establish a disciplined, mature software process.

Software process improvement is an incremental process based on the completion of one small improvement step after another, rather than a complete revolution. The CMM divides the evolution of software processes from disorder to order into five stages, and ranks these stages to form five ascending levels.

 

CMM

The CMM consists of the following components:

  • Maturity Levels;
  • Process Capability;
  • Key Process Areas (KPA);
  • Objectives (Goals);
  • Common Features;
  • Key Practices.
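The relationship between maturity levels and key process areas can be captured in a small lookup table. The level names match the five levels described below; the KPA lists here are abbreviated examples from CMM v1.1, not the complete tables.

```python
# Maturity level -> (name, representative key process areas).
# Level 1 has no KPAs; the KPA lists for levels 2-5 are abbreviated.
CMM_LEVELS = {
    1: ("Initial", []),
    2: ("Repeatable", ["Requirements Management", "Software Project Planning"]),
    3: ("Defined", ["Organization Process Definition", "Peer Reviews"]),
    4: ("Managed", ["Quantitative Process Management", "Software Quality Management"]),
    5: ("Optimizing", ["Defect Prevention", "Process Change Management"]),
}


def assess(level):
    """Return the level name and the KPAs an organization must satisfy at it."""
    name, kpas = CMM_LEVELS[level]
    return name, kpas
```

Reaching a level means satisfying the goals of all its KPAs (plus those of every level below), which is why the table is naturally cumulative.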

 

 

The CMM has three main uses:

Software process assessment: using the CMM to analyze the current state of the software process in a software organization and to identify its strengths and weaknesses.

Software process improvement: used by software development organizations to improve their processes for developing and maintaining software. Based on the assessment results, improvement strategies are developed to move the process from a lower level to a higher one.

Software capability evaluation: used by a government agency or commercial enterprise to evaluate the risk of contracting a software project to a particular software company.

1. The initial level

Software processes are characterized by disorder, sometimes even chaos. Few processes are defined (that is, there is no established process model), and the success of a project depends entirely on the individual abilities of the developers.

A software organization at this lowest level of maturity lacks a sound software engineering management system; its software process depends entirely on the staffing of the project team, so it is unpredictable and changes as personnel change.

2. Repeatable level

Software organizations establish basic project management processes (process models) that track cost, schedule, functionality, and quality. The necessary process specifications have been established, and the planning and management process of the new project is based on the practical experience of previous similar projects, so that software projects with similar application experience can be successful again.

The process capability of a software organization at level 2 maturity can be summarized as follows: the planning and tracking of software projects are stable, providing a disciplined management environment in which previous successes can be repeated. Software project engineering activities are under the effective control of the project management system, which implements realistic plans based on the results of previous projects.

3. Defined level

Software organizations have defined complete software processes (process models), and software processes have been documented and standardized. All project teams use documented, approved processes to develop and maintain software. This level contains all the characteristics of level 2.

The process capability of a software organization at level 3 maturity can be summarized as follows: both management and engineering activities are stable. The cost and schedule of software development, as well as the functionality and quality of products, are under control, and the quality of software products is traceable. This capability rests on a shared understanding, within the software organization, of the activities, roles, and responsibilities of the defined process model.

4. Managed level

The software organization has established quantitative quality objectives for both software processes (process models and process instances) and software products, and significant process activities for all projects are measurable. The software organization collects and applies the methods of process measurement and product measurement, which can understand and control the software process and product quantitatively, and lays a foundation for evaluating the process quality and product quality of the project. This level contains all the characteristics of level 3.

The process capability of a software organization at level 4 maturity can be summarized as follows: the software process is measurable and operates within measurable limits. This level of process capability allows a software organization to predict process and product quality trends within a quantitative framework, to take timely corrective action when deviations occur, and to expect software products of high quality.

5. The optimizing level

Software organizations focus on the continuous improvement of software processes. An organization at this level aims to prevent defects by identifying weaknesses in software process elements and has sufficient means to improve them. In such an organization, statistical data on the effectiveness of software processes is available and can be used for cost/benefit analysis of new technologies, so that the best new technologies can be selected for adoption in software engineering practice. This level contains all the characteristics of level 4.

The process capability of a software organization at level 5 maturity can be summarized as follows: the software process is continuously optimized. Software organizations at this level can continuously improve their process capability, both by refining and optimizing existing process instances and by adopting new technologies and methods to realize future process improvements.

 

The above covers the main tasks and evaluation methods of the seven areas of software project management. If there are any shortcomings, corrections are welcome.

 

If this article helped you, give it a thumbs up and follow Big Bad Wolf!

 

At the same time, those who like Python can follow my WeChat public account “Gray Wolf Hole master” and reply “Python Notes” to get the notes Python from Beginner to Master and a quick-reference manual of common Python functions and methods!

 

Big bad Wolf is looking forward to progress with you!