
Business Area CT Team

Implementation Review Round 3

Interviewee HK (Product Owner), MR (Program Manager)

Date 26-04-2020

…highlighting the current status, risks and impediments has not been demonstrated.

It may not be possible to identify significant delay in deliveries and take timely corrective actions.

2 MC Retrospectives are conducted at a regular frequency; however, there is no evidence of focused action items towards process improvements.

Improvement in process performance may not be ensured.

3 … This is a best practice.

4 PR There is no mechanism to verify coverage of test scripts against product functionality.

It may not be ensured that the solution meets customer expectations.

PLAN User stories are not further broken down into tasks with their corresponding effort.

Remaining work to be completed at any point of time is not known.

Also, significant delays are not identified early for appropriate actions.

7 PLAN Confluence is extensively used to store vital information such as the integration testing process, Dev, Integration and Release environment details, and master links to all processes.

This is a best practice. It ensures that all necessary information is readily available to all stakeholders for reference and usage.

Appendix 2. Findings of IR3 of Case Project 2

Business Area AP Team

Implementation Review Round 3

Interviewee HR (Product Owner), MR (Program Manager)

Date 26-04-2020

…solution is built for Customers.

2 MC Impediments are not formally captured and tracked to closure.

It cannot be ensured that all impediments are effectively tracked to closure.

3 RDM Requirements are available at a very high level in Confluence. A detailed requirements specification is not evidenced.

It cannot be ensured that there is a deeper mutual understanding of requirements, which may lead to inconsistency between requirements and solution and, in turn, customer dissatisfaction.

5 TS Technical specification review comments are not captured formally although the Confluence page has undergone changes.

It cannot be ensured that all review comments are tracked to closure.

6 PR There is no evidence of test case review for coverage and completeness.

It cannot be ensured that the solution meets customer requirements.

8 PLAN User stories are not further broken down into tasks with their corresponding effort.

Remaining work to be completed at any point of time is not known. Also, significant delays are not identified early for appropriate actions.

9 MPM There is no evidence of inference drawn from the KPI dashboard to identify areas of improvement.

It may not be ensured that performance objectives are met.

10 RSK There is no evidence of structured risk handling, although a few risks are identified during Quarter planning and weekly progress meetings.

This may not ensure appropriate handling of risks, which may in turn impact the releases.

11 RDM Backlog grooming sessions are pre-scheduled and conducted fortnightly.

This is a best practice. It ensures that all stakeholders have a common and deeper understanding of requirements.

Appendix 3. Findings of IR3 of Case Project 3

Business Area KCE Group Team, Finland & India

Implementation Review Round 3

Interviewee JG (Product Owner), JM (Program Manager)

Date 27-04-2020

Team Specific Findings:

SI# Practice Area Description Impact

1 EST Story point estimation does not follow the Fibonacci series, which is defined as the common ways of working.

It cannot be ensured that the size of the story is estimated considering the complexities involved.
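As an illustration only (not drawn from the review evidence), a minimal Python sketch of how the agreed Fibonacci scale could be checked when estimates are entered; the scale values and names are assumptions:

# Agreed Fibonacci story point scale (values assumed for illustration).
FIBONACCI_SCALE = {1, 2, 3, 5, 8, 13, 21}

def is_on_scale(points: int) -> bool:
    # True if the estimate lies on the agreed Fibonacci scale.
    return points in FIBONACCI_SCALE

# Example: 4 is flagged because it is not on the Fibonacci scale.
for points in (3, 4, 8):
    print(points, "ok" if is_on_scale(points) else "not on the agreed scale")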

2 RDM There is no evidence of linkage between a user story in VersionOne and its corresponding requirements specification and technical specification.

It cannot be ensured that there is consistency between the requirements and solution, which may lead to customer dissatisfaction.

3 RDM Requirement review comments are not formally captured and tracked, although there are discussions on the same.

It cannot be ensured that all review comments are tracked to closure.

4 RDM Clarifications/queries raised on requirements are not formally captured and responded to.

It cannot be ensured that all clarifications have been responded to and agreed upon.

5 MPM There is no evidence of inference drawn from the KPI dashboard to identify areas of improvement.

It may not be ensured that performance objectives are met.

6 VV Root cause analysis is performed; however, analysis of root causes for patterns and the resulting corrective actions is not evidenced.

It cannot be ensured that process effectiveness improves over a period of time.

7 PLAN In some instances, user stories are not further broken down into tasks with their corresponding effort.

Remaining work to be completed at any point of time is not known. Also, significant delays are not identified early for appropriate actions.

8 EST Effort estimation is not done for a few tasks.

It may not be possible to identify significant delay in deliveries and take timely corrective actions.

9 PLAN There is no evidence of assessment of team members' knowledge and skills to identify training needs.

It may not ensure efficient and effective use of personnel resources.

10 PLAN A presentation is used during the Sprint planning ceremony which lists the priorities, details of the features and defects, and the release by which they need to be completed.

This is a best practice. It provides more clarity to the scrum team on the goals to be achieved for the sprint and keeps the team committed.

11 PLAN Developer demo meetings are conducted as knowledge sharing sessions within the team.

This is a best practice. It helps to raise the knowledge level of team members.

12 DAR Criteria for evaluation of alternate approaches/solutions and the selected outcome of critical decisions are not demonstrated.

Probability of selecting an optimal solution may not be ensured for critical decisions when applicable.

13 CAR Actions derived from root cause analysis that are proven effective are not established as process improvements.

Benefits resulting from the implemented actions may not be leveraged to improve ways of working.

14 CM The version control systems used to store various artifacts are not mentioned for ease of retrieval.

It may not be ensured that the right version of information is available when needed.

Appendix 4. Common SW Findings of IR3

Business Area All teams

Implementation Review Round 3

Findings:

1 … Consistency between requirements and solution cannot be ensured, which may lead to customer dissatisfaction.

2 RDM In a few teams, acceptance criteria are not captured for some stories.

Teams may not be aligned with the expected outcome.

3 RDM In some teams, there is no clear understanding of acceptance criteria and DoD, and the two are used interchangeably.

Teams may not be aligned with the expected outcome.

4 RDM In some teams, details of the user story and/or a link to the requirements specification are not provided in VersionOne.

The requirement corresponding to the user story may not be clearly identified.

5 RDM In a few teams, usage of DoR needs to be strengthened.

Execution of stories where DoR is not met may lead to rework during later stages.

6 TS In a few teams, software specification and design review comments are not formally captured and tracked to closure.

It cannot be ensured that defects are identified early in the life cycle to avoid rework at a later stage.

7 TS In some teams, the criteria based on which a technical decision has been made from available alternatives are not captured.

It cannot be ensured that the most effective solution is selected within cost, schedule and performance constraints.

8 VV In most teams, root cause analysis is performed on integration and RelLab testing defects; however, the root causes are not analyzed for improvement actions.

Proactive actions to prevent recurrence of defects may not be arrived at.

9 VV In some teams, the various types of testing which the user stories need to undergo are not explicitly specified.

Proactive actions to prevent recurrence of defects may not be arrived at.

10 PI In a few teams, there is low awareness of the integration strategy for the build process among team members.

It cannot be ensured that the right environment is available when needed and the integration strategy is consistently followed.

11 PR In most teams, peer review comments are not analyzed for improvement actions.

Proactive actions to prevent recurrence of review comments may not be arrived at.

12 PR In some teams, test case review comments are not captured and tracked to closure.

Sufficiency of test cases cannot be ensured, which may lead to rework during later stages of the life cycle.

13 EST In a few teams, story point estimation does not follow the defined Ways of Working (Fibonacci series).

Size of the solution may not be considered during story point estimation.

14 EST In a few teams, effort estimation is not performed for tasks.

Significant delay in deliveries may not be identified for timely corrective actions.

15 PLAN In most teams, a Program/team charter with the approach for how the work is to be performed and managed is not defined.

Alignment of stakeholders to the approach and commitment to the plan cannot be ensured.

16 MC In some teams, impediments faced by the teams are not formally captured and tracked to closure.

It may not be possible to ensure that identified impediments are actioned upon.

17 MC In most teams, significant deviations from plan are not monitored for appropriate corrective actions.

It cannot be ensured that appropriate actions are taken when there is a deviation.

18 RSK In some teams, although risks are captured at Program level, identification of risks at team level needs to be strengthened.

Proactive measures may not be taken at the right time to mitigate the risks.

19 RSK In most teams, risks identified are not revisited periodically for any change in risk parameters.

The current status of risks is not known to identify focus areas and initiate appropriate actions.

20 CAR In a few teams, root cause analysis of field issues to identify improvement actions is not evidenced.

It may not be possible to prevent repeated occurrence of the same negative outcomes.

21 CAR In most teams, actions derived from root cause analysis that are proven effective are not established as process improvements.

Benefits resulting from the implemented actions may not be leveraged to improve ways of working.

22 DAR In some teams, criteria for evaluation of alternate approaches/solutions and the selected outcome of critical decisions are not demonstrated.

Probability of selecting an optimal solution may not be ensured for critical decisions when applicable.

23 CM In most teams, the version control systems used to store various artifacts are not mentioned for ease of retrieval.

The location where the various artifacts are stored may not be known.

24 CM Retrieval of backups to ensure that the right version of a file is restored is not evidenced.

It may not be ensured that the right version of information is restored when needed.

25 GOV In a few teams, Product Owners are not aware of the mode of capturing discussion points out of periodic meetings.

It cannot be ensured that action items are effectively tracked to closure.

26 II A mechanism to contribute lessons learnt, best practices and process improvements back to the Organization is not evidenced.

Potential benefits of adopting new ways of working cannot be leveraged across all teams.

27 PQA Product quality reviews and usage of quality gates like DoR and DoD need to be strengthened.

It may not be ensured that the product meets customer requirements.

28 MPM In some teams, there is low awareness of the interpretation of KPI outcomes in line with team activities.

It may not be ensured that performance objectives are met.

29 MPM In most teams, inference is not drawn from the KPI dashboard to identify action items for improvement.

It may not be ensured that performance objectives are met.

Appendix 5. Summarised Findings of IR3

Business Area All teams

Implementation Review Round 3

Findings:

RDM In a few teams, acceptance criteria are not captured for some stories.

In some teams, there is no clear understanding of acceptance criteria and DoD, and the two are used interchangeably.

In some teams, details of the user story and/or a link to the requirements specification are not provided in VersionOne.

In a few teams, usage of DoR needs to be strengthened.

Consistency between requirements and solution cannot be ensured, which may lead to customer dissatisfaction.

The requirement corresponding to the user story may not be clearly identified.

Execution of stories where DoR is not met may lead to rework during later stages.

TS In a few teams, software specification and design review comments are not formally captured and tracked to closure.

In some teams, the criteria based on which a technical decision has been made from available alternatives are not captured.

It cannot be ensured that defects are identified early in the life cycle to avoid rework at a later stage.

It cannot be ensured that the most effective solution is selected within cost, schedule and performance constraints.

PI In a few teams, there is low awareness of the integration strategy for the build process among team members.

It cannot be ensured that the right environment is available when needed and the integration strategy is consistently followed.

VV In most teams, root cause analysis is performed on integration and RelLab testing defects; however, the root causes are not analyzed for improvement actions.

Proactive actions to prevent recurrence of defects may not be arrived at.


PR In most teams, peer review comments are not analyzed for improvement actions.

In some teams, test case review comments are not captured and tracked to closure.

Proactive actions to prevent recurrence of review comments may not be arrived at.

Sufficiency of test cases cannot be ensured, which may lead to rework during later stages of the life cycle.

EST In a few teams, story point estimation does not follow the defined Ways of Working (Fibonacci series).

In a few teams, effort estimation is not performed for tasks.

Size of the solution may not be considered during story point estimation.

Significant delay in deliveries may not be identified for timely corrective actions.

PLAN In most teams, a Program/team charter with the approach for how the work is to be performed and managed is not defined.

Alignment of stakeholders to the approach and commitment to the plan cannot be ensured.

MC In some teams, impediments faced by the teams are not formally captured and tracked to closure.

In most teams, significant deviations from plan are not monitored for appropriate corrective actions.

It may not be possible to ensure that identified impediments are actioned upon.

It cannot be ensured that appropriate actions are taken when there is a deviation.

RSK In some teams, although risks are captured at Program level, identification of risks at team level needs to be strengthened.

In most teams, risks identified are not revisited periodically for any change in risk parameters.

Proactive measures may not be taken at the right time to mitigate the risks.

The current status of risks is not known to identify focus areas and initiate appropriate actions.

OT In a few teams, periodic review with senior management on the progress, risks and challenges is not evidenced.

In a few teams, monitoring of KPIs on a defined frequency for improvement actions is not evidenced.

Timely support from senior management cannot be ensured.

It cannot be ensured that appropriate actions are identified to improve the effectiveness of learning and development activities over a period of time.

CAR In a few teams, root cause analysis for field issues to identify improvement actions is not evidenced.

In most teams, actions derived from root cause analysis that are proven effective are not established as process improvements.

It may not be possible to prevent repeated occurrence of the same negative outcomes.

Benefits resulting from the implemented actions may not be leveraged to improve ways of working.

DAR In some teams, criteria for evaluation of alternate approaches/solutions and the selected outcome of critical decisions are not demonstrated.

Probability of selecting an optimal solution may not be ensured for critical decisions when applicable.

CM In most teams, the version control systems used to store various artifacts are not mentioned for ease of retrieval.

Retrieval of backups to ensure that the right version of a file is restored is not evidenced.

The location where the various artifacts are stored may not be known.

It may not be ensured that the right version of information is restored when needed.

GOV In a few teams, Product Owners are not aware of the mode of capturing discussion points out of periodic meetings.

It cannot be ensured that action items are effectively tracked to closure.

II A mechanism to contribute lessons learnt, best practices and process improvements back to the Organization is not evidenced.

Potential benefits of adopting new ways of working cannot be leveraged across all teams.

PQA Product quality reviews and usage of quality gates like DoR and DoD need to be strengthened.

It may not be ensured that the product meets customer requirements.

PCM Evaluation of process improvement effectiveness on North Star and team level KPIs is not demonstrated.

Piloting/deployment and communication of software improvement actions that involve radical change need to be strengthened.

Impact of process improvement on KTI Drivers may not be known.

Vital inputs from pilot results that can be used to fine-tune the deployment plan may not be available for use.

MPM In some teams, there is low awareness of the interpretation of KPI outcomes in line with team activities.

In most teams, inference is not drawn from the KPI dashboard to identify action items for improvement.

It may not be ensured that performance objectives are met.

SAM In most instances, technical reviews of supplier deliverables are not evidenced.

In some instances, monitoring of supplier processes is not evidenced.

It may not be ensured that supplier deliverables are of the right quality.

There may not be visibility into supplier capability and performance to minimize risks.


Appendix 6. Gap Analysis of Current State Analysis After IR3


Appendix 7. Summarised Findings of Final Appraisal

Business Area All teams

Final Review Date 10-11-2020

Findings:

TS • Tools such as Confluence, Jenkins, git, VersionOne, Jama used as part of ways of working in the engineering lifecycle model across project teams

• Jama tool extensively used by the team with references to class diagrams and basic sequence diagrams

• Tools such as Coverity used for static analysis across project teams

None identified

• Tools such as the McCabe complexity tool could be used for measuring the code maintainability index and to explore possibilities for minimizing code refactoring efforts

• A code health dashboard could be developed and used for analyzing the overall health of the code, covering key metrics such as code coverage, cyclomatic complexity, maintainability index, code debt and depth of inheritance, and this could be integrated with the KPI Dashboard (a calculation sketch follows this list)

• Select technical artifacts capturing details such as scenarios, hardware I/O, electrical drawing availability and interface components could be templatized to ensure uniformity
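As a hedged illustration of the dashboard metrics mentioned above (not part of the appraisal evidence), the sketch below computes the classic maintainability index from Halstead volume, cyclomatic complexity and lines of code, rescaled to 0-100 as many dashboards report it; all input values are hypothetical:

import math

def maintainability_index(halstead_volume: float,
                          cyclomatic_complexity: float,
                          lines_of_code: int) -> float:
    # Classic formula: MI = 171 - 5.2*ln(HV) - 0.23*CC - 16.2*ln(LOC),
    # clamped and rescaled to a 0-100 range.
    mi = (171
          - 5.2 * math.log(halstead_volume)
          - 0.23 * cyclomatic_complexity
          - 16.2 * math.log(lines_of_code))
    return max(0.0, min(100.0, mi * 100 / 171))

# Hypothetical module: Halstead volume 1200, complexity 14, 310 LOC.
print(round(maintainability_index(1200, 14, 310), 1))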

PI Project teams use Jenkins to evolve and implement the integration test strategy

None identified

• A smoke test failures and resolutions database could be integrated with the pertinent tool to instantly access the resolutions of relevant test failures and to select and apply the solutions with less turnaround time (a lookup sketch follows this list)

• A shift-left method could be applied for smoke tests by applying them, based on applicable scope, at the component level to reduce smoke test failures during release testing

• Automation of release notes preparation could be explored
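A minimal sketch (an assumed design, not an existing tool integration) of the suggested failure/resolution lookup, where known smoke test failure signatures map to previously recorded resolutions; the signatures and resolutions are illustrative:

# Illustrative database of known failure signatures and recorded resolutions.
RESOLUTIONS = {
    "ConnectionRefusedError: service-x": "Restart the service-x stub before the suite.",
    "TimeoutError: boot sequence": "Increase the boot timeout variable to 120 s.",
}

def suggest_resolution(failure_message: str) -> str:
    # Return the first recorded resolution whose signature occurs in the failure.
    for signature, resolution in RESOLUTIONS.items():
        if signature in failure_message:
            return resolution
    return "No recorded resolution; triage manually and record the outcome."

print(suggest_resolution("TimeoutError: boot sequence exceeded limit"))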

RDM • INVEST model used to capture user story details comprehensively across project teams

• Project teams use Confluence to capture feature specifications

• Granular level requirements could also be assigned requirement IDs explicitly

• Methods such as Kano and MoSCoW could be used for requirements elicitation and prioritization

• A requirements elicitation question bank could be formulated and used for attaining requirements clarity

• Criteria for selecting review methods such as peer review and SME review could be defined and used

VV • Robot Framework test automation used for component, integration & release testing, with the capacity of executing 2000 test cases

• In-house validation performed
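For context only, a minimal sketch of invoking Robot Framework suites programmatically via its Python entry point (assumes the robot package is installed; the suite paths and tag are hypothetical):

from robot import run

# Run component- and integration-level suites tagged 'smoke',
# writing logs and reports to a dedicated results directory.
rc = run(
    "tests/component",
    "tests/integration",
    include=["smoke"],
    outputdir="results",
)
print("Robot Framework exit code:", rc)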