

ADAPTIVE ENTERPRISES INTERWEAVING THE DELIBERATE AND EMERGENT - CONCEPTS AND MODELS

Gabrielle Mary Peko

A thesis submitted in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Information Systems

THE UNIVERSITY OF AUCKLAND
2017

Abstract

The global business world is becoming increasingly complex and is characterised by rapid and unpredictable change. This unpredictability means that enterprises are being challenged at all levels. Customers, employees, partners, investors and society are all sources of uncertainty, resulting in the need for enterprises to be adaptive. Traditional deliberate strategies based on cycles of stability and predictability are no longer relevant for today's business environments. Emergent strategies have been proposed by many as the answer. However, the thesis of this research is that enterprises need to interweave the deliberate with the emergent in terms of strategy, organisational structures, business processes, and information systems to be truly adaptive.

A review of current research literature suggests that the adaptive theme in the context of enterprise success is well supported by a mature body of research that spans a number of disciplines, namely management, operations management and information systems, resulting in the advancement of a diverse array of perspectives. Specifically, management researchers pioneered the deliberate and emergent concepts in the context of strategy and organisational management, whereas the focus of operations management research is on the management of business processes to achieve enterprise success. Similarly, there is a plethora of information systems literature on adaptive systems and the like. However, despite this multi-disciplinary body of knowledge, there is a significant lacuna in terms of concepts, models and hypotheses that intrinsically, fundamentally, and seamlessly weave the deliberate and emergent aspects to support an adaptive enterprise. Consequently, there is a need for both theoretical insights and practical applications that integrate the management, operations and information perspectives to enable the transformation from enterprises with deliberate or emergent orientations to enterprises that are adaptive.

This thesis explores and proposes how the interweaving of the deliberate with the emergent to be adaptive could be conceived and realised in terms of strategy, organisation, process, and information. A multi-methodological approach made up of observation, theory building, and validation through a Delphi study and survey was adopted to accomplish the research objectives. The research is inter-disciplinary in nature and spans management, operations management, and information systems. Because of this, an exploratory approach was used, which consisted of a comprehensive literature review followed by a multi-round Delphi study of industry and academic experts. The Delphi resulted in a number of key research artefacts: structural, behavioural and transformational concepts, models and hypotheses. This exploratory study was followed by an explanatory study that attempted to further validate and refine some of these key artefacts through a survey. The analysis showed that some of the models and hypotheses were strongly supported while others were partially supported.
Furthermore, the concepts and models proposed in the research were evaluated, validated, and refined through expert feedback and empirical testing, and disseminated through presentations and publications in journals, book chapters, and conference proceedings. There are some limitations to the study. First, the scope of the PhD research prevented the implementation of the models in a real-world context through a field study. Second, the transformation cycle, being a dynamic systems model, could be further validated through a system dynamics implementation.

Acknowledgements

I dedicate this PhD to Mum, Dad, and family. My heartfelt thanks to my supervisor, Professor David Sundaram, for his expert guidance, enduring support, inspiration and friendship. I would also like to thank my supervisory and advisory teams, made up of Professor Michael Myers, Dr. James Dong, Dr. Hendrik Reefke, Dr. Laszlo Sajtos and Associate Professor Tiru Arthanari, for their specialist advice on qualitative and quantitative research. Special credit goes to my dear friend Jenny Henshaw for the countless hours she spent proofreading various versions of my PhD. I would also like to acknowledge my thanks and debt to the University of Auckland, the Department of Information Systems and Operations Management and, in particular, Professors Michael Myers and Tava Olsen for making it possible for me to pursue doctoral studies while working full time. Finally, I would like to thank the many individuals who participated in this research. Sincere thanks to the Delphi experts and pilot testers who gave so readily of their time and knowledge. Special thanks to all my friends and colleagues for their interest, support, encouragement and understanding.

Overview of Contents

Abstract ..... i
Acknowledgements ..... ii
Overview of Contents ..... iii
Table of Contents ..... v
List of Figures ..... xiii
List of Tables ..... xvi
Abbreviations ..... xix
1 Introduction ..... 1
   Practical Issues ..... 4
   Research Questions ..... 5
   Research Problems ..... 6
   Research Objectives ..... 9
   Research Approach ..... 9
   Research Contributions ..... 10
   Structure of the Thesis ..... 11
2 Literature Review ..... 14
   Framework and Models ..... 14
   Strategy ..... 17
   Business Processes ..... 22
   Organisational Structures ..... 27
   Information Systems ..... 29
   Transformation Models ..... 38
   Practical Issues, Research Problems and Requirements ..... 43
   Chapter Summary ..... 49
3 Research Methodology ..... 52
   Research Questions ..... 53
   Research Objectives ..... 54
   Research Philosophy ..... 55
   Multi-methodological Research Frameworks ..... 60
   The Research Cycles ..... 65
   Refined Research Artefacts ..... 70
   Summary ..... 70
4 Exploratory Delphi ..... 71
   Expert Selection ..... 73
   Round One: Identification of AE Factors and Models ..... 77
   Round Two: Rating of AE Factors and Models ..... 94
   Round Three: Refinement of the AE Key Themes and Factors ..... 122
5 Explanatory Survey ..... 133
   Conceptual Foundations ..... 134
   Survey Design ..... 138
   Survey Administration ..... 153
   Analysis of Results ..... 157
   Factor Analysis ..... 162
   Confirmatory Factor Analysis ..... 175
   Hypothesis Testing Using Structural Equation Modelling ..... 186
   SEM Adaptive Zone Measurement Model and Behavioural Flows Measurement Model Stratified Data ..... 193
   Hypotheses Testing Procedure ..... 196
   Summary ..... 202
6 Discussion of the Research Findings ..... 204
   6.1 Adaptive Enterprise Models ..... 205
   6.2 Key Components Model ..... 208
   6.3 The Structural Elements Model ..... 212
   6.4 The Behavioural Flows Model ..... 214
   6.5 The Adaptive Zone Model ..... 216
   6.6 The Adaptive Enterprise Transformation Cycle ..... 217
   6.7 Enablers, Disablers and Characteristics of AE ..... 219
   Further Refinement and Validation of Concepts and Models ..... 229
   Additional Thoughts and Perspectives ..... 243
   Summary ..... 244
7 Conclusion ..... 245
   Overview of the Research ..... 246
   Key Research Artefacts ..... 248
   Fulfilled Research Requirements ..... 252
   Research Contributions ..... 256
   Research Limitations ..... 261
   Future Research ..... 263
   Concluding Remarks ..... 264
Appendix A AE Delphi Study Round One ..... 267
Appendix B AE Delphi Study Round Two ..... 327
Appendix C AE Delphi Study Round Three ..... 358
Appendix D Survey Questionnaire ..... 376
Appendix E Survey Factor Analysis ..... 419
References ..... 518

Table of Contents

Abstract ..... i
Acknowledgements ..... ii
Overview of Contents ..... iii
Table of Contents ..... v
List of Figures ..... xiii
List of Tables ..... xvi
Abbreviations ..... xix
1 Introduction ..... 1
   Practical Issues ..... 4
   Research Questions ..... 5
   Research Problems ..... 6
   Research Objectives ..... 9
   Research Approach ..... 9
   Research Contributions ..... 10
   Structure of the Thesis ..... 11
2 Literature Review ..... 14
   Framework and Models ..... 14
   Strategy ..... 17
      Deliberate Strategy ..... 18
      Emergent Strategy ..... 20
      Adaptive Strategy ..... 21
   Business Processes ..... 22
      Deliberate ..... 22
      Emergent ..... 24
      Adaptive ..... 24
   Organisational Structures ..... 27
      Deliberate ..... 28
      Emergent ..... 28
      Adaptive ..... 28
   Information Systems ..... 29
      Deliberate Information Systems ..... 30
      Emergent Information Systems ..... 32
      Adaptive Information Systems ..... 34
   Transformation Models ..... 38
      Deliberate Transformation Models ..... 38
      Emergent Transformation Models ..... 39
      Adaptive Transformation Models ..... 41
   Practical Issues, Research Problems and Requirements ..... 43
      Practical Issues and Research Problems ..... 44
         2.7.1.1 Structure Related Issues and Problems ..... 45
         2.7.1.2 Behaviour Related Issues and Problems ..... 46
         2.7.1.3 System Perspective Related Issues and Problems ..... 46
         2.7.1.4 Transformation Related Issues and Problems ..... 46
      Key Research Requirements ..... 47
      Summary of Practical Issues, Research Problems and Requirements ..... 49
   Chapter Summary ..... 49
3 Research Methodology ..... 52
   Research Questions ..... 53
   Research Objectives ..... 54
   Research Philosophy ..... 55
      Qualitative and Quantitative Research Methods ..... 55
      Multi-Methodological Research ..... 57
      Interpretive Research in Information Systems ..... 58
   Multi-methodological Research Frameworks ..... 60
      Adapted Research Framework ..... 63
      Application of Research Philosophy and Framework ..... 64
   The Research Cycles ..... 65
      Literature Review ..... 66
      Delphi Study ..... 67
      Survey ..... 69
   Refined Research Artefacts ..... 70
   Summary ..... 70
4 Exploratory Delphi ..... 71
   Expert Selection ..... 73
      Panel Characteristics ..... 73
      Size of Expert Panel ..... 74
      Expert Inclusion Criteria ..... 75
      Invitation and Selection ..... 76
   Round One: Identification of AE Factors and Models ..... 77
      Design and Development of Questionnaire for Round One ..... 78
         4.2.1.1 Pilot Test Questionnaire ..... 78
         4.2.1.2 Final Design for Round One ..... 80
      Administration of Round One ..... 82
      Distribution of Responses ..... 82
         4.2.3.1 Response Rates ..... 83
         4.2.3.2 Background Information of Experts ..... 83
      Analysis of Responses ..... 85
         4.2.4.1 Outline of the Analysis Process ..... 86
         4.2.4.2 Analysis of Delphi Responses Round One ..... 88
      Summary of Delphi Round One ..... 93
   Round Two: Rating of AE Factors and Models ..... 94
      Design of Questionnaire for Round Two ..... 95
         4.3.1.1 Development of Rating Scales ..... 95
         4.3.1.2 Pilot Testing of Round Two ..... 97
         4.3.1.3 Final Design of Round Two ..... 98
      Distribution of Responses ..... 101
         4.3.2.1 Response Rates ..... 101
      Analysis of the Responses ..... 103
         4.3.3.1 Round Two Analysis Process ..... 103
         4.3.3.2 Round Two Analysis Process – Collection and Ranking ..... 104
         4.3.3.3 Determining Consensus ..... 105
         4.3.3.4 Delphi Round Two – Consensus and Findings ..... 106
      Validation and Refinement of Models ..... 109
         4.3.4.1 Validation of the Key Components Models ..... 109
         4.3.4.2 Quantitative Validation of the Structural Elements Model ..... 110
         4.3.4.3 Qualitative Validation of the Structural Elements Model ..... 110
         4.3.4.4 Refinement of the Structural Elements Model ..... 112
         4.3.4.5 Quantitative Validation of the Behavioural Flows Model ..... 112
         4.3.4.6 Qualitative Validation of the Behavioural Flows Model ..... 113
         4.3.4.7 Refinement of the Behavioural Flows Model ..... 115
         4.3.4.8 Quantitative Validation of the Adaptive Zone Model ..... 115
         4.3.4.9 Qualitative Validation of the Adaptive Zone Flows Model ..... 117
         4.3.4.10 Refinement of the Adaptive Zone Model ..... 117
         4.3.4.11 Quantitative Validation of the Adaptive Enterprise Transformation Cycle ..... 118
         4.3.4.12 Qualitative Validation of the Adaptive Enterprise Transformation Cycle ..... 119
         4.3.4.13 Refinement of the Adaptive Enterprise Transformation Cycle ..... 121
      Summary of Delphi Round Two ..... 121
   Round Three: Refinement of the AE Key Themes and Factors ..... 122
      Design of Questionnaire for Round Three ..... 123
         4.4.1.1 Selection of Factors for Rerating ..... 123
         4.4.1.2 Providing Feedback to the Panellists ..... 124
         4.4.1.3 Final Design for Round Three ..... 125
      Distribution of Responses ..... 126
         4.4.2.1 Response Rates ..... 127
      Analysis of the Responses ..... 127
         4.4.3.1 Termination of the Delphi Study ..... 130
      Summary of Round Three ..... 131
      Summary of the Delphi Study ..... 132
5 Explanatory Survey ..... 133
   Conceptual Foundations ..... 134
      Hypothesis Models and Associated Hypotheses ..... 135
      Formulated Hypotheses ..... 137
   Survey Design ..... 138
      Survey Objectives ..... 139
      Design Features ..... 139
      Survey Question Design ..... 142
      Question Content ..... 144
      Multi-dimensional Measurement Scales ..... 145
      Question Scales ..... 148
      Pilot Testing ..... 149
         5.2.7.1 Pilot Test Survey Experts ..... 149
         5.2.7.2 Pilot Test with Potential Respondents ..... 150
      Final Design of the AE Survey ..... 152
   Survey Administration ..... 153
      Selection of Respondents ..... 154
      Online AE Survey Administration ..... 154
      Sample Demographics ..... 155
   Analysis of Results ..... 157
      Assessment of the Survey Data ..... 158
      Missing Data and Outliers ..... 158
      Sample Size ..... 160
      Normality Assessment ..... 161
      Correlations ..... 162
   Factor Analysis ..... 162
      Measurement Model Assessment ..... 163
      Exploratory Factor Analysis ..... 164
         5.5.2.1 The EFA Process ..... 165
         5.5.2.2 EFA Assessment Adaptive Zone Measurement Model ..... 166
         5.5.2.3 Final EFA Adaptive Zone Measurement Model ..... 169
         5.5.2.4 EFA Assessment Behavioural Flows Measurement Model ..... 171
         5.5.2.5 Final EFA Behavioural Flows Measurement Model ..... 172
         5.5.2.6 Summary of the EFA Results ..... 174
   Confirmatory Factor Analysis ..... 175
      Develop Theory and Measurement Model ..... 175
         5.6.1.1 CFA Adaptive Zone Measurement Model ..... 176
         5.6.1.2 Specify Statistic Package Parameters ..... 177
         5.6.1.3 Compute and Examine Model Fit Indices ..... 177
         5.6.1.4 Report Model Fit Indices & Parameter Estimates ..... 178
         5.6.1.5 Determine Model Validity ..... 178
         5.6.1.6 Internal Consistency Reliability ..... 179
            5.6.1.6.1 Convergent Validity ..... 179
            5.6.1.6.2 Discriminant Validity ..... 180
            5.6.1.6.3 Nomological Validity ..... 180
         5.6.1.7 CFA Behavioural Flows Measurement Model ..... 181
         5.6.1.8 Compute and Examine Model Fit Indices ..... 182
         5.6.1.9 Report Model Fit Indices & Parameter Estimates ..... 183
            5.6.1.9.1 Determine Model Validity ..... 183
            5.6.1.9.2 Internal Consistency Reliability ..... 184
            5.6.1.9.3 Convergent Validity ..... 184
            5.6.1.9.4 Discriminant Validity ..... 184
            5.6.1.9.5 Nomological Validity ..... 185
      Summary of the CFA Results ..... 186
   Hypothesis Testing Using Structural Equation Modelling ..... 186
      SEM ..... 187
         5.7.1.1 SEM Adaptive Zone Measurement Model ..... 188
         5.7.1.2 SEM Adaptive Zone Measurement Model Mediation Effects ..... 189
      SEM Behavioural Flows Measurement Model Specification ..... 191
      SEM Adaptive Zone Measurement Model and Behavioural Flows Measurement Model Stratified Data ..... 193
      Parameter Estimates ..... 194
      Summary of the SEM Results ..... 196
   Hypotheses Testing Procedure ..... 196
      Hypotheses Tests: Adaptive Zone Hypotheses Model H1 to H5 ..... 197
      Hypotheses Tests: Behavioural Flows Hypotheses Model H6 to H9 ..... 200
   Summary ..... 202
6 Discussion of the Research Findings ..... 204
   6.1 Adaptive Enterprise Models ..... 205
   6.2 Key Components Model ..... 208
   6.3 The Structural Elements Model ..... 212
   6.4 The Behavioural Flows Model ..... 214
   6.5 The Adaptive Zone Model ..... 216
   6.6 The Adaptive Enterprise Transformation Cycle ..... 217
   6.7 Enablers, Disablers and Characteristics of AE ..... 219
      Enablers of an AE ..... 220
      Disablers of an AE ..... 225
      Characteristics of an AE ..... 226
   Further Refinement and Validation of Concepts and Models ..... 229
      Further Refinement and Validation of the Adaptive Zone Model ..... 230
         6.8.1.1 The Hypothesis Test of the Adaptive Zone Model ..... 231
         6.8.1.2 The EFA ..... 231
         6.8.1.3 The CFA ..... 232
         6.8.1.4 The SEM ..... 232
         6.8.1.5 Results for the Adaptive Zone Model Hypothesis Test ..... 233
         6.8.1.6 Three Integrated Key Components ..... 234
         6.8.1.7 Information Systems Component ..... 234
         6.8.1.8 Intrinsic and Extrinsic Adaptive Behaviours ..... 236
      Further Refinement and Validation of the Behavioural Flows Model ..... 237
         6.8.2.1 The Hypothesis Test of the Behavioural Flows Model ..... 237
         6.8.2.2 The EFA ..... 238
         6.8.2.3 The CFA ..... 239
         6.8.2.4 The SEM ..... 239
         6.8.2.5 Results for the Behavioural Flows Model Hypothesis Test ..... 240
         6.8.2.6 Integrated Behavioural Flows ..... 240
         6.8.2.7 Analysis and Decision Latency ..... 241
         6.8.2.8 Multi-Dimensional Intrinsic Adaptive Behaviours ..... 242
   Additional Thoughts and Perspectives ..... 243
   Summary ..... 244
7 Conclusion ..... 245
   Overview of the Research ..... 246
   Key Research Artefacts ..... 248
      Key Components Model ..... 249
      Structural Elements Model ..... 249
      Behavioural Flows Model ..... 250
      Adaptive Zone Model ..... 250
      Adaptive Enterprise Transformation Cycle ..... 251
   Fulfilled Research Requirements ..... 252
   Research Contributions ..... 256
      Academic Research Contributions ..... 256
      Enterprise Designers and Management Consultants ..... 257
      Enterprise Decision Makers ..... 258
      Enterprise Entities ..... 258
      Information and Communication Technology Vendors ..... 259
      Dissemination and Evaluation ..... 259
   Research Limitations ..... 261
      Research Scope ..... 261
      Practical Application of Research Artefacts ..... 261
      AE Survey ..... 262
      Lack of Measurement Scales ..... 262
   Future Research ..... 263
      Proposed Research Artefacts ..... 263
      AE Empirical Research and Measurement Scales ..... 264
   Concluding Remarks ..... 264
Appendix A Delphi Study Round One ..... 267
   Appendix A1 Ethics Approval ..... 268
   Appendix A2 Participant Information Sheet ..... 269
   Appendix A3 Pilot Tests ..... 271
   Appendix A4 Invitation to Participate ..... 273
   Appendix A5 Round One Questionnaire ..... 274
   Appendix A6 Raw Responses ..... 279
   Appendix A7 Nuggets ..... 291
   Appendix A8 Mind Maps ..... 306
Appendix B Delphi Study Round Two ..... 327
   Appendix B1 Pilot Tests ..... 328
   Appendix B2 Invitation to Participate ..... 329
   Appendix B3 Round Two Questionnaire ..... 330
   Appendix B4 Ratings and Consensus Analysis ..... 354
Appendix C Delphi Study Round Three ..... 358
   Appendix C1 Invitation to Participate ..... 359
   Appendix C2 Round Three Questionnaire ..... 360
   Appendix C3 Ratings and Consensus Analysis ..... 373
Appendix D Survey Questionnaire ..... 376
   Appendix D1 Participant Information Sheet ..... 377
   Appendix D2 Pilot Test Invitation and Instructions ..... 378
   Appendix D3 Survey Questionnaire ..... 379
   Appendix D4 Descriptive Statistics Analysis ..... 394
Appendix E Survey Factor Analysis ..... 419
   Appendix E1 EFA ..... 420
   Appendix E2 CFA ..... 430
   Appendix E3 SEM ..... 439
   Appendix E4 Stratified SEM ..... 458
References ..... 518

List of Figures

Figure 1.1: Sources of Uncertainty (Dale, 2007) ..... 1
Figure 1.2: Time to Execute (Dale, 2007) ..... 2
Figure 1.3: Deliberate and Emergent Strategy (Mintzberg, Quinn & Goshal, 1998) ..... 4
Figure 1.4: The Research Lacunas ..... 8
Figure 1.5: Structure of Thesis ..... 12
Figure 2.1: The MIT90s Framework (Scott Morton, 1991) ..... 14
Figure 2.2: Structure of the Literature Review ..... 16
Figure 2.3: Deliberate and Emergent Strategy (Mintzberg and Waters, 1985) ..... 18
Figure 2.4: Traditional Strategy Planning System (Lewis & Loebbaka, 2008) ..... 19
Figure 2.5: Change Process (Lewin, 1958) in an Operational BP Improvement Context ..... 23
Figure 2.6: Organisational Decision-Making as Anarchical Being Driven by Event ..... 25
Figure 2.7: Balance of Flexibility and Stability ..... 26
Figure 2.8: Traditional and Adaptive Organisational Structures (SAP, 2000a) ..... 29
Figure 2.9: Systems in an Organisation (Scheer, 1998) ..... 30
Figure 2.10: The SAP R/3 System and the Business Framework (Buck-Emden, 2000) ..... 31
Figure 2.11: Fundamentals of EDA (Van Praag, 2007) ..... 33
Figure 2.12: IBM Layers of the SOA Reference Architecture (Arsanjani et al., 2014) ..... 33
Figure 2.13: Evolution of BPM (Dale, 2007) ..... 34
Figure 2.14: Fundamentals of SOA and ESOA (Van Praag, 2007) ..... 35
Figure 2.15: Oracle's Adaptive System Architecture (Oracle, 2006) ..... 36
Figure 2.16: Principles of Composite Design (Sun, 2008) ..... 37
Figure 2.17: Real Time Enterprise Architecture (Van Praag, 2007) ..... 37
Figure 2.18: Strategic Management Process (SAP, 2000b) ..... 38
Figure 2.19: Business Process Transformation Roadmap (Klueckmann, 2008) ..... 39
Figure 2.20: Emergent Transformation Model (Keen et al., 2006) ..... 40
Figure 2.21: Micro Level Transformation Process (Bischoff, 2008) ..... 40
Figure 2.22: Model Driven Business Transformation Framework (Kumaran et al., 2007) ..... 42
Figure 2.23: Four Steps of an Adaptive Business Network (Heinrich & Betts, 2003) ..... 43
Figure 2.24: Overview of Practical Issues, Overarching Research Problems, and Key Research Requirements ..... 50
Figure 3.1: Structure of the Research Methodology Chapter ..... 52
Figure 3.2: The Interplay between Exploratory Qualitative Research and Explanatory Quantitative Research ..... 56
Figure 3.3: The Multi-methodological Research Roadmap ..... 60
Figure 3.4: A Multi-Methodological Approach to IS Research (Nunamaker et al., 1990) ..... 61
Figure 3.5: A Multi-Methodological Approach to IS Research (Hevner et al., 2004) ..... 62
Figure 3.6: Multi-Methodological Cyclical Approach of the AE Research (adapted from Nunamaker et al., 1990, and Hevner et al., 2004) ..... 64
Figure 3.7: The Three Research Cycles of Observation, Theory Building, and Validation ..... 66
Figure 4.1: Structure of the Exploratory Delphi Chapter ..... 72
Figure 4.2: The Analytical Process of Round One Delphi (adapted from Miles et al., 2014) ..... 87
Figure 4.3: The Creation of Initial AE Concepts and Models Through Analysis and Synthesis of Delphi Results and Literature ..... 89
Figure 4.4: Excerpt of Analysis Step 3 and Step 4 for Round One Question 2 Responses ..... 91
Figure 4.5: Excerpt of Analysis Step 6 for Round One Question 2 Responses ..... 93
Figure 4.6: Outline of the Analysis Process for Delphi Round Two ..... 104
Figure 4.7: Process Map of the Determine Consensus Process Including the Criteria for the Delphi Study ..... 107
Figure 4.8: Outline of the Analysis Process for Delphi Round Three ..... 128
Figure 5.1: Structure of the Survey Chapter ..... 134
Figure 5.2: Hypothesis Models of the Adaptive Zone ..... 136
Figure 5.3: Hypothesis Models of the Behavioural Flows ..... 136
Figure 5.4: The Multi-Dimensional Constructs of the Survey Research ..... 146
Figure 5.5: Hypothesis Models of the Adaptive Zone and the Construct Measures ..... 147
Figure 5.6: Hypothesis Model of the Behavioural Flows and the Construct Measures ..... 148
Figure 5.7: The Phased Analysis Process for the Survey ..... 158
Figure 5.8: The Factor Analysis Process (adapted from Furr, 2011) ..... 163
Figure 5.9: The Screeplot of Factors Underlying the EFA AZM Model Dataset ..... 167
Figure 5.10: The Screeplot of Factors Underlying the EFA BFM Model Dataset ..... 172
Figure 5.11: The CFA Process (adapted from Furr, 2011) ..... 175
Figure 5.12: The a priori CFA AZM Model and the Hypothesised Relationships ..... 176
Figure 5.13: The a priori CFA BFM Model and Hypothesised Relationships ..... 182
Figure 5.14: The SEM Process (adapted from Furr, 2011) ..... 187
Figure 5.15: The SEM AZM Model Path Structure and Parameter Estimates ..... 189
Figure 5.16: The SEM AZM Model Partially Mediated Path Structure and Parameter Estimates ..... 191
Figure 5.17: The SEM BFM Model Path Structure and Parameter Estimates ..... 192
Figure 6.1: Structure of the Discussion Chapter ..... 204
Figure 6.2: The Incremental Development of the AE Models ..... 206
Figure 6.3: The Key Components Model ..... 208
Figure 6.4: The Structural Elements Model ..... 213
Figure 6.5: The Behavioural Flows Model ..... 214
Figure 6.6: The Adaptive Zone Model ..... 216
Figure 6.7: The Adaptive Enterprise Transformation Cycle ..... 218
Figure 6.8: The Enablers, Disablers, and Characteristics of AE ..... 220
Figure 6.9: The Concepts and Models Subjected to Hypothesis Testing ..... 230
Figure 6.10: The Adaptive Zone Hypothesis Models Constructs and Variables ..... 231
Figure 6.11: The Redefined Constructs and Variables of the Adaptive Zone Hypothesis Models ..... 232
Figure 6.12: The Behavioural Flows Hypothesis Models' Constructs and Variables ..... 238
Figure 6.13: The Behavioural Flows Hypothesis Models Redefined Constructs and Variables ..... 239
Figure 7.1: Structure of the Conclusion Chapter ..... 245
Figure 7.2: The Research Lacuna ..... 247

List of Tables

Table 2.1: Issues, Problems and Requirements (IPR) Categories ..... 45
Table 4.1: Delphi Round One Response Rates ..... 83
Table 4.2: Delphi Round One Organisation Types ..... 84
Table 4.3: Delphi Round One Organisational Headcount ..... 85
Table 4.4: Step 1 Prepared Text Data From Expert E26 ..... 90
Table 4.5: Step 2 Nuggets From Expert E26 Based on Table 4.4 ..... 90
Table 4.6: Delphi Round Two Response Rates ..... 101
Table 4.7: Delphi Round Two Organisation Types ..... 102
Table 4.8: Delphi Round Two Organisational Headcount ..... 102
Table 4.9: Rating Scale Values for Delphi Study ..... 105
Table 4.10: Round Two Statistics of the AE Structural Elements Model ..... 110
Table 4.11: Round Two Verbatim Comments on the AE Structural Elements Model ..... 111
Table 4.12: Round Two Statistics of the AE Behavioural Flows Model ..... 112
Table 4.13: Round Two Verbatim Comments on the Behavioural Flows Model ..... 114
Table 4.14: Round Two Statistics of the Adaptive Zone Model ..... 116
Table 4.15: Round Two Verbatim Comments on the Adaptive Zone Model ..... 117
Table 4.16: Round Two Statistics of the Adaptive Enterprise Transformation Cycle ..... 118
Table 4.17: Round Two Verbatim Comments on the Adaptive Zone Model ..... 120
Table 4.18: Delphi Round Three Response Rates ..... 127
Table 4.19: Delphi Round Three Rerated Factors that Did Not Achieve Consensus ..... 129
Table 5.1: AE Hypotheses Tested in the Survey ..... 138
Table 5.2: AE Survey Industry/Enterprise Types ..... 156
Table 5.3: Total Number of Employees in the Enterprise ..... 157
Table 5.4: Number of Employees in the Enterprise Department ..... 157
Table 5.5: AE Survey Country Location of Respondents ..... 158
Table 5.6: EFA AZM Model Four Factor Solution and Factor Loadings ..... 170
Table 5.7: EFA BFM Model Four Factor Solution and Factor Loadings ..... 173
Table 5.8: CFA AZM Model Evaluation Indices ..... 178
Table 5.9: CFA AZM Model Measures of Internal Consistency Reliability and Convergent Validity ..... 179
Table 5.10: Results of the CFA AZM Model Fornell-Larcker Criteria Test for Discriminant Validity ..... 180
Table 5.11: CFA AZM Model Latent Variable Correlations ..... 181
Table 5.12: CFA BFM Model Evaluation Indices ..... 183
Table 5.13: Measures of Internal Consistency Reliability and Convergent Validity ..... 184
Table 5.14: Results of the CFA BFM Model Fornell-Larcker Criteria Test for Discriminant Validity ..... 185
Table 5.15: CFA BFM Model Latent Variable Correlations ..... 185
Table 5.16: SEM AZM Model Fittings Path Structure and Parameter Estimates ..... 190
Table 5.17: SEM BFM Model Fittings Path Structure and Parameter Estimates ..... 193
Table 5.18: SEM AZM Model Stratified Fittings Path Structures and Model Fit Indexes ..... 194
Table 5.19: SEM AZM Model Stratified Fittings Path Structures and Mediation Effect Model Indexes ..... 195
Table 5.20: SEM BFM Model Stratified Fittings Path Structures and Model Fit Indexes ..... 196
Table 5.21: The Two Sets of AE Hypotheses Tested in the Survey ..... 197
Table 5.22: First Set of Hypotheses Tested ..... 199
Table 5.23: Latent Variables Standard Estimates and their Significance Used to Test the First Set of Hypotheses ..... 199
Table 5.24: Second Set of Hypotheses Tested ..... 201
Table 5.25: The Latent Variables Standard Estimates and their Significance Used to Test the Second Set of Hypotheses ..... 202
Table 5.26: Results of Hypothesis Tests ..... 203
Table 6.1: Importance Ranking of AE Key Component Factors Strategy and Organisation ..... 210
Table 6.2: Importance Ranking of AE Key Component Factors Process and Information ..... 211
Table 6.3: Importance Ranking of Sense and Respond AE Enablers ..... 221
Table 6.4: Importance Ranking of Flexibility AE Enablers ..... 221
Table 6.5: Importance Ranking of Integrated System AE Enablers ..... 222
Table 6.6: Importance Ranking of Tools and Technologies AE Enablers ..... 222
Table 6.7: Importance Ranking of People AE Enablers ..... 223
Table 6.8: Importance Ranking of AE Disablers ..... 226
Table 6.9: Importance Ranking of AE Characteristics ..... 228
Table 6.10: Results of the Adaptive Zone and Behavioural Flows Hypothesis Tests ..... 233
Table 6.11: Results of the Behavioural Flows Hypothesis Tests ..... 240
Table 7.1: Structural, Behavioural, and Methodological Research Artefacts ..... 254
Table 7.2: Transformational Concepts, Models, and Methodological Research Artefacts ..... 255
Table 7.3: Presentations and Publications of the AE Research ..... 260

Abbreviations

ABN   Adaptive Business Networks
AE   Adaptive Enterprise/Enterprises
AE1   Adaptive Enterprise 1
AE2   Adaptive Enterprise 2
AE2B   Adaptive Enterprise 2B
AE3   Adaptive Enterprise 3
ARIS   Architectures of Integrated Information Systems
AVE   Average Variance Extracted
BAM   Business Activity Monitoring
BP   Business Process/Processes
BPM   Business Process Management
BHP   Broken Hill Proprietary Company Limited
BRM   Business Rules Management
CEP   Complex Event Processing
CFA   Confirmatory Factor Analysis
CFI   Comparative Fit Index
CR   Composite Reliability
DADL   Data analysis, Data latency
EDA   Event Driven Architecture
EFA   Exploratory Factor Analysis
ERP   Enterprise Resource Planning
ES   Enterprise System
ESOA   Enterprise Service Oriented Architecture
FA   Factor Analysis
FC   Fletcher Challenge
H   Hypothesis/Hypotheses
H2H   Human to Human
H2S   Human to System
HR   Human Resources
IB   Interwoven Behaviour/Behaviours
ICT   Information and Communication Technology
IBM   International Business Machines
IPR   Issues, Problems and Requirements
IS   Information Systems
ISS   Interwoven Structure and Systems
IT   Information Technology
KPI   Key Performance Indicators
µ   Mean and central tendency
MDBT   Model Driven Business Transformation
ML   Maximum Likelihood
OM   Operations Management
P   P value/calculated probability
PAF   Principal Axis Factoring
PCA   Principal Components Analysis
R   Programming language and software environment for statistical computing
RMSEA   Root Mean Square Error of Approximation
S2H   System to Human
S2S   System to System
SAP   Systems Applications and Products in Data Processing
SC   Supply Chain
SCM   Supply Chain Management
SD   Standard Deviation
SEM   Structural Equation Model/Models/Modelling
SILR   Sense, Interpret, Learn, Respond
SOA   Service Oriented Architecture
SOP   Strategy, Organisation and Process
SOPI   Strategy, Organisation, Process, Information
SPSS   Statistical Package for the Social Sciences
SRMR   Standardised Root Mean Square Residual
TLI   Tucker-Lewis Index
URL   Uniform Resource Locator
USA   United States of America
z   Z score; indicates how many standard deviations an element is from the mean

1 Introduction

We live in a world that is characterised by complexity, unpredictability, and uncertainty, so enterprises that want to compete in the dynamic markets of today need
to be able to respond rapidly to the ever increasing rate of change. Change is a natural law and, according to Charles Darwin's theory of evolution, it is not necessarily the strongest of the species that survives but the species that is best able to adapt and adjust to its changing environment. Darwin's theory is widely accepted and applicable in that change over time affects individuals and institutions, although in different ways; some will survive but many will cease to exist. This is clearly evident in today's markets: using the Fortune 500 list as a measure, it has been estimated that 40% of the companies currently on the list will be gone within the next ten years. Enterprises have to deal with complexity, uncertainty and rapid change at both the macro and the micro levels of their external and internal environment (refer Figure 1.1).
Figure 1.1: Sources of Uncertainty (Dale, 2007)
Some of the changes that have already occurred, and those currently taking place, are: customers are less product loyal and want greater choice and price transparency; there is a global shortage of business talent; and investors demand superior returns on their investment and are more active and flexible in the way they invest. In addition to these changes, society has expectations about the way in which business is conducted, and companies are required to operate ethically. Public opinion can change rapidly and impact negatively on an organisation's profits and sustainability. Added to these issues is the relentless stream of mergers and acquisitions that change an organisation's environment. The volatility of the business environment not only creates uncertainty but also changes the way in which organisations conduct their business (Dale, 2007; Heinrich & Betts, 2003). The change in the way business is conducted means that there is a corresponding change in business models, and in the business processes (BP) that support those models. Consumers demand that their orders are filled quickly, so BP need to be executed more efficiently (refer Figure 1.2). In addition, as previously mentioned, the tastes of the consumer are changing quickly, with the result that product lifecycles that were once measured in years now take only months, if not weeks, to reach maturity or obsolescence.
Figure 1.2: Time to Execute (Dale, 2007)
One way that the enterprise can respond to the challenges of rapid change at the macro and micro level is to envisage the changes at three levels of abstraction. First, there are macro level changes that have an impact on the enterprise and its strategic direction. Second, there will be macro and micro level changes that can impact on the enterprise's organisational structure and processes, or the way things are done. And third, changes can occur that affect the information systems (IS) that will be required to implement and support the processes and any change in strategy (Childe et al., 1994). Most enterprises deal with their strategy, organisational structures, BP, and IS separately, rather than using a cohesive approach. A lack of cohesion can result in serious problems for the enterprise if it is unable to respond quickly and adapt to rapidly changing business conditions (Burnard & Bhamra, 2011; Reeves et al., 2015).
A key reason why enterprises are unable to adapt quickly is that a deliberate rather than emergent approach has been adopted in regard to strategy, organisation, processes, and information (Eisenhardt & Tabrizi, 1995; Reeves & Deimler, 2011). For example, when examining enterprise strategy and the way it is formulated and implemented, it can be seen that the majority of enterprises follow a particular method of strategic thinking. This method is deliberate: the external environment is first assessed and, based on that assessment, a strategy is then formulated. Once the strategy has been formulated and decided upon it is communicated to organisational members, after which it is implemented. After implementation, feedback on the strategy is collected and assessed. Based on this assessment, a new cycle of reformulation, communication, and implementation is followed if required. This strategy process is carried out in an ordered fashion and is, therefore, deliberate. It is so deliberate that if something changes during the process of formulating, implementing, or monitoring the strategy, there is no inbuilt organisational mechanism that allows for a change of plan. Most enterprises use the deliberate strategic approach and so do not have an organisational structure, BP or IS that enables them to react to changes that might occur whilst in the midst of the strategy management lifecycle. So what is the remedy for this? If the deliberate approach is not suited to a changing, or turbulent, environment, could an emergent approach, the ability to react to changing conditions, better serve an organisation? To answer this question Mintzberg, Quinn and Ghoshal (1998) conducted a study and identified that many organisations are so focused on trying to react to the changes in their environment that, for the most part, their deliberate strategy becomes unrealised. While trying to respond to the changing environment, most of the organisation's realised strategy is purely emergent, but an emergent strategy can lead to chaos due to a lack of direction and control (refer Figure 1.3). So the question needs to be asked: which would serve an organisation better, a deliberate approach or an emergent approach? A growing number of academic and industry writers advocate that neither a deliberate nor an emergent approach alone is appropriate for today's environment, but rather that a blend of the two, a deliberate-emergent approach, is best (Eisenhardt & Brown, 1998; Birkinshaw, 2006; Yip & Johnson, 2007).
Figure 1.3: Deliberate and Emergent Strategy (Mintzberg, Quinn & Ghoshal, 1998)
For the sake of simplicity and ease of understanding, a deliberate-emergent approach can be defined as an adaptive approach. That an adaptive approach is better than either a deliberate or an emergent approach is supported by Scheer (2007). However, not all authors agree that an adaptive approach is best, as some argue that a deliberate approach allows for greater certainty in an uncertain environment (Porter, 2008; Moore, 2011). This highlights that there are opposing views about the best way to deal with the current dynamic environment. In the next section, the practical issues evident in the adaptive enterprise (AE) literature are considered and the associated research problems are identified. The research objectives are then proposed.
Practical Issues There is general acceptance amongst today’s business leaders that, arguably, their greatest challenge is how to remain competitive amidst constant turbulence and disruption (Peyret 2007; Peyret & Cullen, 2007; Kotter, 2012; Weichhart et al., 2016). As a consequence, there have been significant efforts to explore the systematic evolution of sustainable, AE systems. Despite this, there is clear evidence that the traditional, hierarchical, enterprise model, characterised by rigid structures and disempowering processes, is still the dominant form, (Chasan, 2014) although this type of enterprise is increasingly less able to survive in today’s turbulent environment (Weichhart et al., 2016). This has not gone unnoticed and many Introduction 5 practitioners agree that an enterprise needs to be an adaptive, evolving system that is sensitive and responsive to its environment in order to meet the challenges of today (Dooley, 1997; Chasan, 2014). Unfortunately, it has not been made explicit how this type of system can be achieved. Practitioners and industry thought leaders have highlighted many reasons why the nonadaptive enterprise form still dominates, albeit in an environment that necessitates change. They offer an array of theories, processes, competencies and technologies that they believe could potentially transform an enterprise and enable superior organisational performance. Enterprise leaders are spoilt for choice in this regard, but the overarching problem is that while an AE is multi-faceted it tends to be viewed from a myopic perspective. For instance, the recent interest in AE is particularly apparent in the IS industry where practitioners are actively engaged in writing about, and promoting, IS and technologies that support adaptive enterprises. Likewise, management consultants emphasise and promote development of an adaptive strategy as being a critical element, along with having an adaptive structure (Leybourn, 2013), and many others focus on the role of the workforce. However, there is little understanding of how organisation structures, processes and systems can be leveraged to support the development of an AE. This means there is a lack of multiperspective, integrated concepts, models and processes to guide practitioners in the design of an AE that can meet today’s challenges (Weichhart et al., 2016; Jeston & Nelis, 2009). Research Questions In order to address the challenge of how to remain competitive in a turbulent and disruptive environment, it is necessary to answer two key questions: What is an Adaptive Enterprise? and How does an enterprise adapt? To answer the first key question of “What is an Adaptive Enterprise?” it is important to know what the characteristics of an AE are and, how an organisation can develop these characteristics (Chasan, 2014). For example, what additional elements need to be added to existing enterprise structures and processes to address growing complexity and rapid change (Kotter 2012) in today’s environment? Is there an optimal organisational design that defines the necessary structures and processes (Leybourn, 2013; Stanford, 2007)? Also, what are the Introduction 6 predominant behaviours of an AE and how do these behaviours influence its purpose (Austrom et al., 2012)? Can an AE be conceptualised as an adaptive system that senses and responds to its environment (Haeckel, 2016)? How does this system operate (Weichhart et al., 2016; Haeckel, 2016)? 
To answer the second key question of “How does an enterprise adapt?” it would be useful to know what the principal elements of the enterprise transformation process are, as well as the relationships among those elements (Stanford 2007). Including, how does the transformation process work (Uhl & Gollenia, 2016)? Identifying the practical problems that exist about enterprises becoming adaptive reveals a number of interesting research opportunities. To some extent, given the high level of interest in this research topic from practitioners and academics alike, the first key question can be partially answered by conducting a rigorous review of the literature. There is a plethora of academic and industry articles in which some of the fundamental elements of an AE have already been identified. These elements are an adaptive strategy, structures, BP and IS, which then help to define what an AE is. However, the specifics of how an enterprise can become adaptive have still to be determined, as although there are some ideas about what an AE is, there are few ideas expressed about how an enterprise can transform itself to become adaptive. Due to the paucity of conceptual models available to guide this investigation it means that concepts, models and processes first need to be developed then augmented with detail, before being validated, extended and refined. Research Problems Over the past two decades the subject of organisational adaptation as a means to survive has been a research topic for academic researchers. There is plenty of published literature that supports the theme of organisations being adaptive and responding to the changing environment. This literature can be found in several research domains such as management, operations, and IS and within topic areas such as Organisational Learning, Lean Organisations, Organisational Development, Systems Thinking, Service Orientated Architectures and Technologies, Internet of Things, and the like. This plethora of research has culminated in a mature body of knowledge that supports the idea that organisations need to adapt to be able to survive. Introduction 7 While much emphasis is on how organisations learn, and the influence that learning has on an organisation’s ability to adapt, a more recent area of research interest has emerged that focuses on AE (Levchuk et al., 2002; Tallon 2008, Austrom et al., 2012; Kotter 2012, Chasan 2014, Weichhart et al., 2016). Together with this interest a number of definitions for an AE have been proposed. Moitra & Ganesh (2005) defined AE as being highly flexible and having the ability to change, or adjust, in almost real time through altering routines and practices in response to environmental changes. Perhaps a more encompassing definition is articulated in Haeckel’s (2016) seminal book, ‘The Adaptive Enterprise’ where he explains that an AE does not try to predict the future and accurately foretell demand, rather it senses changing conditions (customer needs, market shifts) as they occur and responds to emerging opportunities before they dissipate. The assumption is that an AE has the ability to sense and then respond to the changes in the environment, which then ensures its survival. The preliminary overview of the literature highlights that although there is literature written about adaptive strategy there is a paucity of literature in regard to adaptive organisational structures, adaptive processes and adaptive IS, as it seems that very little research has been carried out in these particular areas. 
Therefore, how an adaptive strategy can be supported by adaptive organisational structures, adaptive processes, and adaptive IS needs to be addressed (Quinn, 1980, 1989). There is even less research about how an adaptive strategy can be translated into adaptive organisational structures, BP and IS so that they are seamlessly interwoven and, conversely, how translated and interwoven adaptive IS, processes, and organisational structures support the adaptive strategy. By interweaving the components seamlessly, an AE can simultaneously utilise a deliberate and an emergent approach, which will enable it to be adaptive. As previously mentioned, the key problems for this research are inherent in the components of an organisational system, namely strategies, organisational structures, BP, and IS, and how these evolve and are managed in a deliberate, an emergent or an adaptive way. Adaptive can be broadly defined as an approach that attempts to interweave the components of a deliberate approach with those of an emergent approach. There are issues with how an adaptive strategy can be translated into adaptive organisational structures and adaptive processes so that it can be implemented by adaptive IS. In addition, there is the issue of how adaptive systems can support adaptive processes, adaptive structures and adaptive strategies. These issues are illustrated in Figure 1.4 and are the motivators for this research.
Figure 1.4: The Research Lacunas
In summary, the following overarching research problems have been identified:
• A lack of a coherent understanding of AE components, structures, processes, and practices.
• A lack of a coherent understanding of adaptive approaches in the management and control of AE behaviours.
• A lack of a coherent understanding of a systems perspective, including the enabling factors, structures, and processes, that support the transformation of an enterprise to an AE (Weichhart et al., 2016).
Given the identified research gaps, the research opportunities are both wide ranging and challenging. The objective of this study is to take advantage of these opportunities and produce results that are inter-disciplinary in nature, so that a richer understanding of an AE can be used as a platform for further research.
Research Objectives
There are two primary objectives of this research. The first is to conduct exploratory research in order to define, gain insight into and develop propositions that elucidate the phenomena of an adaptive enterprise. The second is to conduct explanatory research that explains the fundamental elements influencing the evolution of an AE. The purpose of the exploratory research is to define and conceptualise an AE by identifying its inherent elements and characteristics, first through a comprehensive review of the literature, and then by conducting a qualitative research study. The next stage, explanatory research, will explain the various factors (elements and their interactions) inherent in an AE by first developing and then testing hypotheses.
This will enable the creation of empirically tested artefacts to help address the practical issues and research problems, which can be synthesised into the two overarching research questions: "What is an Adaptive Enterprise?" and "How does an enterprise adapt?" The two primary research objectives, of conducting exploratory and explanatory research, can be delineated into six detailed objectives that guide the research and relate to the two overarching research questions. The six research objectives are:
Objective one: Identify and develop concepts of an AE.
Objective two: Identify the key components of an AE.
Objective three: Identify, design and develop structural concepts and models of an AE.
Objective four: Identify, design and develop behavioural concepts and models of an AE.
Objective five: Identify, design and develop transformational concepts and models of an AE.
Objective six: Validate the developed concepts and models of an AE.
Research Approach
Due to the broad nature of the topic being investigated, this study will employ a multi-methodological approach. The two principal paradigms that determine the research are exploratory qualitative research and explanatory quantitative research. The exploratory and explanatory research objectives, and associated research methods, require triangulation under the multi-methodological approach so that connections can be made among the research phases and their outputs. A research framework will be implemented to guide the research cycle and facilitate the creation of research artefacts that can then be applied in practice. Due to the limited amount of knowledge available in the research domain, an exploratory qualitative study will be conducted. The results from this study will be combined with relevant information gleaned from the literature review, enabling the development of AE concepts and models, some of which will be empirically tested by conducting an explanatory, quantitative study, such as a survey, after the conclusion of the qualitative study.
Research Contributions
This multi-disciplinary research will endeavour to provide theoretical and practical insights into the research topic and thus address the research problems that motivated the study. The research is positioned at the nexus of three converging, but often perceived as distinct, disciplines. Research artefacts will be the key outputs that conceptualise and describe the significant elements and relationships of an AE, and provide a foundation for the development of theory (concepts and models). This will assist those academics and industry practitioners who are interested and engaged in the development of AE, as the research artefacts can be both theoretically and practically applied. It is expected that the research findings will contribute to the expansion of knowledge through exploration, appraisal, refinement and description, followed by empirical testing of the structures, behaviours and processes of an AE. It is also expected that stakeholders of an AE can leverage the artefacts to inform, evaluate, implement, develop and continuously transform an AE. There are many potential contributions arising from this research, so it is useful to highlight the most important aspects. The key research contributions will be:
• To provide an informed discussion about the diverse dimensions, identification, and evaluation of the important elements and concepts of an AE.
• The creation and refinement of concepts, models and hypotheses that help in the conceptualisation and illustration of the essential structures, behaviours, and processes that will contribute toward the development of AE. This includes a model that demonstrates a proposed transformation cycle comprising a variety of components and their relationships.
• The creation of unique research artefacts, representative of the management, operations management (OM) and IS disciplines, that improve the domain knowledge about AE.
• A robust qualitative research instrument that demonstrates appropriateness and usability for exploratory research and enables concurrent building and testing of theory. The research instrument will illustrate the capacity to simultaneously glean information, test results, and refine and enhance research artefacts.
• The development of a quantitative instrument that is suitable for statistical analysis, is discipline independent, and can be applied in different contexts to gain valuable insights about AE phenomena from theoretical and practical perspectives.
• Dissemination and validation of the research findings and artefacts through a variety of publications, conference presentations, seminars and research collaboration activities.
Structure of the Thesis
This thesis has two main parts that are divided into seven chapters, as illustrated in Figure 1.5 and outlined in the following section. The two main parts reflect the principal research method paradigms of exploratory research and explanatory research that will be used in the study. The exploratory qualitative and explanatory quantitative research phases are interrelated and consist of 'Observation and Theory Building' and 'Theory Building and Validation'.
Chapter One Introduction: This chapter introduces the research, including the research background. It also provides an overview of the practical issues and research problems that motivated the study, which, in turn, determined the research questions and objectives. A summary of the main contributions and an outline of the thesis structure are provided at the end of the chapter.
Figure 1.5: Structure of Thesis
Chapter Two Literature Review: This chapter provides an account of the relevant literature in the area of AE and presents the theories, concepts and models applicable to the research inquiry. The literature covers the foundational thesis concepts of deliberate, emergent and adaptive in the context of AE structures, behaviours and processes. The chapter ends with a description of the practical and research problems that are identified from the literature and subsequently shaped the research and the remainder of the thesis.
Chapter Three Research Methodology: This chapter discusses the multi-methodological approach adopted for the research. It begins by outlining the research objectives and the justification for the use of the Delphi study to explore the AE phenomena, followed by the survey used to explain and validate the Delphi results. The research design is described, including the methods of data collection and analysis. The chapter concludes with an explanation of the research framework adapted for this study.
Chapter Four Exploratory Study: This chapter provides a detailed description of the multi-round Delphi study, including its design, pilot testing and implementation. The chapter explains the data analysis process and summarises the preliminary results.
The purpose of the Delphi was to discover insights about AE in order to understand the phenomena, and to create and validate concepts and models. These concepts and models became the inputs to the explanatory phase of the research. The chapter ends with a summary of the Delphi study and the exploratory phase of the research.
Chapter Five Explanatory Survey: This chapter gives a detailed explanation of the survey research, including the data analysis and the results of the hypotheses testing. The chapter begins with a discussion of the formulation of the hypotheses that are then tested in order to validate the AE concepts and models. The survey design, pilot testing and administration of the survey are outlined. A description of the statistical factor analysis is then presented, together with the results of the analysis and the ensuing hypotheses testing. The chapter concludes with a summary of this explanatory phase of the research.
Chapter Six Discussion: This chapter provides a discussion of the research findings by first describing and then interpreting the insights gained from the exploratory phase of the research, followed by a discussion of the results from the explanatory phase. As part of the discussion, the research artefacts, namely the AE concepts, models and hypotheses, are described in detail together with their development. The chapter concludes with a summary of the contributions from this research to the body of knowledge about AE.
Chapter Seven Conclusion: The final chapter of the thesis is a synopsis of the overall research. It outlines the conclusions drawn from the study and discusses the implications of the findings and how they fulfil the research objectives. The theoretical outputs can act as a platform for future academic research and industry practice. The last part of the chapter acknowledges the limitations of the study and makes suggestions for future research.
2 Literature Review
This chapter reviews literature from academic and industry sources about the three different approaches that can be used for managing an enterprise in different environments. The available literature has been examined and discussed, with the major contributors identified. Each of the four critical organisational components that need to be considered when managing an enterprise (strategy, organisational structures, BP and IS) is examined in relation to the three management approaches of deliberate, emergent and deliberate-emergent (refer Figure 2.2).
Framework and Models
Scott Morton (1991) suggests that an organisation can be thought of as a complex system of five interrelated forces that is continually adapting to influences from the internal and external environment. This interrelated system comprises the organisation's strategy, structure, processes, individuals and their roles, and technologies. These particular elements, along with the influences of culture and the external socioeconomic and technological environments, enable the organisation to function and evolve (refer Figure 2.1).
Figure 2.1: The MIT90s Framework (Scott Morton, 1991)
This research suggests that although an organisation can be thought of in terms of Scott Morton's MIT90s Framework (1991), it also has the ability to generate strategies that can be translated into adaptive structures and adaptive processes.
These adaptive structures and processes should be populated by adaptive individuals in composite, flexible roles, and the five elements should be supported by IS with the inbuilt capability to adapt (Quinn, 1980, 1989; Jackson, 2016). The elements of strategy, organisational structure, BP, and IS will be used as an overarching structure when discussing the various approaches to the management of these elements in an AE (Chang, 2016).
While there are many available frameworks that include the elements of strategy, organisational structure, BP and IS, most of the frameworks focus on internal alignment of the elements (for example, Kumaran et al., 2007). This alignment is a predetermined, deliberate orientation with cyclical monitoring and adjustment by the organisation. So although there may be some sensing of the external environment, the alignment cycle does not take this into account (Quinn, 1980, 1989; Haeckel, 1995, 2004, 2016). The alignment is carried out in a deliberate way and the management orientation is therefore considered to be a deliberate approach. This deliberate approach is acknowledged by Scheer (2007), but he suggests that organisations may do better to balance it with an emergent approach, rather than using a purely deliberate approach, to offset a dynamic internal and external environment. He proposes a model that illustrates the intensity of control versus the connectivity of the organisation's internal and external groups (refer Figure 2.7). The model suggests that those organisations with traditional, top down, hierarchical management structures have high levels of intensity of control and low connectivity. Thus, these organisations are inflexible, succeed in an environment that is stable and, as such, follow a deliberate approach. In contrast, there are organisations at the bleeding edge that are reactive and flexible, with low intensity of control and high levels of connectivity. These organisations follow an emergent approach. Scheer (2007) suggests that the best place to be is on the "edge of chaos", where organisations balance their flexibility and stability, which equates to a balance between the deliberate and the emergent approaches, as proposed by Mintzberg (1994). This position is deliberate-emergent and is considered to be an adaptive approach. The view that organisations should utilise this adaptive approach is echoed by Eisenhardt and Brown (1998).
There are two key dimensions when discussing the current literature. One dimension is about the elements of an enterprise (Whittle & Myrick, 2016) and the other is about the deliberate versus emergent orientation of management. Scott Morton's (1991) MIT90s framework is used as an overarching structure for the literature review, and each of the four key elements he proposed (strategy, organisational structure, BP and IS) is examined using both the deliberate and the emergent approach. The second dimension is the concept of deliberate and emergent that was introduced by Mintzberg (1994) (refer Figure 2.3) and referred to, albeit in slightly different terms, by Scheer (2007) with his ideas about stability and flexibility (refer Figure 2.7). The literature review is organised using the two dimensions of organisational components and management approaches, and can be illustrated using separate axes as seen in Figure 2.2. In the following sections, each of the elements of the enterprise will be discussed in terms of deliberate, emergent and deliberate-emergent/adaptive.
Figure 2.2: Structure of the Literature Review
Strategy
A strategy is what an organisation wants to achieve at some time in the future. As part of formulating a strategy, the organisation first examines both its internal and external environments. Academics and industry experts have, over the years, identified many ways in which strategy can be formulated and managed, although few of the techniques have endured in terms of uptake and longevity. The different strategic approaches can be categorised by the prescribed method that the organisation can use: a) develop and implement a strategic plan; b) choose an attractive industry, or the best position or strategic group in an industry; c) use a generic strategy; d) create a strategy that leverages resources and capabilities; e) overcome the competition by being relentless in making fast strategic changes; and f) diversify (Yip & Johnson, 2007). However, the success of using a prescribed approach when formulating strategy is contingent on the organisation operating in a relatively stable environment, as, if circumstances change, the routine, static strategic approach becomes obsolete (Yip & Johnson, 2007). More recently, due to the ever increasing rate of change, both industry managers and academics are struggling to develop an approach that allows an organisation to maintain its chosen strategy while still giving it the flexibility necessary to avoid organisational decline in the current changing environment (Eisenhardt & Brown, 1998; Birkinshaw, 2006; Yip & Johnson, 2007). In this section, the concept of strategy will be explored and discussed.
Ocasio and Joseph (2008) define strategy as "a framework, either implicit or explicit, that guides the organisation's choice of action" and suggest that strategy is both "planned and emergent, resulting from strategic design, the evolution of a pattern of decisions, or a combination of the above" (p. 250). However, this is a broad view of strategy. The Oxford Dictionary defines strategy as a "plan designed to achieve a particular long-term aim" (Fowler et al., 2011). However, Mintzberg (1987) suggests that in a business context a strategy is more than just a plan and that it is a "pattern" that is found in a stream of actions, a market position, and the organisation's perspective. He defines strategy as a pattern, that is, "consistency in behaviour over time" (Mintzberg et al., 2005, p. 9), while a position is where the organisation is located in the market in terms of the products it produces. The organisation's perspective is the way that the organisation decides to conduct its business (Mintzberg, 1994). None of the definitions explicitly mention either change or adaptation. "The world is supposed to hold still while a plan is being developed and then stay on the predicted course while the plan is being implemented" (Mintzberg, 1994, p. 110).
Mintzberg and Waters (1985) introduced the idea of a strategy as consisting of two concepts: deliberate strategy and emergent strategy. A deliberate strategy is when the organisation develops a strategic plan that is realised as intended, whereas an emergent strategy is an unintended set of consistent actions that form patterns of behaviour over time. These patterns of behaviour are also realised (refer Figure 2.3). Each strategy type (deliberate and emergent) will be discussed.
Figure 2.3: Deliberate and Emergent Strategy (Mintzberg and Waters, 1985) Deliberate Strategy A deliberate strategy is a strategy that is carefully planned and then controlled by the organisation (Chan & Mauborgne, 2009). Traditional strategies are deliberate strategies. This type of strategy begins with an idea, a plan is then developed, the plan is communicated, and some form of action(s) follows. The purpose of a traditional, or deliberate, strategy, is to create and maintain a long term definable position that results in competitive advantage within the market (Mintzberg & Waters, 1985; Eisenhardt & Brown, 1998). However, as a traditional strategy requires a high level of planning and control, it leads to implementation delays and Literature Review 19 thus, in turbulent environments, underperformance by the organisation (refer Figure 2.4) (Lewis & Loebbaka, 2008; Cravens, et al., 2009). Figure 2.4: Traditional Strategy Planning System (Lewis & Loebbaka, 2008) A seminal and definitive work on corporate strategy, “Competitive Advantage” (Porter, 1980), discussed what is now known as Porter’s Strategy Models. Almost 40 years later these strategy models are still used as a reference by both academics and industry (Baltzan & Phillips, 2008; Porter, 2008). The models are based on the organisation’s examination of the industry in which it operates, using five forces that have been identified by Porter as being critical in the choice of an appropriate strategy. Based on this examination, or industry analysis, the organisation then identifies one of four generic strategies that best reflects the structure of its industry and the company’s position within that structure. It could be argued that Porter’s work on competitive strategy focuses on only deliberate strategies as his definition is that a formal corporate strategy “provides a coherent model for all business units and ensures that all those involved in strategic planning and its implementation are following common goals” (Porter, 1980, p. 12). He further states “competitive strategy is about being different” as “It means deliberately choosing a different set of activities to deliver a unique mix of value” (Porter, 1998, p. 77). By building on the work of Andrews (1971), and Christensen, Andrews and Bower (1978), Porter posits that strategic planning should articulate a calculated, planned approach. Given that Mintzberg’s concept of a deliberate strategy is a “planned and positioned approach” Literature Review 20 it is therefore suggested that Porter’s strategies are deliberate as he argues for the need to take a planned, calculated and competitive market position. Emergent Strategy Emergent strategies are strategies that have developed as part of a “pattern in a stream of actions” (p. 265) and are divorced from any preconceived plan (Mintzberg, 1987; Liedtka, 1998; Hamel & Prahalad, 2005). This type of strategy is when the organisation is responsive to the environment in order to maintain its competitive position. Bonnet and Yip (2009) refer to strategic agility, the ability of the organisation to constantly, “sense, assess and react to market conditions” (p. 52). They suggest that, rather than the notion of sustainable competitive advantage, it is strategic agility that is necessary in today’s dynamic markets. The foregoing has discussed the concepts of deliberate strategy and emergent strategy in their pure form. 
A pure deliberate strategy is when the organisation proposes, and then locks itself into, a course of action toward a future destination that it ultimately reaches. In contrast, the pure form of an emergent strategy lacks intent but, despite that lack of intent, there is "order and consistency over time" (Mintzberg & Waters, 1985, p. 258). "The key concept of a deliberate, intended strategy (plan and position) and emergent, unplanned strategy (as a pattern in a stream of decisions) lie at each end of the continuum of strategy formation" (Graetz, 2002, p. 456). The main difference between deliberate and emergent strategy is the degree of intent about future action.
An organisation is unlikely to follow a purely deliberate strategy as its ability to react to market changes would be hampered by its rigid strategic planning processes. An organisation is also unlikely to follow a purely emergent strategy because, although the relentless, fast track execution of incremental strategies might lead to a large source of value creation for organisations in fast changing industries, it will eventually result in chaos (uncontrolled change). Maintaining competitive advantage by quickly changing strategy might work in the short term, but any competitive advantage would be difficult to sustain because the basis of the advantage is short lived (Yip & Johnson, 2007). However, the organisation does not have to choose one of the pure, or extreme, forms of strategy but can instead use a strategy from somewhere else on the continuum.
Adaptive Strategy
There are not many currently used strategies that are purely deliberate or purely emergent (Mintzberg, 1994). The reason is that a purely deliberate strategy implies that no learning takes place during implementation, while a purely emergent strategy suggests that there are no boundaries or limitations and so the strategy is uncontrolled. "Sometimes strategies must be left as a broad vision, not precisely articulated, to adapt to a changing environment" (Mintzberg, 1994, p. 112). Despite this, the very "concept of strategy is rooted in stability" (Mintzberg, Ahlstrand & Lampel, 2005, p. 32).
Yip and Johnson (2007) argue that an effective strategy is one that is derived from both routine and transformational strategic change. The idea is that the organisation uses its current routine approach, and exploits its source of competitive advantage, but interweaves it with a transformational strategic approach (Andersen & Nielsen, 2009). Quinn (1993) suggests that an effective strategy comes from creating a process that is flexible, adaptive, dynamic and opportunistic. An effective strategy making process cannot, therefore, rely on rigid methodologies and predetermined routines. To create an effective strategy requires an interwoven approach that blends the traditional deliberate strategic approach with an emergent approach involving intuitive, strategic thinking (Graetz, 2002; Joseph & Clark, 2007).
A strategy that results in a competitive advantage today may not result in a competitive advantage tomorrow, as so much is happening at an ever increasing speed. "Competing on the edge" (Eisenhardt & Brown, 1998, p. 788) refers to a strategic approach that requires the organisational ability to change constantly over time in response to a relentlessly changing environment. According to Eisenhardt and Brown (1998) this type of strategic approach has the three key characteristics of being "temporary, complicated, and unpredictable" (p. 787).
Reacting to the environment is important, but anticipating, and even setting, the pace of change is more so, as time pacing is relevant to strategy. Therefore, strategy involves "successfully balancing on the edge of time between the past and future" (Eisenhardt & Brown, 1998, p. 788). However, time is not a major issue in traditional strategic approaches, whereas it is central to a "competing on the edge" (p. 788) emergent strategy. When there is a blend of both a deliberate and an emergent strategy, time encompasses the notion of "stretching out the past" (p. 789) together with "probing into the future" in order to obtain a strategy that is both deliberate and emerging (Brown & Eisenhardt, 1997; Eisenhardt & Brown, 1998).
Business Processes
Given the rapidly changing demands within the business environment and the many challenges that this uncertainty brings, organisations are focusing on BP to be able to meet environmental and organisational requirements (Chang, 2016). BP, as defined by Sharp and McDermott (2009), "are a collection of interrelated activities, initiated in response to a triggering event, which achieves a specific, discrete result for the customer and other stakeholders of the process" (p. 56). Given that a process exists to serve the customer, it can therefore be perceived as a key to sustainable competition. Keen (1997) seems to have a similar viewpoint and asserts that there are four main reasons for viewing BP improvement as a strategic imperative. The first reason is that organisations have the propensity to be far more adaptable than previously thought. Second, the changing nature of change requires organisations to balance and "compete on the edge", which lies between flexibility and stability. Third, BP make a major contribution to the development of enterprise specific, dynamic capabilities (competencies) that can arguably be considered as the source of sustainable competitive advantage. Fourth, Information Technology (IT) has a significant effect through its ability to radically lower coordination and transaction costs. Nevertheless, most methodologies for Business Process Management (BPM) advocate a structured, deliberate approach that requires considerable time, money and effort, often for only mediocre gains (Billington & Davidson, 2008).
Deliberate
BPM is increasingly being recognised as an integral part of enterprise transformation. It is characterised by a transformation of the enterprise's interrelated subsystems and routines (Kettinger, Teng & Guha, 1997; Benner & Tushman, 2003; Benner, 2009). A number of frameworks have been proposed by academics and consultants to manage the transformation of BP, with most frameworks being deliberate in their approach. A framework of note is the Process Lifecycle (Rosemann, 2001; Kalakota & Robinson, 2003; Portougal & Sundaram, 2006; Forster, 2006). The cycle starts from a high-level business orientation and then effectively moves down to the process level. It follows a stepwise approach from the initial identification of the current 'as is' process through to the development of the improved 'to be' process, which is then implemented. The cycle is completed through the monitoring and control of the new and improved process. Adopting a lifecycle process management approach enables the enterprise to engage in cycles of continuous improvement and to sense, respond and adapt to the changing environment, both internal and external (Portougal & Sundaram, 2006).
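To make the deliberate character of this lifecycle orientation concrete, a minimal sketch is given below. It is illustrative only and is not one of the thesis artefacts: the phase names, the apply_phase helper and the 'order-to-cash' example are assumptions introduced purely for the illustration. The point of the sketch is that re-planning can occur only at the boundary of a cycle, which anticipates the limitation discussed next.

```python
from enum import Enum, auto

class Phase(Enum):
    """Illustrative phases of a deliberate process lifecycle."""
    IDENTIFY_AS_IS = auto()       # document the current ('as is') process
    DESIGN_TO_BE = auto()         # develop the improved ('to be') process
    IMPLEMENT = auto()            # roll out the new process
    MONITOR_AND_CONTROL = auto()  # measure and control the running process

def apply_phase(state: dict, phase: Phase) -> dict:
    """Record that a phase has been completed (a stand-in for real BPM work)."""
    state.setdefault("history", []).append(phase.name)
    return state

def run_lifecycle(state: dict, cycles: int = 1) -> dict:
    """Execute the lifecycle as a strictly ordered loop.

    Re-planning happens only at the top of each cycle; nothing that arrives
    mid-cycle can alter the sequence, which is what gives the approach its
    deliberate character.
    """
    for _ in range(cycles):
        for phase in Phase:  # Enum members iterate in definition order
            state = apply_phase(state, phase)
    return state

if __name__ == "__main__":
    print(run_lifecycle({"name": "order-to-cash"}, cycles=2)["history"])
```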
However, the sense and respond adaption cycle, using the process lifecycle management approach, is a static approach because it does not accommodate mid-cycle change. Therefore, the sense, respond and adapt aspect of the life cycle orientation is limited by the very nature of the cycle itself. The deliberate orientation of the process lifecycle management approach is emphasised by Forster’s (2006) adaptation of Lewin’s (1958) framework for change (refer Figure 2.5) Forster (2006) suggests that, “although the model is apparently simple, it is very elegant and a practical guide to make a shift in a complex environment happen” (p. 2). However, the model of unfreeze the current state, followed by a process analysis and improvement (change), and then process implementation (refreeze), is extremely rigid and so does not allow for emergent change to impact (have an effect), due to the strict nature of the cycle. Figure 2.5: Change Process (Lewin, 1958) in an operational BP Improvement Context Literature Review 24 Emergent In today’s dynamic business environment new processes are constantly emerging during the execution of daily business as employees are confronted with an array of new problems and situations that require action. Furthermore, customer demands and expectations are constantly changing, and can vary from case to case, so creating the need for customised solutions. Employees, partners, investors and society are also changing. The term used to describe all these new dynamic, evolving, knowledge intensive BP is emergent BP. A characteristic of these emergent BP is that they cannot be predefined as their models are based on accumulated experience and, as a consequence, evolve from the execution of business events (Marjanovic, 2005; Dale, 2007). Emergent BP are organisational activity patterns and are described by Markus, Majchrzak and Gasser (2002) as processes in which “problem interpretations, deliberations, and actions, unfold unpredictably” (p. 182). Emergent processes can also be defined as self-developing processes (Scheer, 2007). Interestingly, the process of strategy making has been used as an example of an emergent process because it has been described as a collection of thoroughly debated problematic issues that evolve organically as circumstances change (Pava, 1983). Organisational decision making can be used to illustrate the characteristics of an emergent BP. The literature on organisational decision making seems to be divided on conceptualising the decision process, as either a very deliberate and focused sequential process or a disordered “anarchic” process. In the anarchic process model decisions emerge in a tangential fashion from a “black hole” that seemingly has no structure or sequence, albeit happening inconsistently. Hickson (1987) used the form of a vortex to conceptualise the process of organisational decision making as being anarchical. The characteristics of the anarchic model are consistent with the conceptual models of emergent BP as both process models are driven by events (refer Figure 2.6). Adaptive Several authors have used jazz music improvisation as a metaphor for the adaptive management of BP. Scheer (2007) explains that a good jazz group, made up of skilled Literature Review 25 musicians (experts), when playing together are constantly communicating in the same time and place. Each musician, while playing, is also listening and responding to others, with particular emphasis on the soloist. 
Each player responds to the soloist's development of the melody; the soloist, in turn, is impelled by the musicians in the group's rhythm section (typically piano, bass and percussion). During an improvisation the soloist uses the structure of the lead sheet (the music piece) as a scaffold (harmonic progression) and, within that scaffold, creates new melodies on the spot. Applying the jazz metaphor to the management of BP, the jazz group's process of improvisation is analogous to a source of constant emergent processes, while the scaffold that the soloist uses, determined by the lead sheet, is similar to deliberate processes. Therefore, the metaphor of jazz improvisation, as suggested by Scheer (2007), can be used to illustrate the management of adaptive processes.
Figure 2.6: Organisational Decision-Making as Anarchical Being Driven by Event
Majchrzak et al. (2006) also used the jazz metaphor when discussing the management of BP. Yet the authors argue that the jazz metaphor does not apply to emergent BP as suggested. They suggest that the managers, the work experts who lead and guide the knowledge workers engaged in emergent work, should instead focus on making connections and facilitating the building of expertise as the work emerges. This idea can also be applied to the adaptive process.
The balance of flexibility and stability can be used to illustrate important aspects of adaptive process management (Scheer, 2006). Figure 2.7 shows a simple representation of how low levels of connectivity and high levels of control (stability with no improvisation) prevent flexibility and creative behaviours. The rigid organisational structures and rules, and the lack of communication and interaction, mean that the work processes are set and people are isolated. Therefore, the management of processes in this area, seen at the bottom left of the figure, is deliberate, and the enterprise will be unable to react to change in a timely way due to the low connectivity and high intensity of control.
Figure 2.7: Balance of Flexibility and Stability
The model suggests that organisations with traditional, top down, hierarchical management structures have high levels of intensity of control and low connectivity. These organisations are therefore inflexible and can only succeed in a stable environment and, as such, follow a deliberate approach. Examples of this type of approach are the automobile conglomerates General Motors and Chrysler. However, there are organisations at the "bleeding edge" that are reactive and whose connectivity, both between parties and with the external environment, is very high. These organisations are continually sensing the environment and attempting to respond to identified changes. Such highly reactive and volatile organisations, for example high-tech start-up companies, have low levels of intensity of control and so would populate space III as shown on the model, an emergent approach (Scheer, 2007). It is considered that neither of these extreme positions, deliberate or emergent, is good (Quinn, 1985; Eisenhardt & Brown, 1998; Scheer, 2007). The extreme position I is characterised by deliberation and stability. The extreme position III is characterised by chaos, flexibility, possibly innovation, and even anarchy. However, Scheer (2007) suggests that the best place to be is on the edge of chaos, where an organisation can balance flexibility and stability, which is position IV on the model.
Being on the edge of chaos equates to adopting the adaptive approach, as defined in this research. The adaptive approach is one in which an organisation combines a deliberate approach that supports stable, evolutionary growth with a flexible approach that supports more opportunistic growth (Mathiesen et al., 2011). It can therefore be argued that it is not a choice between deliberate or emergent, but that an organisation should adopt a deliberate-emergent approach, which is an adaptive approach. Furthermore, it is not a choice between being stable or flexible; it is about being stable and flexible. This research suggests that an organisation should develop the ability to be deliberate and emergent, stable and flexible, and so be an AE.
Organisational Structures
An organisational structure exists purely for management and control purposes. A structure defines and portrays the work roles and how the organisation's activities are grouped together (Lasher, 2005). Fritz (1996) explains that an organisational structure is more than just a management reporting structure, as it is an "entity (such as an organization) made up of elements or parts (such as people, resources, aspirations, market trends, levels of competence, reward systems, departmental mandates, and so on) that impact each other by the relationship they form" (p. 4). Control of organisations is routinely executed through explicit and implicit management structures as well as behavioural rules and regulations. These management controls are usually facilitated through the flow of information to the various parties involved and through command and control directives wherever appropriate (Schulz, 2001; Yanine et al., 2016). These information and control flows, over a period of time, lead to organisational learning (Levitt & March, 1988; Bontis et al., 2002; Siemens, 2005; Senge, 2006; Sujan & Furniss, 2015). The way an organisation is structured has implications for how strategy is translated throughout the organisation and ultimately for how the organisation performs (March, 1991). Roberts (2004) posits that "certain strategies and organizational designs do fit one another and the environment, and thus produce good performance, and others do not" (p. 32).
Deliberate
Many entities are structured as a functional organisation that supports a deliberate approach. Bryan and Joyce (2007) argue that most organisations have been designed for a past industrial age in which vertical, integrated structures were designed for efficient operations. This type of structure exhibits high levels of hierarchical authority and control and, as already mentioned, is suited to a stable environment (refer Figure 2.8). Labovitz and Rosansky (1997) suggest that the traditional, hierarchical organisational structure is designed so that managerial tasks can be broken into pieces: departments and divisions. This segmentation of the organisation makes it difficult to integrate strategy, BP and systems into a cohesive working whole and, as a consequence, the organisational structure becomes a barrier to change and improved performance.
Emergent
An emerging organisational form, in which employees interact with each other almost entirely through telecommunication systems, is the virtual organisation structure. This virtual organisation structure allows for high levels of connectivity, both amongst the individual members of the organisation and with the environment.
Because of this it is an extremely flexible structure that allows the organisation to be reactive and innovative. Adaptive As mentioned, an adaptive organisational structure supports both the emergent and deliberate management orientation and its form is a matrix structure (Chang, 2016). The matrix structure consists of vertical management and control lines of a product orientated structure which is combined with the horizontal lines of a functional structure. The matrix structure allows an adaptive approach as it supports a project orientation by bringing together talented individuals from different parts and levels of the organization (Jackson, 2016). As such, the Literature Review 29 management reporting lines are flexible as individuals can report to both a line manager (vertical) and a process manager (horizontal) as illustrated in Figure 2.8. Figure 2.8: Traditional and Adaptive Organisational Structures (SAP, 2000a) Information Systems To effectively support an enterprise’s BP, and in turn its business strategy, an integrated IS infrastructure is absolutely essential. Figure 2.9 illustrates some of the various IS that support processes in an enterprise (Scheer, 1998; Weichhart et al., 2016). As can be seen at the bottom of Figure 2.9 there are ‘quantity oriented operative systems’ implemented in functional areas such as purchasing, production, sales and marketing and personnel placement (human resource management). At the higher level there are ‘value orientated accounting systems’ to support accounts payable and accounts receivable, inventory accounting, personnel accounting and fixed asset accounting. At the third level there are “reporting and controlling systems” such as purchasing controlling, production controlling, sales and marketing controlling, personnel controlling and investment controlling. At the fourth level there are ‘analysis and information systems’ for example, purchasing, and production, sales and marketing, and personnel IS. Finally, at the top of the pyramid, there are the long term planning such as strategy and decision support systems. Most of the systems depicted in Figure 2.9 are traditional application Literature Review 30 systems that are used to monitor processes (not activities) right through to the lowest level of the organisation. Figure 2.9: Systems in an organisation (Scheer, 1998) However, the ‘Systems in an Organisation’ framework (Scheer, 1998) is not a technological response to an enterprise’s IS requirements, it only forms a representative landscape of the type of integrated systems that are essential for an organisation at both the horizontal and vertical (functional) levels of the organisation’s structure. The framework does not infer that the systems at each level of the organisation should be used in a deliberate way or in an emergent way, or as a combination of both so that the systems are adaptive. Deliberate Information Systems Over the past twenty years, enterprises have implemented enterprise resource planning (ERP) systems. These are integrated, enterprise wide systems and are a technological response to the integrated systems environment as proposed by Scheer (1998). These ERP systems have now replaced the stand-alone business IS applications in many enterprises (Al-Mashari & AlMudimigh, 2003). These systems are predominantly based on a deliberate approach to the Literature Review 31 management of an organisation’s transaction and BP requirements. 
They follow a rigid structure as, for example, the SAP R/3 ERP system (refer Figure 2.10), which has been implemented by many enterprises, is a typical traditional ERP system. Implementing this type of system has been likened to “pouring liquid concrete over a company’s IS and business practices” (Upton & McAfee, 2000 p367). In one way, traditional ERP systems are flexible because they can be configured to suit different enterprises across different industries. However, once configured and implemented they are quite difficult to change. This inflexibility means that once implemented they cannot be altered with ease in response to change in the enterprise’s strategy and/or BP (Portougal & Sundaram, 2006). Buck-Emden (2000) pictorially demonstrates a traditional SAP R3 systems architecture and the business framework (refer Figure 2.10). There are two layers, the application layer which is the “core” of the system that incorporates the functional solutions, and the presentation layer. These two layers sit atop the database layer (not shown here). For every industry there is a specific ERP industry solution but they are ‘cookie cutter’ (preconfigured) type solutions. Figure 2.10: The SAP R/3 System and the Business Framework (Buck-Emden, 2000) The traditional SAP R/3 is a reasonably fixed architecture. It is not a “plug and play” environment because it is difficult to add other elements and technologies. In a sense traditional SAP systems and the way in which the system is implemented is an example of a Literature Review 32 deliberate approach. Although the framework implies that the system can connect with other systems (connectivity technology) there is a tension between flexibility and stability. The systems and processes and the organisational structures do not allow for emergent phenomena to be easily supported. While traditional SAP architecture is provided as an example, most traditional application software has a deliberate orientation and it is the deliberate processes and deliberate strategies that are mainly supported. Most software platforms and solutions are quite deliberate, for example Oracle ERP, PeopleSoft and Infor ERP (Tenhiala & Helkio, 2015; Giachetti, 2016). None of them support either the emergent or adaptive approach. It is the more recent architectures that provide support for emergent and adaptive approaches in terms of procedural responses (the way in which the system is implemented) and technological responses (Upton & McAfee, 2000). It is the custom built applications for individual organisations and contexts that are more deliberate in their orientation. Emergent Information Systems More frequently systems that support a purely emergent approach are being seen. These systems consist of what is termed in literature and industry as an Event Driven Architecture (EDA). From a conceptual perspective the EDA components and subsystems are totally decoupled (not dependent on other software applications) and mostly asynchronous. They support event processing in real time and thus support an emergent approach (Koetter & Kochanowski, 2015). Figure 2.11 shows the positioning of EDA with respect to the level of coupling and asynchrony as well as the fundamental characteristics of a pure EDA approach. EDA are supported by a number of different vendor platforms such as IBM SOA Reference Architecture (refer Figure 2.12). 
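To make the loose coupling and asynchrony that characterise a pure EDA more concrete, a minimal sketch is given below. It is purely illustrative and is not drawn from any of the vendor platforms discussed in this review; the event names, payloads and handlers are hypothetical. The point is only that publishers and subscribers are bound by event types rather than by direct calls, which is what allows new responses to emergent patterns to be added without redesigning existing components.

```python
# Minimal illustrative sketch of an event-driven style: decoupled handlers
# subscribe to event types and react asynchronously as events arrive.
# Event names, payloads and handlers are hypothetical.
import asyncio
from collections import defaultdict

class EventBus:
    """A toy publish/subscribe bus: publishers and subscribers never
    reference each other directly, only the event type (loose coupling)."""
    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._handlers[event_type].append(handler)

    async def publish(self, event_type, payload):
        # Handlers run concurrently; the publisher does not wait on any
        # particular subscriber (asynchrony).
        await asyncio.gather(*(h(payload) for h in self._handlers[event_type]))

async def flag_large_order(order):
    # A hypothetical 'business rule' reacting to an emergent pattern.
    if order["amount"] > 10_000:
        print(f"Review order {order['id']}: unusually large amount")

async def update_dashboard(order):
    print(f"Dashboard updated with order {order['id']}")

async def main():
    bus = EventBus()
    bus.subscribe("order_received", flag_large_order)
    bus.subscribe("order_received", update_dashboard)
    await bus.publish("order_received", {"id": "A-1", "amount": 12_500})

asyncio.run(main())
```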
The IBM SOA Reference Architecture platform supports a purely emergent approach because it does not come with any predefined services but instead provides complete application functionality to create any services as and when required (Arsanjani et al., 2014). The platform supports the creation of primitive as well as composite services. If something changes, the primitive and/or composite services can either be changed or a new service (primitive or composite) can be created from scratch (a simple illustrative sketch of such composition is included later in this section).

Figure 2.11: Fundamentals of EDA (Van Praag, 2007)

Unlike the SAP and Oracle platforms, the IBM platform does not come with a core repository of predefined services to support processes. However, primitive services and composite services can be bought from a service provider such as SAP, Oracle or other external sources on an as-needed basis. This makes the platform extremely flexible and reactive because services can either be created from scratch or acquired and plugged in. Therefore, the IBM platform can be viewed primarily as one that supports the modelling, design, management, and deployment of services and, as such, is well designed to respond to emergent behaviour.

Figure 2.12: IBM Layers of the SOA Reference Architecture (Arsanjani et al., 2014)

Adaptive Information Systems

When examining the details of SAP's Evolution of BPM (refer Figure 2.13) it can be seen that all the traditional technologies (process modelling, workflow automation, enterprise application integration) involve a deliberate and rigid approach toward the management of BP. But, according to SAP, the pathway to competitive differentiation is through "Business Network Transformation", termed BPM 2.0 and beyond. BPM 2.0 and beyond will be achieved by flexibly enabling the BP lifecycle through leveraging components and subsystems such as Business Rules, Business Activity Monitoring and System to System (S2S) BPM. These are all technologies whose express purpose is to support emergent behaviour.

Figure 2.13: Evolution of BPM (Dale, 2007)

However, SAP recognises that EDA as a stand-alone architecture, supporting the purely emergent approach, is not able to achieve the key business drivers for business transformation on its own. It recognises that an AE also needs modelling, simulation, and analysis, and a balance between human tasks and S2S BPM total automation¹. Moreover, SAP acknowledges that the traditional enterprise architectures (ERP) that only support the deliberate approach are now obsolete. Therefore, SAP proposes an Enterprise Service Orientated Architecture (ESOA) that comprises EDA components/subsystems together with Enterprise Modelling, Simulation and Analysis, and Process Collaboration components/subsystems, which support an adaptive approach. As with SAP, most Enterprise System (ES) vendors are advocating the implementation of systems based on a service oriented architectural (SOA) approach (Rosen et al., 2012).

¹ In terms of adaptive systems, S2S BPM automation is important because the more S2S automation there is, the more quickly potentially meaningful events emerge, and with them adaptation. At the same time, extreme levels of S2S automation may lead to a lack of visibility about when human intervention is required to prevent catastrophic events unfolding. Therefore, there ought to be a balance among H2H (human-to-human), S2H, and H2S automation.
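The distinction between primitive and composite services, and the ability to recompose them when requirements change, can be illustrated with a short, hedged sketch. The service names below are invented for illustration only and do not represent the IBM, SAP or Oracle offerings discussed above.

```python
# Minimal illustrative sketch of primitive vs. composite services.
# Service names are hypothetical; no specific vendor platform is implied.

# Primitive services: small, independently usable units of functionality.
def check_stock(item): return {"item": item, "in_stock": True}
def reserve_item(item): return {"item": item, "reserved": True}
def notify_customer(item): return {"item": item, "notified": True}

def compose(*services):
    """Build a composite service as an ordered chain of primitives."""
    def composite(item):
        return [service(item) for service in services]
    return composite

# A composite 'order fulfilment' service assembled from primitives.
fulfil_order = compose(check_stock, reserve_item, notify_customer)
print(fulfil_order("widget-42"))

# If requirements change, a new composite can be recomposed from the same
# primitives (or newly created ones) without rewriting what already exists.
express_fulfilment = compose(check_stock, notify_customer)
print(express_fulfilment("widget-42"))
```

The design point is simply that the primitives remain untouched while new composites are assembled from them, which is the kind of flexibility that SOA- and ESOA-style platforms rely on.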
SOA is defined as "a technology neutral architectural concept based on generally (re-)useable services" (Herrmann & Golden, 2006). While pure SOA from vendors such as IBM could be deemed to lie in the pure emergent quadrant, ESOA offerings from other vendors, for example Oracle and SAP, could be considered to originate from the deliberate quadrant but to tend toward the adaptive (refer Figure 2.14).

Figure 2.14: Fundamentals of SOA and ESOA (Van Praag, 2007)

There are also hybrid system architectures, such as Oracle's, that support EDA and, as a consequence, the emergent approach. These architectures support Mintzberg's vision of emergent strategies through the constant monitoring and analysis of meaningful events (patterns of behaviour) within the enterprise's event cloud. In addition, many of these hybrid systems support the deliberate approach through explicit modules/components/subsystems to support BPM. These systems possess a host of components/subsystems that explicitly support the EDA paradigm, such as Business Activity Monitoring (BAM), Complex Event Processing (CEP) and Business Rules Management (BRM). Such support for the deliberate (BPM) and the emergent (BAM) is well illustrated by Oracle's architecture in Figure 2.15.

Figure 2.15: Oracle's Adaptive System Architecture (Oracle, 2006)

Generic frameworks proposed by industry heavyweights, such as Sun (2008), reiterate similar principles for architectures to support adaptive systems (refer Figure 2.16). They emphasise the need for architectures that are process driven, user centric, service oriented and loosely coupled. The loosely coupled aspect helps organisations to be emergent and reactive to what is happening (Weichhart et al., 2016). This loose coupling enables service chains to rapidly decompose and recompose to support changing process requirements. User centricity emphasises the need for flexible user interfaces that can be personalised, role based, adaptable and adaptive, evolving as both the users and their needs change (Seders et al., 2016). Highly adaptable process flows enable the rapid composition of new flows as well as the modification of existing flows. Such flows can be in the realm of transactions, decisions, and/or collaborations. These four principles can be considered the cornerstones of adaptive architectures. These principles are further reiterated by Van Praag (2007) as illustrated in Figure 2.17. However, Van Praag (2007) adds another pillar to the AE when he suggests that modern distributed real-time enterprise architectures will be powered by SOA and EDA patterns in combination, as deemed appropriate. He also suggests that real-time enterprise architecture will be a fusion of these different approaches.

Figure 2.16: Principles of Composite Design (Sun, 2008)

The following section reviews several relevant deliberate, emergent and adaptive transformation models, which could potentially facilitate the development of an enterprise into an AE.

Figure 2.17: Real Time Enterprise Architecture (Van Praag, 2007)

Transformation Models

There are several transformation models and transformation process models, which tend to be more prescriptive, that enable an organisation to realise strategies in terms of organisational structures, BP and IS.
In the following sections deliberate, emergent, and adaptive transformation models and processes, primarily proposed by industry, are explored. In the past, academia seems to have shied away from such models and process prescriptions (Korhonen, et al., 2016). Deliberate Transformation Models Transformation models that have been proposed in the literature and by vendors and business consultants, have been, for the most part, deliberate in orientation. Most of the models could be considered rather myopic and unidimensional in character. There is, however, a particular process model that stands out in terms of its cohesiveness and the integration of strategy with BP and IS. It is the strategic enterprise management process as proposed by SAP (refer Figure 2.18). This transformation process, however, is deliberate as the feedback for corrective action occurs only at the end of the strategic management process. Figure 2.18: Strategic Management Process (SAP, 2000b) Literature Review 39 Another transformation process model, is a prescriptive roadmap proposed by ARIS and has similar characteristics but that too is quite deliberate as illustrated in Figure 2.19. Again, the feedback which could give the opportunity to change direction, occurs only at the end of the cycle. There are no intermediate points for reflection and recalibration. Figure 2.19: Business Process Transformation Roadmap (Klueckmann, 2008) Emergent Transformation Models The IBM process model illustrated in Figure 2.20 could be considered an emergent transformation model. This model suggests that requirements are gathered, modelled and simulated and designed. These designs are implemented through the discovery of services, creation of services, and/or purchase of third party services. These services are then composed into support processes. They are tested at the primitive and composite levels and are then deployed through the appropriate integration of people, processes, and IS. The deployed processes and systems are monitored, controlled, and governed on an ongoing basis. As new needs arise, or when these processes and systems go out of sync, it presents an opportunity to model, assemble, deploy, and manage. There is no strategic direction implied in this roadmap. By reacting to whatever circumstances arise means there is a tendency towards the reactive, and so places this particular roadmap is in the emergent quadrant. Literature Review 40 Figure 2.20: Emergent Transformation Model (Keen, et al., 2006) The micro level process modelling cycle from SAP illustrated in Figure 2.21 appears, on the surface, to be a deliberate process model. However, this transformation process has the explicit step of accommodating “new micro processes” after the performance of “strategic process analysis”. This feature thus enables the cycle to be performed many times within a broader macro level emergent and/or adaptive change management cycle. Figure 2.21: Micro Level Transformation Process (Bischoff, 2008) Literature Review 41 Adaptive Transformation Models There is a dearth of transformation models, frameworks, and processes based on the adaptive approach, which address the implementation and support of an adaptive strategy using adaptive organisational structures, adaptive BP, and adaptive IS (Zimmermann et al., 2015). 
In the absence of such frameworks, two transformation models will be discussed that could be considered partially, adaptive frameworks because of the adaptive elements they contain, and their ability to support the development of an AE. The first model is the Model Driven Business Transformation (MDBT) framework by Kumaran et al., (2007). The MDBT framework is a business transformation methodology that “allows business strategies to be realised by choreographing workflow tools and human activities” (p. 514). The second transformation model that is presented is Heinrich & Betts (2003) “Four Steps of an Adaptive Business Networks” (ABN). Although this model was developed by the authors to support the evolution of an Adaptive Business Network, rather than an AE, many of the ideas and mechanisms contained within the model are particularly relevant to an adaptive approach. Each of the two models will be discussed. The MDBT framework (Kumaran et al., 2007) is a partially adaptive framework because at the business operations layer, which concerns the business owners understanding of the business, key performance indicators (KPIs) are used to monitor and control progress toward achieving the strategic goals of the enterprise. If the performance of the enterprise appears to decline then a correction is made to the composition of the defined business services and strategic KPI’s. After further monitoring, if the correction is ineffective the enterprise then tries to optimise the BP and operations. If that too is ineffective, the organisation realigns its operations to fit the strategy it is trying to follow (refer Figure 2.22). This cyclical alignment and realignment of business operations to strategy through the continual monitoring of business performance, is an example of an adaptive approach, albeit a limited example. However, the MDBT framework is limited in two major ways. First, the framework’s monitoring implications seem to focus only on the current internal processes and signals from the enterprise’s external business environment are not monitored or considered. Second, although there is an implication that business operations are aligned to strategy, there Literature Review 42 is no suggestion of changing the strategy in response to continuing poor performance. That is, there is no evidence of re-strategising. In summary, the framework is adaptive for internal requirements but does not account for processes and routines that might emerge in response to external influences. Figure 2.22: Model Driven Business Transformation Framework (Kumaran et al., 2007) The second model is Heinrich & Betts (2003) “Four Steps of an Adaptive Business Networks” process model, which was developed for the evolution of ABN. However, it too can be considered in the context of an AE. It contains a number of concepts and elements that can apply to an adaptive transformation process model for the development of an AE. The process model consists of four steps that can lead to an organisation being adaptive: Visibility, Community, Collaboration, and ultimately, Adaptability. Each step has its own requirements and outcomes as shown in Figure 2.23. Many of the requirements and outcomes depicted in the model process are indicative of adaptive requisites and behaviours (adaptive elements). For example, achieving the first step of visibility supports the emergent approach by the sharing of information both internally and externally. 
Also, Step two Community and Step three Collaboration, are prerequisites of the adaptive approach. In addition, all the mechanisms, from visibility to community, to collaboration, to adaptability, involve and support both deliberate and emergent strategies, organisational structures, BP and IS. Literature Review 43 Figure 2.23: Four Steps of an Adaptive Business Network (Heinrich & Betts, 2003) In summary, the MDBT framework (Kumaran et al., 2007), shown in Figure 2.22, and the “Four Steps to an Adaptive Business Network” (Heinrich & Betts, 2003), shown in Figure 2.23, are examples of models and processes that could be construed as adaptive transformational models. Both the MDBT framework, and the Four Steps process have some adaptive elements and are, therefore, examples of partially/incipient AE transformational models. To conclude this review of the literature, the following section will outline the important practical issues, research problems and requirements in the AE research domain that were identified in the preceding discussion. Practical Issues, Research Problems and Requirements It is generally accepted by academics and industry practitioners that operating as an AE is a proven way of surviving in current business environments. A review of the literature indicates Literature Review 44 that the AE approach is evolving and that there are many issues that need to be addressed. Interestingly, the core issues do not stem from the lack of research but rather that the research centers on seemingly myopic, domain dependent approaches to the subject. There is a plethora of literature about AE from many different perspectives, which is not unexpected given AE’s broad appeal, multi-disciplinary nature, wide scope, and complexity. Despite this abundance of literature, there is a lack of a holistic understanding of AE which has led to disparate research efforts that offer only limited solutions. As a consequence, there is a lack of all embracing, systematic, design and development approaches with which to address the requirements of AE, including the necessary concepts, models and hypotheses to support an enterprise’s evolution. The purpose of the following sections is to outline the practical issues, research problems and ensuing research requirements identified from a review of the literature. The issues, problems, and requirements emanating from the research gaps (refer Figure 1.4) are summarised in Figure 2.24. Practical Issues and Research Problems For this part of the research an issue is defined as being an important set of problems or difficulties (Fowler et al., 2011) that need to be identified, clearly understood, and described. Before solutions can be developed a clear understanding of the research domain issues is an important requirement. As mentioned, AE is a multi-disciplinary, wide ranging, and complex topic so a primary requirement is to first develop a holistic, conceptual understanding to gain meaningful insights. For this reason, multi-disciplinary research needs to be conducted so that a comprehensive, conceptual understanding can be offered, as this will generate the greatest value. The aim of this research is to identify issues and related requirements to help solve practical and research problems associated with AE. 
The variety of issues and problems associated with the design, implementation and development of AE that emerged from the literature review are of a conceptual, strategic, and procedural nature and were used to inform both the identification and development of the research requirements. The issues, problems and requirements (IPR) that were identified from the literature can be grouped into four relevant and interrelated categories: Structure, Behaviour, System Perspective and Transformation (refer Table 2.1).

Structure: Issues and problems associated with a lack of consensus on the key components and structures of an AE, which is prevalent in several disparate research disciplines.
Behaviour: Issues and problems associated with developing an understanding of the critical behaviours, behavioural flows, and approaches to the management and control of these behaviours and flows, which enable an enterprise to be adaptive.
System Perspective: Issues and problems associated with the siloed archetypes that prevail in the design and development of organisations. Systems thinking and cybernetic paradigms are lacking in organisational design.
Transformation: Issues and problems associated with a lack of concepts, models, processes, and roadmaps to transform a non-AE into an AE.

Table 2.1: Issues, Problems and Requirements (IPR) Categories

The discussion in sections 2.7.1.1 to 2.7.2 describes these IPR categories and is the basis for the IPR overview shown in Figure 2.24.

2.7.1.1 Structure Related Issues and Problems

The literature review revealed a paucity of knowledge about the key dimensions and practices of an AE (refer section 2.1). Researchers from multiple disciplines have suggested a disparate set of organisational components as being the important components of an AE (Chang, 2016). Hence, there is little inter-disciplinary consensus on which organisational components are the key components of an AE. Furthermore, it is evident that there is insufficient knowledge, and an ensuing understanding, about approaches to the design and development of strategies, structures, processes, and IS that enable an enterprise to be adaptive (Kumaran et al., 2007). This may account for the numerous reported problems that are attributed to organisational levels being disconnected (Jackson, 2016). Many of these problems prevent strategy driving actions (top-down deliberate) and actions informing the development of ongoing strategy (bottom-up emergent) (refer section 2.2).

2.7.1.2 Behaviour Related Issues and Problems

The industry and academic literature highlighted an array of problems related to the entrenchment of traditional top-down deliberate approaches as well as the more recent bottom-up emergent approaches to the management and control of organisations (Jackson, 2016). Many of these problems are symptomatic of the behaviours that support purely deliberate (top-down) or emergent (bottom-up) structures and management approaches (refer section 2.4). Furthermore, it is argued that these purely deliberate or emergent behaviours hinder the development of an AE. Specifically, information, learning, and control flows throughout the organisation, and between internal and external environments, do not support timely, informed decision making (Siemens, 2005; Sujan & Furniss, 2015). Also, decisions are not integrated, which manifests in ad hoc, ill-informed, and delayed decisions at all levels of the organisation (Jackson, 2016).
2.7.1.3 System Perspective Related Issues and Problems A fundamental issue highlighted by a review of academic and industry literature, was the apparent lack of understanding that an AE is a complex, adaptive, cybernetic system and as such, requires management structures and control approaches that support an organisational system perspective (Senge, 2006). A system perspective is fundamental to an AE and is foundational in its design and development. It is easily deduced from the literature that almost all of the problems and issues associated with the development of an AE emanate from the absence of an overall systems approach (Haeckel, 1995, 2004, 2016). For example, there is a plethora of literature on the subject of non-integrated, non-aligned IS, and its failure to support visibility, collaboration, and communication, which ultimately leads to a myriad of problems for an organisation. Notable, in the context of AE, is the inability of IS to facilitate the alignment of strategy, with the structures and processes necessary to respond quickly to change (Chang, 2016; Weichhart et al., 2016). 2.7.1.4 Transformation Related Issues and Problems The transformational issues indicate that there is a lack of concepts and models to guide and facilitate the transition from a non-AE to an AE. Several relevant models exist and were Literature Review 47 reviewed in section 2.6. Although these transformation models exhibit some adaptive elements, they can be described as incipient models as each has shortcomings in terms of exhibiting a holistic, systems approach to the transformation of an enterprise. Furthermore, they do not demonstrate the transformation concept in a dynamic, cyclical way including the interactions of the model components and processes that would enable the transition to an AE (refer 2.6.3). The issues and problems mentioned above can be synthesised and summarised as three overarching research problems, which permit an aggregated view that clarifies the research requirements. The overarching research problems are a lack of: 1. Identification and coherent understanding of AE components, structures, processes and practices. 2. Identification and coherent understanding of adaptive approaches to the management and control of AE behaviours. 3. Identification and coherent understanding of a systems perspective, including the enabling factors, structures and processes, which support the transformation of an AE. Key Research Requirements The key requirements necessary to address the overarching research problems are insights, concepts, models and hypothesis that are part of a holistic systems approach that will inform and guide the development of an AE. To realise these requirements of insights, concepts, models, and hypothesis the research steps need to be sequential as the outcome of one step will inform the next and ultimately lead to the development of a series of research artefacts and results. The key research requirements are presented and discussed in the following paragraphs. 1. To address the lack of a coherent understanding of AE components, structures, processes and practices. Despite a plethora of AE related literature, much of it is influenced by narrow, silo based, uni-dimensional perspectives, so discussion about key AE components is vague. For example, it is evident that there is some agreement about key AE components but there is not much agreement about what these components are. 
There is some indication that critical components of AE are strategy, organisational structure, processes and IS (Peko, Dong & Sundaram, 2014), but there is insufficient empirical evidence to suggest that these components are actually significant for the development of an AE. In addition, there are few comprehensive models that illustrate an AE in terms of its key components. Traditionally, enterprises have been structured and managed according to horizontal functional areas and vertical organisational levels. This type of structure has advantages; however, the main disadvantage is that the enterprise is limited in its ability to operate as an adaptive organisational system due to the vertical functional gaps and horizontal management and control gaps (Jackson, 2016). For instance, functional objectives can be contradictory: while they optimise the individual parts (functions), they can suboptimise the overall organisational system and so ultimately restrict the organisation's ability to be adaptive (Shapiro, Rangan & Sviokla, 1992). The ability to be adaptive needs to be inherent in the organisational structures, processes and systems. There are only a few models that demonstrate this. The paucity of literature suggests that this particular aspect of an AE is not widely understood, as there seems to be a more myopic functional perspective rather than a comprehensive, adaptive, cybernetic perspective.

2. To address the lack of a coherent understanding of adaptive approaches to the management and control of AE behaviours. There is insufficient information in the literature to enable enterprises to translate adaptive visions into adaptive actions. While there is some discussion about using appropriate organisational structures to support emergent strategies (Mintzberg, 1983), there is very little about models and best practice for the seamless translation of adaptive strategies into adaptive organisational structures and processes with supporting adaptive IS. As well as these shortcomings, there is little information available to guide the practitioner in monitoring, analysing and controlling this translation, and the interweaving of adaptive strategy, organisational structure, processes and IS, on an ongoing basis. There is some information about how processes can be supported by adaptive systems, but there is very little that considers or gives direction about how to manage and interweave the four components of strategy, organisational structure, processes, and IS. There are few models available to guide organisations in translating adaptive strategy into adaptive processes, developing a relevant organisational structure, and supporting these with an adaptive IS.

3. To address the lack of a coherent understanding of a systems perspective, including the enabling factors, structures, and processes that support the transformation of an AE. It is rare to find best practices and comprehensive models to assist stagnant or chaotic enterprises (Scheer, 2007) to transform into an AE. While there is some discipline-centric literature and a few frameworks on business transformation (Gollenia, 2016), there is very little understanding about the approaches, factors, mechanisms and interactions involved in the AE transformation process.
The frameworks that are currently available tend to be either enterprise specific (Haeckel, 2004) or domain specific (Uhl & Gollenia, 2016). Summary of Practical Issues, Research Problems and Requirements Several practical issues and research problems associated with an AE emerged from the review of relevant literature. The issues and problems were synthesised into three overarching research problems and led to three key research requirements, relating to requisite concepts, models and hypotheses of an AE, being developed (refer Figure 2.24). These research requirements are interdependent in that by addressing one, it informs the development of the next, leading eventually to the envisioned research artefacts to enable the successful design and development of an AE. Chapter Summary This chapter provides a review of relevant academic and industry literature about AE, which encompasses several research domains. The development of an AE from a multi-disciplinary perspective is an unexplored area of research. This is particularly evident in the domain of Literature Review 50 information systems, where the focus is on the dimensions of people, processes, date/information and technologies, in order to achieve some purpose (Kroenke & Boyle, 2015). Figure 2.24: Overview of Practical Issues, Overarching Research Problems, and Key Research Requirements The first part of the chapter was dedicated to identifying the fundamental theories and concepts that illuminate the phenomenon of what is meant by an AE. Given the multidisciplinary nature of the research topic, the literature review first investigated important frameworks and models of an enterprise. This revealed the fundamental concepts and elements of an organisation, such as organisational strategy, structures, processes, technologies, and the organisation as systems within a system. The next part of the review explored the notion of what is meant by adaptive in the context of an adaptive organisational system. The literature indicates that Mintzberg’s seminal work on organisational strategy, and in particular the proposition that the dimensions of deliberate and emergent exist (Mintzberg & Waters 1985), Literature Review 51 was considered pivotal to the research about AE. At this juncture, the key organisation elements and the deliberate/emergent concept were combined and the literature about each of the elements was discussed from the deliberate, emergent and adaptive perspectives. This discussion highlighted the concepts and models that exist and how they could help in developing answers to the overarching questions of “What is an AE” and “How does an enterprise adapt”? The discussion also revealed a lacuna that exists in the research area and that current understanding of AE is inconsistent and under developed. This finding enabled the identification of practical issues, research problems and related requirements, and confirmed that there is a lack of concepts, models, and approaches to support the development and transformation of an AE. The issues, problems and requirements that determined the remainder of the research are described in the concluding section of the chapter. To address the practical issues and research problems, and achieve the five research objectives, it was decided that the use of an appropriate multi-methodological approach was required and will be delineated in the following chapter on research methodology. 
The reasons for using a multi-methodological approach, which includes the research steps carried out in the exploratory and explanatory phases of the research, are discussed together with the expected research outcomes.

3 Research Methodology

The purpose of this research is twofold: to gain an insight into the phenomenon of AE and to determine how an AE can transform and adapt to ensure survival in changing environments. The literature review in the preceding chapter indicates that there is a lacuna of cohesive research about AE transformation. The disparity of perspectives highlights a significant knowledge gap in the three prevailing research disciplines of Organisational Management, OM, and IS that presents the opportunity for in-depth research about AE. To address this multi-disciplinary knowledge gap, it is proposed that exploratory research, followed by explanatory research, be conducted and facilitated by an appropriate multi-methodological approach to guide and evaluate the research process and results. This chapter describes the research approach to be used and explains its applicability to the study. The structure of the chapter is outlined in Figure 3.1.

Figure 3.1: Structure of the Research Methodology Chapter

The practical issues and research problems identified from a review of the literature motivate the research questions that will be outlined in this chapter. The research questions help to define the objectives of the research with the purpose of obtaining the required information for creating a set of concrete research artefacts. These artefacts will be developed and validated from information gathered during the literature review, exploratory, and explanatory studies. In order to achieve the objectives, appropriate methodologies need to be adopted. The chosen multi-methodological approach will be described, evaluated and discussed in detail, and it is anticipated that this type of approach will enable the creation, validation, and refinement of the research artefacts.

Research Questions

The practical issues and research problems identified and explored in Chapter 2 can be synthesised into two overarching research questions: What is an Adaptive Enterprise? and How does an enterprise adapt? To be able to answer the research questions, it is necessary to first ascertain what an AE is from the available literature, and what components contribute to an enterprise being adaptive. The four components that have been identified to date are: strategy, organisational structures, BP and IS. Exploratory research followed by explanatory research will be conducted to obtain further insights about AE. These insights, together with the insights gleaned from the literature, will be used to create and validate structural, behavioural, and transformation models of an AE. In addition to identifying AE components, structures, and behaviours, the models will also demonstrate relationships among these particular elements. Hence, the research will attempt to answer the questions "what is an Adaptive Enterprise" and "how does an enterprise adapt". The primary reason to use both exploratory and explanatory research is to first explore the area of AE in order to gain an in-depth understanding of the AE phenomenon and to create and validate research artefacts, followed by the further validation and refinement of those artefacts to explain what was revealed about AE.
Research Methodology 54 The research objectives, which are motivated and informed by the two overarching research questions, will be outlined next. Research Objectives There are two primary objectives of this research. The first objective is to conduct exploratory research in order to define, gain insight into, and conceptualise an AE by identifying inherent elements and characteristics, through a comprehensive review of the literature. This will be followed by an exploratory inquiry so that concepts and models can be created, validated and refined to elucidate the phenomena of an AE. The second objective is to conduct explanatory research so that the fundamental elements that influence the evolution of an AE can be explained. The purpose of the explanatory research is to formulate and test hypotheses so the various factors (elements and their interactions) of an AE can be explained and thus enable the further validation and refinement of empirically tested artefacts. It is intended that this will help to address the practical issues and research problems outlined in the previous chapter. The two primary research objectives that were delineated into six detailed objectives to guide the research are: Objective one: Identify and develop concepts of an AE. Objective two: Identify the key components of an AE. Objective three: Identify, design and develop structural concepts and models of an AE. Objective four: Identify, design and develop behavioural concepts and models of an AE. Objective five: Identify, design and develop transformational concepts and models of an AE. Objective six: Validate the developed concepts and models of an AE. It is anticipated that four, empirically tested, key types of research artefacts will be created from the realisation of the research objectives, these proposed artefacts are: • Key components models of an AE • Structural models of an AE Research Methodology 55 • Behavioural models of an AE • Transformational models of an AE The following sections discuss the research philosophy and the research methodology used to study the AE phenomena and create the research artefacts. It is expected that the evaluation, testing, and validation of the artefacts will further assist in achieving the six research objectives. Research Philosophy “Research is a process for collecting, analysing and interpreting information to answer questions” and this “process must have certain characteristics and fulfil some requirements: it must, as far as possible, be controlled, rigorous, systematic, valid and verifiable, empirical and critical” (Kumar, 2014, p. 10). Nunamaker, Chen and Purdin (1990) suggest that the method of research is “no more important than the research question” as “research methods are a means of finding truth in research domains” and “without an understanding of a research domain, researchers might ask a wrong question or formulate a meaningless hypothesis” (p. 632). Kumar (2014) explains that there are many ways of designing and conducting research and the research process consists of the stages of “What”, “How” and the “Conducting of the study” (p. 36). However, it is essential that the research design reflects the specific research objectives, and to be planned and conducted accordingly. Furthermore, the way the research is conducted needs to be considered in terms of the choice of the research philosophy and a compatible research methodology. 
In the following sub-sections, qualitative and quantitative methods (3.3.1), multi-methodological research (3.3.2), and the interpretive research paradigm (3.3.3) will be discussed and how these research methods, methodology, and paradigm can complement each other and support both exploratory and explanatory research (Mackenzie & Knipe, 2006). Qualitative and Quantitative Research Methods In this section, methods to support our exploratory and explanatory research studies are explored. In particular, the discussion focuses on qualitative research methods to support the exploration and quantitative methods to support the explanation. Both qualitative and Research Methodology 56 quantitative approaches are based on a set of philosophical assumptions. These philosophical assumptions underlie the two central paradigms in social science research, which are positivism and interpretivism (Collis & Hussey, 2013; Myers, 2013). The positivistic paradigm is generally associated with scientific, quantitative research and involves deductive reasoning, while an interpretivist paradigm is based on humanistic, qualitative research and involves inductive reasoning (Tull & Hawkins, 1990). In some situations, both qualitative and quantitative approaches are used (Collis & Hussey, 2013; Kumar, 2014) since the interplay between exploratory qualitative research and explanatory quantitative research has the potential to give greater depth and meaning to the findings as well as demonstrating academic rigour (refer Figure 3.2). Both approaches have their strengths and weaknesses and it is recommended that the approaches should be combined as “… both types of research are important for a good research study” (Kumar, 2014, p. 14). Figure 3.2: The Interplay between Exploratory Qualitative Research and Explanatory Quantitative Research Qualitative research is an open, unstructured and flexible method of inquiry as it aims to explore and describe feelings, experiences and perceptions rather than measuring them (Tull & Hawkins, 1990; Kumar, 2014). A key benefit of qualitative research is being able “…to see and understand the context within which decisions and actions take place.” (Myers, 2013, p. 5). Qualitative research allows findings to then be communicated “…in descriptive and narrative rather than analytical manner…” (Kumar, 2014, p. 14) as they are in quantitative research. Quantitative research involves the collection of data through the use of numbers, or numbers are attributed to gathered data. These numbers are then used to assess the gathered information. In this way the information, such as explaining phenomena, can then be analysed using statistical analysis or other mathematically based methods, and provides the opportunity Research Methodology 57 to dig deeper into the data and look for greater meaning (Aliaga & Gunderson, 2000). Quantitative data is particularly useful when conducting explanatory research that attempts to build, test, and elaborate on theories. It can be used to explain relationships, association or interdependence, and why a particular event occurs or a relationship formed (Kumar, 2014). This is done by using a scientific method to test the evidence, to extend an idea put forth, or used to explain new areas and issues, as well as new topics or concepts. Exploratory qualitative data can also be used in explanatory research as it can be quantified, which then permits the qualitative data to be validated and verified through the use of mathematical methods (Kumar, 2014). 
Kumar (2014) clarifies the terminology used in social science research through the classification of research from three perspectives: application of the findings (pure or applied research), objectives of the study (exploratory or explanatory) and method of inquiry (qualitative, quantitative, or multi-methods). Based on these discussions, the use of qualitative methods to support the exploratory objectives and quantitative methods to support the explanatory objectives are proposed. Given that the research has both exploratory and explanatory objectives, the multi-methodological mode of inquiry is reviewed next. Multi-Methodological Research In recent years the preference for systems practice research to be underpinned by a single methodology has been questioned and that, “Adopting a particular paradigm is like viewing the world through a particular instrument such as a telescope, an X-ray machine or an electron microscope.…Although they may be pointing at the same place, each instrument produces a totally different, and seemingly incompatible, representation. Thus, in adopting only one paradigm one is inevitably gaining only a limited view of the problem situation, for example, attending only to that which may be measured or quantified, or only to individual subjective meanings and understandings”. (Mingers & Brocklesby, 1997, p. 492-3). Given that the research objectives for this study are embedded in inquiry in the disciplines of Management (strategy and organisational structures), OM (business processes) and IS, the choice of a research methodology is challenging because, as Mingers and Brocklesby (1997) argue, “…it is always wise to utilize a variety of paradigms” (p. 493). This is a strong argument Research Methodology 58 that supports the use of multi-methodology research. Some studies have successfully used a multi-methodological approach suited for inter-disciplinary research in similarly combined research areas (Guba & Lincoln, 1994; Hevner, March, Park & Ram, 2004). Such an approach has been used successfully in the areas of General Management, OM, Management Science, and IS (Mingers & Brocklesby, 1997; Meredith et al. 1998; Mingers, 2001; Myers, 1997, 2013). In the IS field, multi-methodology research is frequently used (Venkatesh et al., 2013) and is advocated by Nunamaker et al. (1990, p. 95) who state: “No one methodology should be regarded as the preeminent research paradigm, because no one research methodology is sufficient by itself. In general, where multiple methodologies are applicable they appear to be complementary, providing valuable feedback to one another.” Taking heed of these views and the inter-disciplinary nature of this research it is appropriate that a pluralist research approach is adopted. While qualitative, quantitative and multimethodological research have been discussed in general, the following section will focus on interpretive research in IS and outline the motivation for adopting an interpretivist approach for this research as a whole. Interpretive Research in Information Systems “Interpretive researchers do not predefine dependent and independent variables, but focus instead on the complexity of human sense making as the situation emerges” (Kaplan & Maxwell, 1994 cited in Myers, 1997, 2013, p. 39). This approach assumes that access to reality can only be through social constructs where language, shared meanings, and consciousness can provide a better understanding of the phenomena being examined (Myers, 1997, 2013). 
Interpretive research can help identify the context of systems and also the process by which the systems influence, and are influenced by, their context. Context can be explained as the identification of multi-level systems and structures embedded in both the organisation and its environment. These systems and structures include the organisation as a whole, as well as the social structures that are present in the minds of all involved participants, who are both the internal and external stakeholders. Participants draw on elements of context, such as available resources and perceived authority, which then help determine their actions, with those actions then reinforcing the existing systems or creating new systems (Walsham, 1993).

Leading researchers in IS suggest that interpretive research should be used more often because the more widely used positivistic paradigm is limiting (Myers, 1997, 2013), as the traditional quantitative research paradigms have shortcomings when they are applied to a modern business setting. Meredith et al. (1989) state: "We emphasise objectivity in research and thus stress the predictive power, but with little understanding of the phenomenon" (p. 320). Therefore, interpretive research methodologies are better suited to certain business situations and can address some of the difficulties that researchers encounter, one of which is the ability to truly understand the phenomenon that is being studied (Myers, 1997, 2013). Klein and Myers (1999) mention that our knowledge of IS can be enhanced by the discovery of unstructured 'real world' conditions through the use of "grounded interpretive research methods" (p. 67). Adams and Courtney (2004) concur and claim that IS research "should be both theoretically based and relevant to practice" (p. 1). It has been said that emerging researchers are often reluctant to take the risks associated with innovative topics and research methodologies (Pannirselvam, Ferguson, Ash & Siferd, 1999). However, this observation was made several years ago and since then interpretive research has gained ground (Myers, 2013). Despite this, to date, there is a paucity of literature that indicates innovation using interpretive research combined with an inter-disciplinary, multi-method approach.

As the AE context is complex, integrated, and situation dependent, it is necessary to explore and achieve an in-depth and contextual understanding of AE. The choice of the research methodology, therefore, needs to be comprehensive and inclusive (Sanders et al., 2016), as the research objectives involve inquiry across multiple disciplines. To improve the rigour of this study, and after considering the advice of leading IS authors and others in social science (Nunamaker et al., 1990; Hevner et al., 2004; Myers, 2013; Kumar, 2014; Sanders et al., 2016), a multi-disciplinary, interpretivist approach will be used to explore the phenomenon of AE, and an examination of unstructured, ongoing, real-world business situations will be made (refer Figure 3.3). The study will use a multi-method approach, both qualitative and quantitative, with an interpretivist lens applied to understand the results from a holistic perspective. Specifically, the literature review will inform the qualitative research, which in turn will inform the quantitative research, which is also informed by the literature review, and the results of each will be interpreted in conjunction with one another.
This interplay among the multiple research methods, using an overall interpretivist approach to achieve both the exploratory and explanatory research endeavours is depicted in Figure 3.3. In addition to the adopted research approach, relevant research frameworks will be chosen and adapted to guide this study and develop responses to the issues and research problems identified in Chapter Two (refer section 2.7). In the following section, multi-methodological research frameworks that potentially could support the adopted multi-methodological research approach are reviewed. Figure 3.3: The Multi-methodological Research Roadmap Multi-methodological Research Frameworks There are many research frameworks available for business and management researchers to choose from and the modern research perspective is to consider the interrelationships of business operations. It is about discovering something new through crossing boundaries and investigating more than one discipline. Inter-disciplinary research is often lauded for contributing to scientific breakthroughs and for fostering innovation (Gibbons & Birkinshaw, 1994, Sanders et al., 2016) as well as encouraging creativity (Heinze, et al., 2009). This type of “research is based upon a conceptual model that links or integrates theoretical frameworks Research Methodology 61 from” two or more distinct disciplines and “uses study design and methodology that is not limited to any one field…” (Aboelela et. al., 2007, p. 341). The multi-methodological approach for conducting IS research has become a popular research approach (Nunamaker et al., 1990). The authors explain that research can be viewed as following a pattern of “problem, hypothesis, analysis, argument” (p. 91). A problem is identified, a hypothesis is devised regarding it, and then an attempt is made to substantiate and generalise the hypothesis through analysis. For example, the analysis could be formal proofs, surveys, experiments and developed systems that result in evidence to support the hypothesis. To gather the evidence, methodologies need to be complementary, integrated, multi-dimensional and multi-methodological to generate effective research results. According to the approach advocated by Nunamaker et al., (1990) theory building supports, and is supported by, systems development. It acts as a discoverer as well as the outcome of theory building, experimentation and observation. The approach integrates four complementary research methodologies in a recursive research cycle that is facilitated by the systems development as shown in Figure 3.4. Figure 3.4: A Multi-Methodological Approach to IS Research (Nunamaker et al., 1990) Research Methodology 62 The four research methodologies are: Observation: Includes case studies, field studies and surveys. Observation can be used to formulate hypotheses to be tested. Theory Building: Includes development of concepts, models, and theory and acts as the foundation for the design of experiments or to conduct observations. Validation: Includes field or lab experiments, computer simulations, surveys, and semistructured interviews. Findings of the validation techniques can be used to refine theory and improve the system artefacts being developed. Systems Development: Includes conceptual design, construction of systems architecture, prototype development, and technology transfer. It can be used as proofof–concept to demonstrate viability and also the theory can be refined in response to difficulties/constraints experienced in development of a system. 
Another frequently used and relevant IS research framework, proposed by Hevner et al. (2004), is depicted in Figure 3.5.

Figure 3.5: A Multi-Methodological Approach to IS Research (Hevner et al., 2004)

The framework illustrates the interplay between the 'Develop/Build' and 'Justify/Evaluate' research activities that represent IS 'Research' and provide artefacts for "application in the appropriate environment" (results of Build and Evaluate) and "additions to the knowledge base" (results of Develop and Justify) (Hevner et al., 2004). By understanding the current 'Environment', a researcher is able to identify problems, needs, opportunities and requirements associated with people, organisations and technology, and so reveal their relevance and requirements, which are the 'Business Needs' of the research. The 'Knowledge Base' provides the theoretical foundations and methodologies that underpin the study. The 'Knowledge Base' and IS 'Research' interact, allowing the 'Knowledge Base' to inform the development of artefacts and the evaluation of theories. Hevner et al. (2004) posit that the initial evaluation of the business needs will ensure the research goal of relevance is met and that the 'Business Need' is "assessed and evaluated within the context of organisational strategy, structures, culture and existing business processes" (p. 79). So, to truly understand an AE these components must first be determined for the resulting concepts, models and hypotheses to be applicable.

Adapted Research Framework

Although the Nunamaker et al. (1990) and Hevner et al. (2004) frameworks were originally proposed for IS research, they can also be applied to other research areas to generate new knowledge (Baskerville, Kaul & Storey, 2015). Their relevance to research about AE makes them an appropriate multi-methodological choice. Thus, they have been adopted as generic frameworks and then adapted for this research (refer Figure 3.6). The multi-methodological cyclical approach, as illustrated in Figure 3.6, begins with observation of the phenomena through first studying the literature and other relevant means so that theories can be created, explored and further developed. The observation phase of the cycle is followed by the theory building phase that seeks to create the theory and then refine it. The refined theory is then validated through the use of an appropriate research instrument. The following discussion outlines in detail the application of the generic research approach that is adapted for this study.

Figure 3.6: Multi-Methodological Cyclical Approach of the AE Research (adapted from Nunamaker et al., 1990, and Hevner et al., 2004)

Application of Research Philosophy and Framework

The AE context is complex, integrated, and situation dependent, as previously mentioned, so it is necessary to explore and achieve an in-depth, contextual understanding of AE. Since the investigation of AE involves inquiry across multiple disciplines, the choice of research methodology needs to be comprehensive and inclusive (Sanders et al., 2016). Hence, a decision was made:
• to conduct complementary exploratory followed by explanatory studies to support our research objectives;
• to adopt an overall interpretivist lens;
• to adopt a multi-methodological research framework to guide the studies;
• to use both qualitative and quantitative investigations using appropriate methods of inquiry.
Some of the different methods of inquiry used to collect data are: focus groups, interviews, Delphi method, case study, netnography, experiments, and surveys. All of these methods were considered and, in order to use an exploratory and interpretive methodology of inquiry, the qualitative Delphi method is chosen. Using this method permits clarification and verification of the gathered information as each of the Delphi rounds progress (Collis & Hussey, 2013). A Research Methodology 65 survey is considered as the most appropriate method of inquiry for the explanatory research phase because it allows for a quantitative description about aspects of the study (Fowler, 2014). The sub-sections that follow describe the application of the adapted research framework (Figure 3.6), which includes the three principal methods of inquiry used in this research, namely, the literature review, Delphi study and survey. The Research Cycles As explained in section 3.4, the multi-methodological research cycle will begin with observation of the AE phenomena, which will be followed by theory building based on that observation. Then, the proposed new theory will be validated, which will simultaneously complete the first research cycle while also initiating the second using the validated results. The second research cycle will build upon these validated results and through more observation, theory building, and validation further develop and refine the research outputs. These research outputs, in turn, will become the input for the third and final research cycle that will conclude with theory building. The observation, theory building, and validation phases of the first two research cycles will comprise of two independent exploratory studies that will be carried out in parallel. One study is a comprehensive review of the AE literature while the other is a study using the Delphi method of inquiry. In research cycle one, the output from the literature review and Delphi study will be used to create and validate concepts and models of an AE. To test these research artefacts, generated by the exploratory research, hypotheses will be formulated in research cycle two and together with the concepts and models, form the input for the next research cycle three. Finally, in research cycle three the concepts and models will be further validated and refined. The execution of the three research cycles, including the associated modes of inquiry and the anticipated research outputs from each round, are illustrated in Figure 3.7. Furthermore, the applicability of the modes of inquiry (the literature review, Delphi study, and survey) and the ways in which they will achieve the research objectives are explained in the following sections. Research Methodology 66 Literature Review As mentioned, the exploratory aspect associated with the observation, theory building and validation phases, is the literature review that identified both academic and industry contributions. The literature review focused on four relevant research areas. First, relevant literature about organisational management was discussed with a comprehensive review of the development of organisational strategy. Figure 3.7: The Three Research Cycles of Observation, Theory Building, and Validation Research Methodology 67 This review also included literature about organisational structures and focused on those structures that support implementation of an organisation’s strategy. 
Second, relevant academic and industry OM literature was discussed that emphasised the development of BP from a management discipline perspective. Again, the third part of the literature review consisted of academic and industry literature, however, the subject domain was IS with a focus on adaptive IS rather than general IS. The fourth, and last part of the review, explored interdisciplinary, organisational transformation literature and, in particular, theories and models that enable an organisation to sense, interpret, and respond to the environment in order to evolve. The information obtained from the review of literature about organisational strategy, structures, BP, and IS will be used to create and validate the concepts and models of an AE. These concepts and models will enable the formulation of hypotheses, which together, will be the input for the explanatory research that aims to further validate and refine the research artefacts. Delphi Study The other component of the first two research cycles is a multi-round Delphi study. The study will engage with academics, industry senior managers, and business consultants, all of whom are experts in several of the areas of change management, process reengineering, adaptive enterprises, and adaptive systems. Delphi studies are adapted and used to seek answers to various complex questions in many contexts (Taylor-Powell, 2002; Keeney et al., 2006; Hasson & Keeney, 2011) as the Delphi allows opinions from a variety of sources to be collected and combined in a structured way (Seuring & Muller, 2008). Linstone and Turoff (1975, 2002, 2011) state, “when viewed as a communication process, there are few areas of human endeavour which are not candidates for the application of Delphi” (p. 3). So while the Delphi method has been used in several areas it has been used successfully, for “putting together a structure of a model.” (Linstone & Turoff 2002, p. 4), which is important for this study. Linstone and Turoff (2002) explain that at least one or more of seven application properties should apply if a Delphi is to be used. Acting on this advice, and applying the properties to this research, the following five apply: a) By its very nature, the problem is not an analytical problem and will “benefit Research Methodology 68 from subjective judgement on a collective basis (p. 4).” b) The experts are from diverse backgrounds in terms of experience and expertise. c) Frequent group meetings are not feasible because the study has limited resources in terms of time and cost. d) Group (online) rather than one on one meeting is a more efficient communication process. e) The participants are not unduly influenced by the opinions of others. Linstone & Turoff (1975, 2002, 2011) also suggest that it is important to identify the type of knowledge required by the experts that are selected for the panel. In this study, a total of 30 - 40 experts from four different groups will be chosen based on their knowledge of AE. As mentioned, the expert groups will be: academics, pracademics, senior managers, and management consultants. Judgement sampling will be used to select the participants for the Delphi study. Judgement sampling is when “… the participants are selected by the researcher on the strength of their experience of the phenomenon under study.” (Collis & Hussey, 2013, p. 213). For this research it is judged as being appropriate to obtain a mixed sample from tertiary institutions, consulting practices, and industry. 
Round One of the Delphi will be used for brainstorming to gain insight from the experts about what an AE is and how it adapts. The data collected from this round will allow a descriptive construct of an AE to be formed. From this, concepts will be developed and AE models created, which will be the research artefacts. These research artefacts will then be evaluated in the subsequent rounds of the Delphi. The results of these rounds will enable both further development and validation of the concepts and models, which will clarify what an AE is and how an enterprise adapts. Essentially, general ideas about an AE will be gathered in Round One and clarified, validated and refined in the subsequent Delphi rounds. The research output from the multi-round Delphi study will be a series of AE concepts and AE models, which will be supplemented by the concepts and models gleaned from a review of the relevant literature. This improved understanding of an AE provides the foundation for the next stage of the study, which is explanatory and quantitative. At this stage, hypotheses will be formulated and then tested using quantitative data gathered through the proposed survey. The results will be analysed to validate and further refine the AE concepts, models and hypotheses, as illustrated in Figure 3.7.

Survey

There are two main categories of survey research: descriptive survey research and explanatory, confirmatory (theory testing) research (Pinsonneault & Kraemer, 1993; Malhotra & Grover, 1998; Forza, 2002; Nardi, 2015). Pinsonneault and Kraemer (1993) posit that the aim of explanatory survey research is to "test theory and causal relationships" (p. 80), which is supported by Forza (2002), who states that "Confirmatory (or theory testing or explanatory) survey research takes place when knowledge of a phenomenon has been articulated in a theoretical form using well defined concepts, models and propositions" (p. 155). An important aspect when conducting survey research is that the sample frame is a good representation of the population for the study. Also, the individual respondents who are selected should be representative of the sample frame (Pinsonneault & Kraemer, 1993; Malhotra & Grover, 1998; Forza, 2002). To adhere to these requirements, a survey will be administered online to a sample of individuals who have knowledge about AE and work at different organisational levels for a diverse range of business organisations, as this will be a good representation of the population. There are several compelling reasons to use an online/web-based format for the survey. The literature suggests that these types of surveys are less expensive than, for example, postal surveys and that they deliver faster, more complete and more accurate responses (Dillman, 2007; Tourangeau et al., 2013; Dillman, Smyth & Christian, 2014). In addition, online/web-based surveys offer other unique advantages such as real-time response validation, automated data entry and programmable, context-sensitive skip patterns. Given these advantages (Lowry et al., 2016), the survey will be administered using an online survey platform to a minimum of 500 participants. Once the survey data has been collected it will be downloaded from the website and prepared for comprehensive statistical analysis.
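That analysis will move from exploratory factor analysis through confirmatory factor analysis to structural equation modelling, as described next. Purely as an illustration of what such a pipeline could look like, the following Python sketch uses the factor_analyzer and semopy packages; the file name, item names, constructs and structural paths are hypothetical assumptions for illustration and are not drawn from the actual survey instrument.

```python
# Illustrative sketch only: item names, constructs and paths are hypothetical.
import pandas as pd
from factor_analyzer import FactorAnalyzer
from semopy import Model

# Hypothetical survey export: one row per respondent, Likert-scored items.
responses = pd.read_csv("ae_survey_responses.csv")
items = [c for c in responses.columns
         if c.startswith(("strat_", "org_", "proc_", "info_"))]

# Exploratory factor analysis: how do the items group?
efa = FactorAnalyzer(n_factors=4, rotation="varimax")
efa.fit(responses[items])
print(pd.DataFrame(efa.loadings_, index=items))

# Confirmatory measurement model and an illustrative structural path,
# written in lavaan-style syntax for semopy.
model_desc = """
Strategy     =~ strat_1 + strat_2 + strat_3
Organisation =~ org_1 + org_2 + org_3
Process      =~ proc_1 + proc_2 + proc_3
Information  =~ info_1 + info_2 + info_3
Process ~ Strategy + Organisation
Information ~ Process
"""
sem = Model(model_desc)
sem.fit(responses[items])
print(sem.inspect())  # parameter estimates used to assess the hypothesised paths
```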
A factor analysis will be performed that includes an Exploratory Factor Analysis (EFA), followed by a Confirmatory Factor Analysis (CFA), and lastly a series of Structural Equation Models (SEM) will be used to test the hypotheses, and validate and refine the concepts and models (refer Figure 3.7). Research Methodology 70 Refined Research Artefacts Through the course of the research it is anticipated that a number of research artefacts, including key types of models of an AE, will be created to achieve the stated objectives of the research. In particular, the first type of model will consist of the components of an AE and will be developed mainly from the literature review. The second type of model to be created will illustrate the structures of an AE, while model type three will conceptualise the behaviours of an AE. The fourth, and final type of model, will be a synthesis of several concepts and models and will demonstrate the transformation of an enterprise into an AE. All but the first type of model will be conceived in the early rounds of the multi-round Delphi. The latter rounds of the Delphi will be used to validate these models, as well as the first type of model. The research conducted for the exploratory phase will, to some extent, enable the formulation of hypotheses that will be tested through a quantitative survey, which will form the explanatory phase of the research. This phase will permit the validation and refinement of the concepts, models and hypotheses generated by the overall AE research. Summary This chapter initially outlined the purpose of the research, which then motivated the research questions and development of the research objectives, along with anticipated research artefacts. This was followed by a discussion about the research philosophy and proposed multimethodological approach that will be used to achieve the research objectives. A key component of this multi-methodological approach adopted for the research is the adapted research framework, which is based on two often used and well accepted IS generic frameworks as advocated by Nunamaker et al. (1990) and Hevner et al. (2004). The adapted framework will be used to guide execution of the research, and comprises of three interrelated phases that are: Observation, Theory Building, and Validation. Each phase was explained in terms of how it applies to the exploratory and explanatory research studies. The first and second research cycles are described in regard to the qualitative literature review and Delphi study, while the third cycle is the quantitative survey that will be conducted to test the hypotheses and further validate and refine the research artefacts. Exploratory Delphi 71 4 Exploratory Delphi The purpose of this research was to explore and then explain the concept of an AE in terms of, “what is an Adaptive Enterprise?” and “how does an enterprise adapt?” To achieve this purpose a Delphi study was chosen as the first of two research methods for the inquiry. This chapter will discuss the Delphi method that was used to a) gather the data b) analyse the data c) create research artefacts namely: concepts and models of an AE d) refine research artefacts and e) validate the research artefacts. A multi-round Delphi study was administered to create, refine and validate several research artefacts namely, concepts and models. 
This was done through questions that elicited information from expert participants who took part in each round so that concepts and models could be formulated and then tested in the following rounds. For this study, consensus was achieved after three rounds and the Delphi study was terminated. The Delphi method has been successfully applied to a wide variety of situations. It is frequently used for the development of concepts and models (Okoli & Pawlowski, 2004; Linstone & Turoff, 1975, 2002, 2011), especially when practical limitations make other potentially suitable research methods infeasible (Linstone & Turoff, 1975, 2002, 2011). It is primarily employed when the problem does not lend itself to precise analytical techniques and is of particular benefit in research situations where judgemental information is indispensable (Yousuf, 2007). Therefore, gathering expert judgement through a Delphi study was considered an appropriate option to explore the multi-disciplinary topic of AE. The methodological reasons for adopting the Delphi technique were explained in section 3.5.2, and the design considerations are explained in section 4.1. Together, these features guided the development, administration, and analysis of this exploratory Delphi research, which is reflected in the structure of the Delphi study depicted in Figure 4.1. This sequential research structure outlines both the implementation steps of the study and the layout of this chapter. Section 4.1 provides an overview of the relevant issues that influenced the identification and selection of the expert panel. The interconnected Delphi rounds are then discussed in the ensuing three sections. Round One (section 4.2) was initiated through a series of open-ended questions that aimed to discover the relevant factors of an AE. It also includes a thematic analysis of the question responses and an account of the preliminary development of the proposed concepts and models. Round Two (section 4.3) details the activities to facilitate rating of the identified AE factors and evaluation of the proposed concepts and models. Round Three (section 4.4) describes the final round of the Delphi and focuses on the re-rating of the AE factors as well as the termination of the Delphi study. It is important to note that the final form of the Delphi adhered to the suggestions and advice reviewed in section 4.4.3.1. A minimum of three rounds and a maximum of four was considered appropriate. Subsequently, an overall agreement was achieved after the third round, which triggered the termination of the study. Section 4.3.4 presents the quantitative and qualitative results of the Delphi study that informed and validated the AE models, which in turn informed the hypotheses and AE survey. A comprehensive review of the overall results of the Delphi study, including the identified AE factors together with the importance rating results, is included in the discussion in sections 6.1 - 6.7.

Figure 4.1: Structure of the Exploratory Delphi Chapter

Expert Selection

The Delphi method was used in this research for a rigorous query of a well-balanced panel of experts, who are referred to as expert participants and panellists. Hence, the suitability of the panellists, which is defined by the nature of the research question (Delbecq et al., 1975), is crucial to the success of the Delphi method. Moreover, the characteristics that denote the suitability of panellists should be clarified so their relevance and reliability can be appraised (Schmidt, 1997).
In this section the main features of the Delphi panel are discussed, such as its size, the inclusion criteria and the selection process. The theoretical support for this selection process is discussed in section 4.1.4.

Panel Characteristics

The success of a Delphi study clearly depends on the collective knowledge of the panel of experts. A panel in a Delphi study is a group of individuals, referred to as experts, who have knowledge of the topic being explored. Hasson, Keeney, and McKenna (2000, p. 1010) define this group as "a panel of informed individuals; hence the title of experts being applied". Therefore, selecting the experts and establishing an appropriate Delphi panel is a critical requirement of the method (Baker et al., 2006; Hsu & Sandford, 2007). Two key aspects define an appropriate panel: panel size and the qualifications of the experts (Powell, 2003; Okoli & Pawlowski, 2004; Delbecq et al., 1975). Although there are accepted recommendations in the Delphi literature on the size of the panel, there is less guidance with regard to the qualifications of the panellists (Hsu & Sandford, 2007). It has been suggested that heterogeneous panels, in terms of experts from varied backgrounds and significantly different perspectives on the subject of inquiry, generate higher quality, more acceptable outcomes than homogeneous panels (Delbecq et al., 1975; Rowe, 1994; Baker et al., 2006). Okoli and Pawlowski (2004) also state that "A Delphi study does not depend on a statistical sample that attempts to be representative of any population" (p. 20). Rather, the representativeness of the panel is essentially based on the judgement and discretion of the researcher (Hsu & Sandford, 2007) guided by a sound nomination process (Ludwig, 1997). Given the suggestions above and the multi-disciplinary nature of AE, a larger panel was determined to be appropriate to gain a representative amalgamation of ideas (Baker et al., 2006; Hsu & Sandford, 2007). Also, the desire was to gather a comprehensive collection of insights from a wide range of perspectives regarding factors that may be significant in the elucidation of AE. This requirement was also a major factor in establishing the prerequisites for the nomination and selection of panellists. Many studies in IS have used the Delphi method to develop concepts and models (Okoli & Pawlowski, 2004). In these studies panellists were selected from one or a number of expert groups, such as academics, senior managers, and practitioners, who had a deep understanding of the issue. Okoli and Pawlowski (2004), for instance, selected from four relevant categories of experts to gain a comparison of perspectives from different interested groups. A similar justification can be given for this Delphi, which also seeks a wide range of perspectives. It supports the selection of academic experts from different disciplines and practice experts from diverse industries, such as management consultants and senior managers.

Size of Expert Panel

The number of panellists for a Delphi study will vary according to the research requirements (Delbecq et al., 1975). For instance, Delbecq et al. (1975) recommend using the minimum number of panellists and suggest no more than 15 participants, whereas Ludwig (1997) refers to an average number of below 50, with most studies using between 15 and 20 panellists. Potentially, more expert participants could provide more knowledge, but a greater number may inhibit the consensus process (Ludwig, 1997).
Rowe, Wright and Bolger (1991) suggest that the suitability of the panellists outweighs the benefit of a larger panel size, given that the Delphi method requires well informed participants. It appears that opinion varies on the ideal number of participants for a Delphi study and that the number should depend on the objectives of the study and the homogeneity of the panellists (Baker et al., 2006). Therefore, an ideal number of experts had to be identified for this study, which sought insights from academics as well as practitioners. Given the recommendations above and taking into account the practical constraints of the research, such as its scope, a panel size of approximately 15 academics and 15 practitioners was deemed to be ideal. A balance was sought in terms of variety of perspectives and various industry and/or academic experiences.

Expert Inclusion Criteria

The quality of the participants directly impacts the validity of the Delphi method results, hence a representative sample is crucial (Hill & Fowles, 1975; Rowe et al., 1991). Selection methods vary; however, factors such as publication record, organisational position and knowledge gained from experience have been proposed as worth considering (Hsu & Sandford, 2007). Also, the selected experts need to be knowledgeable in the specific area under investigation, which will mitigate potential loss of interest and tendencies to be unduly influenced by group opinion (Hill & Fowles, 1975; Rowe et al., 1991; Skulmoski et al., 2007). The identification of the inclusion criteria against which the potential participants were matched in this study was based on the method advocated by Hsu and Sandford (2007) and Williams and Webb (1994). The subject under investigation, namely AE, was also taken into account, and the following inclusion criteria were used:

1. Have a work history of academic and/or industry practice and have experience in organisations through:
a. Employment as a practitioner for a minimum of 10 years, having progressed to a senior management level (entrepreneur).
b. Employment as an academic for a minimum of 10 years, publishing in research areas associated with the management of organisations.
c. Related academic and/or management experience in areas such as organisational development, change management, and IS.
d. Work in a variety of industries and industry settings.
e. A well-developed, broad systems view.
2. Have established a continuing professional interest in the topic of AE (Hasson et al., 2000).
3. Have the capacity, willingness and sufficient time to participate (Skulmoski et al., 2007).

The development of the criteria was guided by a desire to engage a variety of experts who would inform each other through the provision of a broad range of information and contrasting insights facilitated by the Delphi method (Rowe & Wright, 2001). They also aligned with the desirable selection factors suggested by Hsu and Sandford (2007). Specifically, inclusion criteria 1 and 2 ensured that potential experts had the requisite knowledge, while criterion 3 was included to mitigate the risk of respondent drop-off. In addition, the responsibilities of being included in the study were clearly articulated to potential participants, together with the time commitment and the value of the research goals. Further, past experience with implementing the Delphi method indicated that a well-designed, succinct Delphi minimised respondent fatigue and thus significantly reduced the drop-off rate.
Invitation and Selection Initially, potential expert participants were suggested by the researcher’s academic colleagues and other associates who worked in industry including those personally known to the researcher. These were contacted by either telephone or email to gauge their interest in the topic and establish if they would be willing to participate in the research. They were also asked to provide the name(s) and contact details of others who are expert in their knowledge of AE as a means of augmenting the initial sample. The building up of the number of participants is known as snowball sampling where the sample is selected using networks. “This method of selecting a sample is useful for …. diffusion of knowledge within a group” (Kumar, 1996, p162). To start with a few participants are selected and then asked to identify other people who are then selected to “become a part of the sample” (Kumar, 1996, p162). This process was continued until 40 experts had been identified. The participants were chosen keeping in mind their motivation to complete all rounds of the Delphi because of their various interests related to research effort (Hsu & Sandford, 2007). Online research was then carried out to further verify and establish the relevance of each expert to the study. This relevance was based on a number of critical factors (Hasson et al., 2000; Hsu & Sandford, 2007; Skulmoski et al., 2007) and their expertise and their knowledge about AE. As well as the critical factors, there were practical issues to be considered in choosing who was to be included in the sample. Some of these issues were how to identify those who had knowledge about AE, how they could be contacted, and how they could be encouraged to participate in the Delphi study. These issues were dealt with through online Exploratory Delphi 77 research, recommendations, and the researcher’s networks. This resulted in 40 academic and industry experts being identified as relevant to the study and invited to take part in the Delphi. An email invitation was sent to them explaining why they had been selected, the purpose and objectives of the study, the time commitment and their anticipated contributions together with other research ethics considerations. It also included a Participant Information Sheet as an attachment (refer Appendix A2) and provided the online link to the questionnaire for their information. Many of the targeted experts accepted the invitations however a reminder email was sent to those who had not responded. Eventually, a group of 36 participants was confirmed that balanced academic with industry experts who fitted the inclusion criteria. For comparative reasons, these experts were categorised and assigned to one of two primary panels: 20 academic panellists, and 16 industry panellists. Furthermore, these two primary panels could be subdivided into four sub-panels of experts i.e. 12 academics, 8 pracademics, 10 senior managers, and 6 management consultants. Round One: Identification of AE Factors and Models This section describes the design, administration, and analysis procedures employed for the first of the three round Delphi study. Round One was equivalent to an exploratory brain storming session in which participants were given the opportunity and encouraged to put forward their thoughts and ideas. However, some structure was provided in the form of open-ended questions that followed broad themes based on theoretical underpinnings that had emerged from the literature review. 
The process for Round One comprised sequential activities, starting with the initial design of the questionnaire, which was then tested, refined, and finalised through pilot testing before being administered to the experts. The process ended with analyses of the responses using thematic analysis and cognitive mapping, which enabled the initial identification of the elements, characteristics and other significant aspects of an AE (Chapter 2, Literature Review). This output from Round One became the basis of the input for the second round of the Delphi study, which is described in section 4.3. The design, administration, and analysis procedures employed for Round One of the Delphi study are discussed in the following subsections.

Design and Development of Questionnaire for Round One

The process began with the initial design of a questionnaire. The first round typically uses an open-ended questionnaire, the purpose of which is to discover a wide range of AE aspects. It serves as the foundation for soliciting information about the content area from the expert participants (Hasson & Keeney, 2000; Hsu & Sandford, 2007). The intention of using an open question design was to give the participants freedom to provide whatever information they considered relevant. Reja, Manfreda, Hlebec and Vehovar (2003) state that there are two reasons for using open-ended questions: "One is to discover the responses that individuals give spontaneously; the other is to avoid the bias that may result from suggesting responses..." (p. 159). For this research a broad range of responses was desired and subsequently elicited, which validated the authors' claims. However, it was also recognised that open-ended questions can be troublesome for participants and should be used sparingly. Rea and Parker (2014) recommend that open-ended questions be used only when the constraints of closed-ended questions outweigh their benefits. Consequently, careful thought was given to the construction of the Round One questions to ensure they were simply expressed yet definitive (Hasson & Keeney, 2000; Rowe & Wright, 2001) so as to enhance their appeal to participants. In addition, a generally understood analogy was used in order to define and clarify the key terms and ideas being referred to in the questions. Response time was also a critical consideration because a lengthy questionnaire could discourage further participation in the study. The questionnaire was then pilot tested and further refined.

4.2.1.1 Pilot Test Questionnaire

Pilot testing is considered to be a critical factor in good Delphi research design (Gordon, 2003; Novakowski & Wellar, 2008). Keeney, Hasson and McKenna (2006) note that, "As with all good surveys, pilot testing with a small group of individuals should precede implementation" (p. 1010). It gives the researcher an opportunity to test the questionnaire, including the question design, and to gain feedback on improvements from the test participants, which improves rigour through content validity. Hasson and Keeney (2011) argue that to ensure content validity an instrument has to be assessed so that it "provides adequate coverage of a topic under investigation" (p. 1695). For this research, a two-phase pilot test was conducted for the first round questionnaire. It was completed by seven chosen experts, four of whom were academics and three of whom were practitioners.
The main reasons for choosing these participants were their availability for in person, one-onone feedback and their experience with AE and/or questionnaire design. As mentioned, the pilot test consisted of a two phase, iterative process. The first phase was completed by four of the expert participants and the feedback was considered for inclusion in the questionnaire. The then modified questionnaire was used for the second phase of the pilot that also involved one-to-one feedback. The expert participants were asked to give feedback on the questionnaire and how it could be improved while they were completing it and soon after. Their comments were in the form of a verbal narrative that was captured by the researcher. They were free to give feedback on any aspect of the questionnaire and in particular they were asked to comment on the following: • Purpose: do you think the questionnaire will achieve the desired outcomes? • Form: are the questions well-articulated? • Understanding: are the instructions and explanations clear? • Ease of use: are there any problems using the questionnaire? • Are there any important details missing? • Is there anything that should be removed? • Is the time-to-complete estimate realistic? In the second phase of the pilot test the questionnaire was completed by the three remaining experts. Once again, the feedback was considered and the questionnaire modified. The pilot testing process concluded with the following improvements being made to the Delphi Round One questionnaire. Exploratory Delphi 80 • Modification to the objectives of the questionnaire to indicate the content and order of the questions. • Addition of a descriptive analogy to clarify the meaning of the key words in certain questions. • Rephrasing of questions to eliminate any ambiguity and ensure they yielded meaningful responses. • Elimination of grammatical and other errors. The revised questionnaire was finalised and then used for Round One of the Delphi study. 4.2.1.2 Final Design for Round One The output from the pilot test process described above was the finalised Round One Delphi questionnaire (refer Appendix A). The following discussion will highlight and describe the important design aspects of this questionnaire, which consisted of three main sections: 1) Introduction, 2) Adaptive Enterprise and 3) General information. The first section of the questionnaire, titled ‘Introduction’, outlined the objectives of the study (refer below) and informed the experts about the length of time it would take to complete. It also mentioned that in order to participate, consent had to be given and that the results of the Delphi would be made available upon request. An important feature of this section was the Round One objectives that were designed to a) highlight to focus of the questionnaire and desired outcomes and b) reflect the question flow: 1. To understand how an enterprise adapts. 2. Identification of the key characteristics of an adaptive enterprise. 3. Identification of the core elements that enable an enterprise to be adaptive. Section two of the questionnaire, titled ‘Adaptive Enterprise’ (refer below), comprised of the four open ended questions that were significant to the AE concept and an invitation to contribute observations: 1. In your opinion how does an enterprise adapt to its environment? 2. What are the key CHARACTERISTICS of an adaptive enterprise? 3. What are the core ELEMENTS that enable an enterprise to be adaptive? Exploratory Delphi 81 4. 
How do these core ELEMENTS work together to enable an enterprise to be adaptive? 5. Please feel free to comment on any aspect of adaptive enterprises that you would like to mention.

One of the main design features of this section was the inclusion of a descriptive analogy (refer below) after the first question, in order to clarify and define the key words in subsequent questions and thereby minimise potential ambiguity. "We would like you to identify and comment on the CHARACTERISTICS and ELEMENTS that enable an enterprise to adapt. To clarify the distinction between characteristics and elements, let's consider warmth in a house. Warmth is a CHARACTERISTIC of the house but that warmth is enabled by various ELEMENTS such as insulation, heating systems, good design and so on." Primarily, the questions were designed to generate the initial base of knowledge by gently probing the expert participants for their views through a set of open-ended, broad-based questions. This style of questioning allowed the experts a certain degree of freedom to offer answers that reflected their personal opinions, diverse experiences and unique insights. Moreover, it was anticipated that the opportunity to offer narrative, personalised answers would further stimulate the experts' interest in the study. The first question of section two was essentially a brainstorming exercise. Its aim was to discover as much as possible by eliciting the experts' opinions about how enterprises adapt. It was the least defined of the questions and gave the experts an opportunity to offer uninhibited answers. The subsequent questions 2-4 were more directed than the first and inquired about the potentially key characteristics and core elements, including their interrelationships, which enable enterprises to be adaptive. Finally, question 5 gave the experts an opportunity to freely contribute their knowledge of AE. It was aimed at uncovering insights that could potentially be of value to this and future research. Section three of the questionnaire, titled 'General Information', was designed to obtain useful demographic data, including the participant's email address, which was used as a unique identifier and for distributing the subsequent rounds of the Delphi. The final questions asked participants to identify the size of their organisation and the industry in which it operates.

Administration of Round One

Round One of the Delphi study was administered to 36 academic and industry experts. The group was mixed: the experts came from different industries and tertiary institutions and worked at the upper levels of their organisations. They were selected based on the judgement of the researcher. This is known as judgement or purposive sampling and is a non-random, non-probability sampling design. "The researcher only goes to those people who in her/his opinion are likely to have the required information and be willing to share it. This type of sampling is extremely useful when you want to ..... develop something about which only a little is known" (Kumar, 1996, p. 162). The 36 experts who had confirmed their interest in the study and were subsequently selected were administered the questionnaire in an online format, because this is a fast and relatively inexpensive method for gathering data (Bruggen & Willems, 2009). To start, each expert was sent a confirmatory e-mail invitation to participate in the Delphi.
The invitation included useful information about the study, its purpose, and a uniquely assigned URL to access the questionnaire. The Participant Information Sheet, which outlined the ethical considerations, was attached for their information. Participants were given six weeks to complete the questionnaire. A follow-up reminder email was sent three weeks after the initial invitation to encourage the maximum number of experts to respond. Two of the experts completed the questionnaire offline, one preferring to do it in the presence of the researcher while the other preferred to answer the questionnaire over the telephone.

Distribution of Responses

Schmidt (1997) asserts that response rates are important and should be reported for each round. Reporting them allows relevant statistics to be confirmed, and they are also a measure of the level of importance participants attribute to the Delphi study. In this section the relevant response rate statistics are presented, followed by relevant background information about the experts.

4.2.3.1 Response Rates

As mentioned, the first round of the Delphi was open for six weeks, which allowed enough time for a significant number of the sample pool to complete the questionnaire. Thirty of the 36 invited experts participated, which represents an 83.3% response rate. Of those participants, 16 were from the academic panel (9 academics and 7 pracademics) and 14 were from the industry panel (7 senior managers and 7 management consultants). The 6 experts who did not complete the questionnaire were equally divided between the two panels: 3 academics and 3 senior managers. This resulted in 53.3% of the total number of questionnaires being completed by academic experts and 46.7% completed by industry experts. Therefore, the academic panel was marginally overrepresented in the first round. A summary of the response rates is shown in Table 4.1.

Panel      Sample Pool   Response Count   Response Rate   % of Total Responses
Academic   19            16               84.2%           53.3%
Industry   17            14               82.4%           46.7%
Total      36            30               83.3%           100%

Table 4.1: Delphi Round One Response Rates

4.2.3.2 Background Information of Experts

The expert participants who completed the questionnaire had a wide range of relevant experience and work backgrounds, fuelling the discovery of unique insights along with generalised findings (Hsu & Sandford, 2007; Rowe & Wright, 2001). When answering the question, "Which industry/organisation type(s) best describes your organisation?", the highest percentage of experts answered 'Education, Tertiary Institutions' as their primary organisation type. That is, 16 out of the total number of 30 experts had an academic background. The next two primary organisation types that were reasonably well represented, in terms of the percentage of experts who had backgrounds in these industries, were 'Business Services/Consultant' with 16.7% and 'Information and Communication Technology' with 13.3%, which indicated that a significant number of these experts were providing services associated with enterprise transformation. The remaining 23.3% of expert participants were evenly spread across various organisation types, such as those involved in manufacturing and storage, aerospace, healthcare, and retail. Although agriculture and forestry were not represented, aquaculture was. Notable industry omissions were construction and financial services.
In addition, at least half of the experts indicated a background in more than one industry, which resulted in the organisation types 'Export/Import' and 'Utilities/Energy/Extraction' also being represented. A summary of the experts' work backgrounds is shown in Table 4.2. The 'Primary Organisation Type' denotes the expert's current work context and the 'Secondary Organisation Type' their previous work experience. Please note that when we assigned a participant as belonging to either the Academic panel or the Industry panel we considered their overall background rather than their current employment. This accounts for Table 4.1 specifying 16 academics whereas in Table 4.2, only 14 of the expert participants indicated they are currently working in tertiary institutions.

Primary Organisation Type              Response Count   % of Total Responses   Secondary Organisation Type
Airlines & Aerospace                   1                3.3%
Aquaculture                            1                3.3%
Business Services/Consultant           5                16.7%                  1
Education, Tertiary Institutions       14               46.7%                  4
Export, Import                         -                -                      1
Healthcare                             1                3.3%                   1
Information Communication Technology   4                13.3%                  2
Manufacturing                          2                6.7%                   1
Retail Trade                           1                3.3%                   1
Transport, Storage                     1                3.3%                   3
Utilities, Energy, Extraction          -                -                      1
Overall                                30               100%

Table 4.2: Delphi Round One Organisation Types

In addition to reporting on the work backgrounds of the expert participants (Schmidt, 1997), it is equally important for this research on AE to report the distribution of organisation size. All of the size categories were represented except for the 50-59 headcount. Overall, the sample pool was equally distributed between organisations with headcounts below and above 1000. However, on the basis of individual categories, the 'above 1000' category was overrepresented, accounting for 50% of the total sample pool. This is not surprising given that approximately half of the experts were from one organisation type, i.e. Education, Tertiary Institutions. A summary of the organisational headcounts is shown in Table 4.3.

Organisation Headcount   Response Count   % of Total Responses
1-5                      5                16.7%
6-9                      3                10.0%
20-49                    2                6.7%
50-59                    0                0.0%
100-499                  4                13.3%
500-999                  1                3.3%
Above 1000               15               50.0%
Overall                  30               100%

Table 4.3: Delphi Round One Organisational Headcount

In summary, considering that the selection strategy was to choose an equal number of academic and industry AE experts, a wide variety of organisational backgrounds was achieved. Moreover, many of the experts indicated that they had a work background in more than one industry, which increased the overall organisation type diversity. In addition, the selection process was well researched and supported by theory, and all the experts were considered knowledgeable about AE.

Analysis of Responses

This section accounts for the analysis process aimed at the discovery of insights from the qualitative data collected in Round One. The analysis process requires careful management, involving the establishment of a structured method to distil the data without introducing bias (Hasson et al., 2000). First, the rationale and activities of the analysis process are outlined. Then the actual analysis and the results, which became the second round input, are discussed.

4.2.4.1 Outline of the Analysis Process

The first round questionnaire consisted of five questions, four of which were open-ended and focused on specific structural and functional aspects of AE, with the fifth being a general comments question. Mostly the answers were directly related to each question, but a reasonable amount of overlap was evident within the answers provided by an individual expert.
This was understandable given the broad nature of the inquiry. A variety of observations were gathered through the general comments question and ranged from detailed anecdotes to conceptual understandings. Therefore, the analysis process had to be robust and provide an appropriate scaffold to support the examination of this type of unstructured, qualitative data. This analysis required an inductive exploration of the data through summarisation, identification, classification, interpretation, and model development. Graphic data displays that are meaningful, such as Matrices and Network diagrams, support this inductive analysis. They enable the “…at-a-glance format for reflection, verification, conclusion drawing and other analytic acts.” (Miles, Huberman & Saldana, 2014 p. 91). Matrices are particularly useful in the early stages of the analysis because they facilitate the summation and recording of the raw data including results. While Networks enable the construction of a connected display of those results, which in turn support interpretation and development of relationships and models (Collis & Hussey, 2013). A systematic approach was also required to guide the overall analysis process hence Miles et al. (2014) analytical method for qualitative data was adapted for this study as presented in Figure 4.2. Step 1: The experts’ responses were downloaded from the online questionnaire and the raw data was formatted and organised in preparation for the next step. This preparation involved coding and referencing the data to its source to ensure traceability. Responses were mapped to the respective expert and the date the question was answered recorded. Exploratory Delphi 87 Step 2: The data was distilled to form nuggets that emerged from the information collected about an AE. These nuggets were coded according to the question and individual expert to maintain traceability. Figure 4.2: The Analytical Process of Round One Delphi adapted from Miles et al. (2014) Step 3: Comparable nuggets were identified as meaningful subjects and patterns and in turn grouped into categories for each question before being substantiated with reference to relevant theory and frameworks in the literature. This bottom up approach allowed these categories to emerge from the data hence supporting the exploratory aims of Round One. Step 4: There was overlap amongst the answers to the questions at the individual expert level and also overlap amid multiple experts. This overlap was accounted for by a crosscomparison of nuggets and categories that emerged from each question and informed the development of the themes. The comparison involved nuggets and categories being copied Exploratory Delphi 88 across questions and experts’ responses along with the assignment of hybrid codes to each to preserve traceability. Step 5: A bottom-up synthesis of the lower level themes gave rise to the creation of the overarching AE related themes. The formation of these overarching themes was supported by the complete data set and analysis results including the cross-comparison outcomes. Step 6: Mindmap tree network structures connecting the nuggets to categories to themes, and ultimately overarching themes, were established using logical relationship connections. These branch and tree connections developed organically and guided by a multi-disciplinary understanding of AE (Chapter 2, Literature Review). 
Step 7: Concepts and models were created based on the information gathered from the expert participants and the subsequent analysis, including the established structures and connections. These final analysis outputs directly link to the research objectives outlined in Chapter 3, Research Methodology. They also formed the basis of the following rounds of the Delphi study, which facilitated the refinement and validation of the Round One research artefacts.

4.2.4.2 Analysis of Delphi Responses Round One

Guided by the overall analysis process described in section 4.2.4.1, the answers from the 30 completed questionnaires, which contained 8413 words (543 average per respondent), were the input data for a systematic, bottom-up and top-down synthesis method that facilitated the analysis. This analysis method was informed throughout by relevant theoretical foundations from the literature, such as those reviewed in Chapter 2. It is described in the following discussion and illustrated in Figure 4.3.

In Step 1 of the analysis, 'Extract, Code & Reference Responses', each respondent was assigned a unique identifier number. The date the questionnaire was completed was recorded, along with a link that mapped each question to the specific response. This allowed for cross-comparison between questions. Table 4.4 is an example of the output from this step, while the complete coded and referenced data can be seen in Appendix A6.

Figure 4.3: The Creation of Initial AE Concepts and Models Through Analysis and Synthesis of Delphi Results and Literature

Response Text Data
It changes its organisational design in the following fields to align with changing environments:
* Business model
* Supply chain networks
* Process design
* New competencies and capabilities
* New cultures
* New product/service design
* New channels to market
* New innovations supporting customer experience

Table 4.4: Step 1 Prepared Text Data From Expert E26

In Step 2, 'Identify & Develop Nuggets', the coded data was transformed into pieces of information, also known as nuggets, to facilitate the identification of categories and themes. A consequence of this process was the elimination of redundant or duplicate data/information, because each nugget defined only one simple point. The complete list of nuggets can be seen in Appendix A7. To clarify the analysis performed in Step 2, Table 4.5 shows the nuggets translated from the coded data in Table 4.4, some of which are also shown in Figure 4.4.

Nugget   Description
1        Change organisational design to align with changing environments (E26)
2        Changes business model to align with changing environments (E26)
3        Changes Supply Chain networks to align with changing environments (E26)
4        Changes process design to align with changing environments (E26)
5        New competencies and capabilities to align with changing environments (E26)
6        New cultures to align with changing environments (E26)
7        New product/service design to align with changing environments (E26)
8        New channels to market to align with changing environments (E26)
9        New innovations supporting customer experience to align with changing environments (E26)

Table 4.5: Step 2 Nuggets From Expert E26 Based on Table 4.4

In Step 3, 'Identify & Develop Categories', the nuggets were grouped into categories for each question using an emergent clustering approach. These nuggets and categories were then compared across questions, which led to the identification of those in common or duplicated (across questions).
These instances of commonality or duplication were also recorded against the original nugget reference. This cross-comparison exercise strengthened the outcomes of the analysis by allowing a better supported list of themes to emerge for each question. The nuggets obtained from all the experts were grouped into categories for each question.

In Step 4, 'Establish and Synthesise Themes', the categories belonging to each question were combined in order to establish the emergent themes. The analysis activity involved a reassessment and cross-comparison of the categories, with similar categories informing their respective themes and identical categories being merged beforehand. The comparison resulted in some categories, together with their clustered nuggets, being reflected in other themes, and also produced a comprehensive list of categories for each question. Figure 4.4 shows the output from analysis Steps 3 and 4, including the associated codes (expert source) and references (cross-references). It depicts the theme 'Business Model', which has 5 assigned categories, i.e. 'Changes to Business Model', 'Supply Chain Networks', 'Product/Service', 'Market Channels' and 'Monitor and Control', which in turn contain 7 nuggets, e.g. "Align with changing environments" and "Changes to Supply Chain networks to align with changing environments." Note that in this instance almost all of the categories and nuggets are from one expert (E26), except for the 'Monitor and Control' category, and its associated nugget, which is from expert 15.

Figure 4.4: Excerpt of Analysis Step 3 and Step 4 for Round One Question 2 Responses

In Step 5, 'Establish Overarching Themes', the bottom-up synthesis of the lower level themes gave rise to the development of the overarching AE-related themes. This synthesis consolidated the preceding analysis steps and combined nuggets and categories from all the questions and experts. It provided a thematic overview comprising the AE themes that had emerged from the experts' contributions in Round One of the Delphi, and was the basis for the development of an integrated mindmap in the next step.

Step 6 focused on the identification and establishment of significant relationships that amalgamated the overarching AE themes, along with the associated lower level themes, categories, and nuggets, into a mindmap network structure. Figure 4.5 is a small excerpt that shows the overarching theme 'AE Strategy', which is composed of five lower level themes, which in turn comprise categorised, coded nuggets. The mindmap facilitated the development of salient propositions and structures, and the mindmap outputs, in turn, informed and refined the understanding of the literature review and Delphi outputs. The complete mindmap, including the propositions and structures, can be seen in Appendix A8.

Step 7 was dedicated to the creation of concepts and models through the synthesis of the mindmap propositions and structures, refined established ideas (theoretical foundations), and Round One Delphi outputs, as illustrated in Figure 4.3. Step 7 resulted in AE concepts and associated static and dynamic models, which were subsequently incorporated in Round Two of the Delphi (refer section 4.3 and Appendix B3). The models illustrated the Key Components of an AE, identified as Strategy, Organisation, Process, and Information (SOPI), together with the interrelated structures and behaviours that enabled an AE.
The models depicted constructs and interactions that were well supported by the detailed findings of Round One and established theory. The static models are concerned with illustrating key elements (refer Figure 6.4), structures and behaviours (refer Figure 6.4 - Figure 6.6) while the dynamic model combines all these into an overarching transformation cycle (Figure 6.7). This cycle illustrates the AE transformation process and incorporates the static models, the Enablers, Disablers, and Characteristics, including the relationships among all these individual parts. The Enablers, Disablers, and Characteristics were also outcomes from the analysis process. Enablers support the development of AE characteristics, qualities intrinsic to an AE that ultimately transform an AE's vision into action, whereas disablers prevent an AE from doing so (refer section 6.7).

Figure 4.5: Excerpt of Analysis Step 6 for Round One Question 2 Responses

Summary of Delphi Round One

The output of the Round One process was the identified themes and constructs. These were used as the input for the second round of the Delphi study. The purpose of Round One was to discover the factors of an AE by engaging a panel of experts to gather observations and insights on the phenomenon. To ensure both ease of use and academic rigour, two pilot tests were conducted with six pilot testers (refer Appendix A3). The feedback from these testers enabled the development of a refined, final questionnaire (refer Appendix A) that was administered online to experts from both academic and industry backgrounds. The questionnaire consisted of three main sections: an introduction to the study, four open-ended questions, a comments question and a section to gather relevant demographic information. The adopted approach was likened to a brainstorming session as it generated rich data from a diverse panel of 30 experts in the field of AE. This data was then analysed (refer Appendix A6) and consolidated into 562 nuggets (refer Appendix A7). These nuggets were then organised using matrices, network structures, and mindmaps (refer Appendix A8) so that AE themes, structures, and relationships could be identified. The main AE themes and their interconnections were then synthesised to propose incipient concepts and models that formed the basis of the second round of the Delphi (refer Appendix B3) so they could be rated by the experts. The concepts, which were defined by the relevant AE themes and categories, were rated in terms of importance, and the models were subjected to a rating process so their validity and applicability could be evaluated. The richness of the experts' responses and the synthesised results indicated that the focus of the study should be expanded beyond a static conceptualisation of an AE to one that included a dynamic illustration of the AE transformation process. This expansion was deemed to be suitable for the future Delphi rounds.

Round Two: Rating of AE Factors and Models

Round Two of the Delphi was informed by the outputs from the previous round of the study. It was developed using the analysed responses from Round One. It had two main objectives: to rate the importance of the identified AE themes by rating their associated factors; and to evaluate the validity and applicability of the developed models.
This section is dedicated to describing the design and development process for the Round Two questionnaire, which was similar to the first round though the content was based on the results of the Round One analysis. Also included, in order, are an explanation of the rating scale development and pilot testing, followed by the introduction of the finalised questionnaire and, to conclude, the analysis of the responses. The aim of the analysis was twofold: to determine the level of consensus reached among the expert panel as measured by the rating results; and to validate and refine the proposed models based on the experts' comments. The output of the Round Two analyses, i.e. the rated AE factors that had not reached a sufficient level of consensus, became the input for the third round of the Delphi study.

Design of Questionnaire for Round Two

The purpose of the questionnaire for Round Two was to assess the factors, which defined the themes, and the proposed AE models that reflected the experts' responses in the previous round. Hence, the questionnaire design focused on the presentation of these factors and models together with the development of suitable rating scales to facilitate the respondents' assessment. The design challenge was to distil the large amount of qualitative data gathered in the first round and return it to the experts in a logical, succinct, repackaged form, i.e. factors and models. Choices had to be made to balance the questionnaire's traceability (reflecting Round One responses), time to complete (a reasonable time frame) and its ability to yield worthy results. These requirements were met and the questionnaire was subjected to pilot testing.

4.3.1.1 Development of Rating Scales

The use of rating scales in social research has become common practice. Closed-ended questions using ordinal scales, where the respondent is asked to select an answer from a set of options, are a popular choice in questionnaires because they measure gradations in opinions, attitudes, and behaviours (Dillman et al., 2014). Usually, the options are limited to a number of points along the scale and bind the respondent to the set of alternatives being offered. Therefore, developing an appropriate rating scale is critically important as it impacts several aspects of data quality, such as factor non-response and measurement error (Reja et al., 2003).

When designing a rating scale there are a number of aspects that require careful thought to ensure reliability and validity. In particular, four aspects to consider are: the number of response options on the scale; the order and balance of those options; the clarity of the option meanings; and the inclusion (or not) of a 'no-opinion' option. Each of these aspects is explained in the following discussion.

First, Krosnick and Presser (2010) suggest that the extent of the scale impacts the thought process respondents use to map their attitude to the response alternatives. Reliability and validity tend to be higher for scales with a moderate number of scale points compared to scales that are especially long or comprise only a few choices (Krosnick & Presser, 2010). In research, a five or seven-point scale is usually considered to be a moderate number of points and is often the preferred option (Krosnick & Fabrigar, 1997). Shorter scales, such as three-point options, tend to limit the amount of information conveyed whereas longer scales, such as nine-point options, generally do not provide more accurate results.
Second, the order and balance of the response options can influence the respondent's choice: among "…choices that are plausible, confirmation-biased thinking will often generate at least a reason or two in favour of most of the alternatives in question" (Krosnick & Presser, 2010, p. 278). Furthermore, unless the researcher believes a bias exists, these choices should be balanced with an equal number of favourable and unfavourable options (Friedman & Amoo, 1999). Third, it is suggested that the response options be carefully labelled. Words rather than numeric labels are recommended to ensure a common interpretation, which the literature suggests can reduce measurement error (Dillman et al., 2014). Lastly, research advocates that a 'no-opinion' option should be routinely included because it reduces the likelihood of respondents giving substantive responses rather than indicating they have no knowledge of the subject (Krosnick et al., 2002). The inclusion of this 'no-opinion' scale option is supported by Turoff (1970) for Delphi studies when a consensus among the experts is not required, such as those to resolve policy issues that benefit from informed debate. Conversely, forcing a choice by not providing a 'no-opinion' response option is acceptable in studies where consensus is sought from a group of subject experts since these experts will have a judgment (Friedman & Amoo, 1999).

Generally, there is a paucity of discussion in the Delphi literature on rating scales and their application. In IS Delphi research, for example, the focus tends to be on the reporting of results rather than details of the study design (Day & Bobeva, 2005). However, management Delphi literature does indicate that "…the most important in the Likert point scale selection is the aim of the study" (Giannarou & Zervas, 2014, p. 68). Furthermore, the authors reviewed thirty-two Delphi studies in management and business and found that fifteen of those studies employed a five-point Likert scale, of which most were used to investigate the level of agreement.

Given that the purpose of this Delphi study is to investigate the level of agreement, the aforementioned design recommendations were followed in the second round. To rate the level of agreement for the AE factors and models, five-point rating scales were employed. For the AE factors the following five response options were provided: 'not important', 'slightly important', 'moderately important', 'important' and 'very important'. The five response options provided for the AE models were: 'strongly disagree', 'disagree', 'undecided', 'agree', and 'strongly agree'. Notably, these rating scales were balanced in terms of two unfavourable and two favourable response choices separated by a relatively neutral mid-point option, with easy-to-interpret word labels attached to each. Hence, all the design choices were informed by the recommendations gleaned from the Delphi literature and discussed in this section.

4.3.1.2 Pilot Testing of Round Two

Continuing the Delphi questionnaire design practice implemented in Round One (Gordon, 2003; Novakowski & Wellar, 2008), a similar pilot test was carried out for Round Two. A draft version of the questionnaire was sent via email to five experts, two academics and three practitioners, who had experience with AE and questionnaire design.
The email message also contained information about the objectives of Round Two and which aspects of the questionnaire in particular the expert testers should focus on for the review (refer Appendix B1). The objectives for Round Two were: to rate the Key Themes, Enablers, Disablers and Characteristics of an AE that emerged from Round One; and to evaluate the proposed models of an AE. Similar to the Round One pilot test, the testers were asked to comment on any aspect of the questionnaire such as its purpose, form, content, question clarity, flow, ease of use and duration.

A well designed second round questionnaire was a prerequisite for obtaining valid ratings of the factors and models being assessed and generated the inputs for the next round of the Delphi. Once again, the feedback was considered and the questionnaire modified. The revised questionnaire was then used for Round Two of the Delphi study. The pilot testing process concluded with the following improvements being made to the Round Two questionnaire.
• Rephrasing of questions to clarify their meaning and eliminate ambiguity.
• Elimination of domain centric language and acronyms.
• Change of rating scale to align with question.
• Addition of clear descriptions, explanations and instructions to improve understanding.
• Design change to make all questions compulsory.
• Increase in the time-to-complete from 20 to 30 minutes.
• Addition of a questionnaire progress bar.
• Improvement of formatting and layout to make the questionnaire easy to use.
• Elimination of grammatical and other errors.
Overall, the feedback from the pilot testers was that there were no significant design flaws with the questionnaire. However, the overarching comment for improvement was to eliminate the ambiguity created by poorly phrased questions and explanations.

4.3.1.3 Final Design of Round Two

The design of the Round Two questionnaire was finalised after completion of the pilot test. The questionnaire had 24 overarching questions, 19 of which contained a total of 112 factors to be rated (refer Appendix B3). Schmidt (1997) noted that there is no recommended number of factors to be ranked: "The resulting list may contain any number of issues" (p. 770). For instance, many Delphi studies have contained 50 factors whereas others have contained considerably more. However, ranking a large number of factors is time consuming and can cause panel attrition (Custer, Scarcella & Stewart, 1999). In this research a total of 112 factors was considered optimal given the reasonable estimated (and tested) time-to-complete and the desire to accurately reflect, in terms of traceability, the insights gleaned from the experts' responses in Round One.

The questionnaire consisted of eleven sections, including an introduction and general demographic information sections (refer Appendix B3). It was carefully designed to guide participants through a number of sequential steps, represented by sections 2 - 10, which incrementally built the proposed AE models in a mindful way. Similar to Round One, the first section (Introduction) outlined the objectives of the questionnaire, which were:
1. Rating of Key Themes that emerged from the responses of Round One.
2. Rating of Key Enablers, Disablers, and Characteristics of an Adaptive Enterprise.
3. Evaluation of models of an Adaptive Enterprise.
It also included the estimated time-to-complete, information on answering the questions and how to obtain the results, and the consent form.
Section two focused on the components of an AE that had emerged from Round One: strategy, organisation, process, and information (SOPI). Initially, the Key Components Model was presented (refer section 6.2) together with a short explanation of its development and purpose. This section of the questionnaire, together with the Key Components Model, was the first step in incrementally building the AE models. The expert participants were then asked to rate the importance of a series of factors categorised according to each of the SOPI components. Experts were also given the opportunity to provide any other comments.

Section three, the Vision to Action Model, had a unique purpose: to provide information and, together with the Key Components Model, lay the foundation for the subsequent proposed models. It introduced the well recognised (in academia and industry) Vision to Action Model along with a brief description of its structure and behaviour. The expert participants were informed that this model was the foundation for all the following proposed models.

Building on the Vision to Action Model (section 3), section 4 was the next step of the AE model building process and presented the Structural Elements Model (refer section 6.3) and its description. The expert participants were asked to indicate their level of agreement with the proposition that the structural elements helped an enterprise to be adaptive. In addition, the experts were invited to provide suggestions on how the model could be improved.

Likewise, the next section, section five, presented the Behavioural Flows Model (refer section 6.4) and its description. Experts were asked to indicate their level of agreement with the proposition that the behavioural flows helped an enterprise to be adaptive. Again, they were invited to provide suggestions on how the model could be improved.

Sections six, seven, and eight were centred on the Enablers, Disablers and Characteristics of an AE respectively (refer sections 6.7.1 to 6.7.3) that had emerged from Round One. Participants were given a short explanation of how Enablers allow an enterprise to perform adaptive activities that build AE Characteristics while Disablers inhibit their growth. The experts were then asked to rate the importance of a number of Enablers and Disablers categorised according to a series of relevant themes. The proposed Characteristics were rated similarly except that they were presented as a list rather than in categories.

Section nine built on the concepts and models introduced and appraised in the previous sections. It presented the Adaptive Zone Model (refer section 6.5) and an explanation of what it represented. Participants were familiarised with the idea that an AE needs to be balanced in terms of the SOPI components (appraised in section 2) to be adaptive. That is, the more balanced the components, the more adaptive the enterprise. To verify this idea the expert participants were asked to rate each SOPI component on its associated deliberate to emergent continuum.

In section ten, all of the concepts and models introduced previously in the questionnaire were amalgamated to form the Adaptive Enterprise Transformation Cycle (refer section 6.6), which was presented together with a succinct explanation of its structure and behaviour. Essentially, Enablers help an AE to become more adaptive, which in turn builds and grows AE Characteristics, while Disablers inhibit their development.
AE Characteristics enable an enterprise to progress and remain in the adaptive zone (refer Figure 6.6), which enables it to enhance Enablers and remediate Disablers. The expert participants were asked three questions in this section. They were asked to rate their level of agreement with, first, the proposition represented by the Adaptive Enterprise Transformation Cycle and, second, the relationships between its components. Third, participants were asked to suggest how the Adaptive Enterprise Transformation Cycle could be improved.

The final section, section 11, requested demographic information and was identical to that of Round One (refer section 4.2.1.2), including an acknowledgment of the expert participants' contribution to the study.

Distribution of Responses

Schmidt (1997) purports that response rates for each round should be reported for a number of reasons; as mentioned previously in Round One, they can also be an indication of waning interest on the part of the participants. In this section the response rates for Round Two of the Delphi are presented, followed by the background information of the Round Two expert participants.

4.3.2.1 Response Rates

An email invitation to complete the second round of the Delphi study was sent to the 30 experts who participated in Round One. Similar to the Round One invitation, it contained useful information about the second round of the study, its purpose, and a uniquely assigned URL to access the questionnaire. The Participant Information Sheet, which outlined the ethical considerations, was attached for their information (refer Appendix A4). Two weeks after the initial invitation a reminder email was sent to those yet to complete the questionnaire. This was repeated two weeks later and the second round of the Delphi was closed after nine weeks. This resulted in a 93% completion rate: 28 of the 30 invited experts completed the Round Two questionnaire. Given the extremely low attrition rate for Round Two, the response rate details for this round were similar to those of Round One (refer Table 4.6). The 28 experts who completed the questionnaire were somewhat evenly spread across the 4 sub-panel groups, i.e. 7 academics, 7 pracademics, 7 senior management and 7 management consultants.

Panel: Sample Pool / Response Count / Response Rate / % of Total Responses
Academic: 16 / 14 / 87.5% / 50%
Industry: 14 / 14 / 100% / 50%
Total: 30 / 28 / 93.3% / 100%
Table 4.6: Delphi Round Two Response Rates

The two experts who failed to participate in the second round were from the academic panel. Consequently, the background statistics and information of the experts did not change significantly from the first round. Accounting for these two drop-outs, the experts came from the organisation types shown in Table 4.7.

Primary Organisation Type: Response Count (% of Total Responses); count of experts naming it as a Secondary Organisation Type
Airlines & Aerospace: 1 (3.57%)
Aquaculture: 1 (3.57%)
Business Services Consultant: 5 (17.85%); secondary 1
Education, Tertiary Institutions: 12 (42.85%); secondary 4
Export, Import: secondary 1
Healthcare: 1 (3.57%); secondary 1
Information Communication Technology: 4 (14.28%); secondary 2
Manufacturing: 2 (7.14%); secondary 1
Retail Trade: 1 (3.57%); secondary 1
Transport Storage: 1 (3.57%); secondary 3
Utilities, Energy, Extraction: secondary 1
Overall: 28 (100%)
Table 4.7: Delphi Round Two Organisation Types

Also, the Round Two organisational headcount was very similar to Round One except that the 'Above 1000' count was reduced to 13, now accounting for 46.42% of the total sample pool, as shown in Table 4.8, which reflected the loss of the 2 academic experts.
Organisation Headcount: Response Count (% of Total Responses)
1-5: 5 (17.85%)
6-9: 3 (10.71%)
20-49: 2 (7.14%)
50-59: 0 (0.0%)
100-499: 4 (14.28%)
500-999: 1 (3.57%)
Above 1000: 13 (46.42%)
Overall: 28 (100%)
Table 4.8: Delphi Round Two Organisational Headcount

As shown in Table 4.7 and Table 4.8, the 'Education, Tertiary Institutions' organisational type and, correspondingly, the 'Above 1000' organisational headcount were overrepresented. However, the aim of this study was to gather the experts' insights on AE, which were based on their accumulated experience and not limited to their current occupations. This is indicated in Table 4.7, which shows the experts' 'Secondary Organisation Type' when this information was offered, and supports the assertion that the overrepresentation did not compromise the quality of information and depth of knowledge.

Analysis of the Responses

The correct analysis of the results from each round of the Delphi is critical. Accuracy and rigour must be maintained for each analysis because the validity of the overall Delphi study result is contingent on them. Three types of questions were used in Round Two that resulted in three types of responses: quantitative responses from the rating questions; qualitative responses about the proposed models; and comments regarding the various aspects and content of the questionnaire. These responses were analysed according to the analysis process described in the next section; the outputs from the process, being the actual results, are discussed in the remaining sections of this chapter.

4.3.3.1 Round Two analysis process

Schmidt (1997) developed a generally accepted set of principles to guide the quantitative analysis of non-parametric Delphi rankings (Okoli & Pawlowski, 2004) such as those used in this study. However, in Round Two qualitative responses, in the form of feedback and comments, were also collected. Hence, a structured analysis approach (Reefke, 2012) was applied to manage the analysis process that incorporated Schmidt's (1997) principles along with evaluation of the qualitative comments (refer Figure 4.6). First, the data was downloaded and cleaned by checking it for faults such as duplicate entries or missing data points. Second, logical values were assigned to the rating options and then the mean was calculated for each question factor to facilitate the statistical analysis. Third, the level of consensus reached for each factor was established through statistical analysis. Finally, factors that had not reached an acceptable level of consensus were identified for re-evaluation in the third round of the Delphi.

4.3.3.2 Round Two analysis process – Collection and Ranking

The responses were downloaded from the online questionnaire website and checked for completeness. Of the 30 experts invited to participate in Round Two, 28 completed all parts of the questionnaire, albeit at different times over an eight-week period. Also, two of the experts preferred to complete the questionnaire with the researcher on a one-to-one basis; one was conducted over the telephone and the other face-to-face. Altogether 28 full datasets were included in the analysis process.

Figure 4.6: Outline of the Analysis Process for Delphi Round Two

Three types of rating scales were used in the second round, namely: importance, hindrance and agreement. Each was in the form of a five-point Likert scale that allowed them to be statistically analysed in a uniform way.
To start with, the rating scale results were assigned numerical values in ascending order from one to five, one being the lowest level of importance or agreement and five being the highest, as shown in Table 4.9. Then, utilising these assigned numeric values, the mean ratings and standard deviations of the responses were calculated. The mean ratings were used to compare the factors by ranking them according to their level of importance/agreement and the standard deviation indicated their dispersion in terms of how tightly they were clustered. These first two steps in the analysis process enabled the overall assessment of the rating responses through such means as rankings, identification of similarities and differences among factors and the detection of outliers.

Value: 1 / 2 / 3 / 4 / 5
Importance Scale: Not Important / Slightly Important / Moderately Important / Important / Very Important
Hindrance Scale: No Hindrance / Little Hindrance / Moderate Hindrance / Hinders / Greatly Hinders
Agreement Scale: Strongly Disagree / Disagree / Undecided / Agree / Strongly Agree
Table 4.9: Rating Scale Values for Delphi Study

4.3.3.3 Determining Consensus

The goal of the Round Two analysis was to identify the factors that had reached an acceptable level of consensus among the Delphi respondents. In Delphi studies, the use of descriptive statistics, namely central tendency and dispersion from the mean, is generally accepted as a valid way of measuring consensus (Hsu & Sanford, 2007; Diamond et al., 2014; Forster & von der Gracht, 2014). Therefore, in this study the rating means and standard deviation (SD) for each factor were examined in order to determine consensus. Yet in terms of criteria to determine the level of consensus, there is little agreement on how consensus is operationalised and on the recommended thresholds. Diamond et al. (2014) note that, "Although consensus generally is felt to be of primary importance to the Delphi process, definitions of consensus vary widely and are poorly reported; improved criteria for reporting of methods of Delphi studies are required" (p. 401). The authors reviewed 100 Delphi studies published between 2000 and 2009 and found that in studies where consensus was defined as a percentage (agreement) or proportion (ratings within a range) the reported threshold for its determination varied from 50% to 97%, with the median being 75%. These findings support the idea that a consensus is defined when a certain percentage of responses fall within a predefined range (Millar et al., 2006). Examples of this predefined range are the proposed criterion that 80% of the responses fall within two options on a seven-point Likert scale (Ulschak, 1983) and, likewise, Mitchell's (1991) suggested threshold of 75%. An alternative or complementary criterion often used is the percentage of respondents who agreed with one of the scale options; Loughlin and Moore (1979) purport that an appropriate threshold for this consensus criterion is 51% on a five-point Likert scale. Taking into account the aforementioned recommendations and practices, the criteria and corresponding thresholds adopted to define consensus in this Delphi were: a) 51% of the respondents agreed with one of the scale options; and b) 80% of the responses fell within two scale options.
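To make the mechanics of these measures and criteria concrete, the short sketch below is an illustration only, not the instrument or code actually used in the study. It shows how a single factor's five-point ratings, coded 1 to 5 as in Table 4.9, could be summarised and checked against criteria a) and b); the function names and the example data are hypothetical.

```python
from statistics import mean, pstdev
from collections import Counter

def summarise_factor(ratings):
    """Summarise one factor's five-point ratings (coded 1-5 as in Table 4.9).

    Returns the mean rating, the standard deviation (population formula; the
    thesis does not state which formula was used), the share of responses in
    the single most chosen option, and the largest share falling within two
    adjacent options.
    """
    counts = Counter(ratings)
    n = len(ratings)
    top_share = max(counts.values()) / n                      # input to criterion a)
    adjacent_share = max(                                      # input to criterion b)
        (counts.get(k, 0) + counts.get(k + 1, 0)) / n for k in range(1, 5)
    )
    return mean(ratings), pstdev(ratings), top_share, adjacent_share

def meets_consensus(ratings, single=0.51, adjacent=0.80):
    """Criteria adopted in this Delphi: a) 51% of respondents chose the same
    option and b) 80% of responses fall within two adjacent options."""
    _, _, top_share, adjacent_share = summarise_factor(ratings)
    return top_share >= single and adjacent_share >= adjacent

# Hypothetical example: 28 ratings for one factor (19 x 'Very Important',
# 6 x 'Important', 3 x 'Moderately Important'); prints the summary and True.
example = [5] * 19 + [4] * 6 + [3] * 3
print(summarise_factor(example), meets_consensus(example))
```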
These criteria were used to assess consensus in both Round Two and Round Three of the Delphi study, together with a judgement assessment by the researcher to confirm that the only factors removed were those that indicated an appropriate level of agreement. The following section describes the analysis activities performed in steps 3 and 4 of the overall analysis process. These steps facilitate determining consensus and identifying the factors to be extracted for re-rating in the third and final Delphi round.

4.3.3.4 Delphi Round Two – Consensus and Findings

The second round questionnaire consisted of 18 rating questions using a five-point rating scale to assess 112 factors in terms of level of importance or agreement. Two overall measures were used to determine if a consensus was reached for each factor: the mean rating and the standard deviation. These measures were applied in three different ways to determine consensus, as shown in Figure 4.7.
1. When the mean rating for the factor was greater than 4.5 (converging on option 5), a high level of agreement had been reached that this factor was very important. No further assessment was therefore required. There were 8 factors that met this criterion in the second round dataset.
2. When 51% of the responses were within the same rating option and also 80% of the responses fell within two joining rating options. There were 32 such factors in the second round dataset.
3. When 51% of the responses were within the same rating option and ≈ 80% of the responses fell within two joining rating options, or vice versa, a judgement assessment was made by the researcher to establish the level of importance to both the respondents and the overall Delphi. There were 24 such factors in the second round dataset.

Figure 4.7: Process Map of the Determine Consensus Process Including the Criteria for the Delphi Study

In order to demonstrate the analysis process to determine consensus outlined above, the responses to question 5, which rated the importance of the 'Information' theme factors, will be assessed using the predefined criteria (refer factors 29 - 36, Appendix B4). There were eight factors to be ranked for this question. Factor 1 had a high mean rating of 4.64, which met the first criterion and indicated a sufficiently high level of agreement that this factor was 'very important'. Given this high mean, it follows that the SD was low, i.e. 0.56, and 96% of the responses fell within two joining rating options while 68% fell within the same rating option, i.e. 'very important'. Hence, further insights from another round of testing were unlikely, so this factor was removed from the third round. This was the underlying logic that was applied to all factors that were removed. Continuing with the analysis of the 'Information' theme, factors 3, 4 and 6 were also removed because they met the third consensus criterion. Each had a high mean rating of over 4 and over 80% of responses in two joining rating options, while all were close to 51% of responses falling within the same rating option. The remaining factors, 2, 5, 7 and 8, did not meet any of the three criteria used to measure consensus and, although factors 2 and 5 had a mean rating of over 4, they were retained for the third round.
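The three-way application of the measures described above (and mapped in Figure 4.7) can likewise be expressed as simple decision logic. The sketch below is only illustrative: the 4.5 mean threshold and the 51%/80% criteria come from the text, whereas the numeric tolerance used to flag 'approximately met' cases for the researcher's judgement assessment is an assumption.

```python
def consensus_decision(mean_rating, top_share, adjacent_share,
                       single=0.51, adjacent=0.80, tolerance=0.05):
    """Illustrative decision logic following Figure 4.7 (not the study's actual code).

    Returns 'consensus' if the factor can be removed from further rounds,
    'judgement' if it is borderline and needs the researcher's assessment,
    and 'rerate' if it should be re-rated in Round Three.
    """
    # 1. Mean converging on the top option: very important, no further assessment.
    if mean_rating > 4.5:
        return "consensus"
    # 2. Both percentage criteria met outright.
    if top_share >= single and adjacent_share >= adjacent:
        return "consensus"
    # 3. One criterion met and the other approximately met (within an assumed
    #    tolerance): flagged for a judgement assessment by the researcher.
    near_single = top_share >= single - tolerance
    near_adjacent = adjacent_share >= adjacent - tolerance
    if (top_share >= single and near_adjacent) or (adjacent_share >= adjacent and near_single):
        return "judgement"
    return "rerate"

# Hypothetical example: a factor with a mean of 4.2, 55% of responses in one
# option and 77% in two adjacent options would be flagged for judgement.
print(consensus_decision(4.2, 0.55, 0.77))   # -> 'judgement'
```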
With regard to factor 2 in particular, which was aimed at assessing the importance of having "integrated enterprise-wide information systems and data that is accessible by all employees", the response data indicated (only 68% of the responses fell within two joining rating options) that the phrase "accessible by all employees" was possibly misinterpreted, so the question was rephrased for the third round to eliminate any ambiguity. In summary, of the 8 'Information' theme factors only 4 were assessed as having reached a sufficient level of consensus, which required the remaining 4 to be re-rated in Round Three. This 50% re-rating outcome for this theme was a little higher than the overall re-rating percentage of 43%. A similar analysis was conducted for all the other factors. Notably, 11 out of the 13 Disabler factors (86%) did not achieve a sufficient level of consensus. The probable reason for this was the negative form of the questions and rating scale being interpreted as two forms of negation, making the questions ambiguous. In total 112 Round Two factors and model items were assessed, of which 104 were factors and 8 were model items, resulting in 48 factors not reaching a sufficient level of consensus and therefore being included in Round Three for re-rating.

Validation and Refinement of Models

Round Two also required the expert participants to evaluate and validate the models developed from the insights gained in the first round (refer section 4.2.1.2). The Round Two questionnaire contained the five proposed AE models. Each model was described in terms of its purpose, structure and behaviour. The Key Components Model was predominantly built on seminal literature and theory, hence only the factors and components (SOPI) of the model were validated. The remaining four models, namely the Structural Elements Model, Behavioural Flows Model, Adaptive Zone Model, and Adaptive Enterprise Transformation Cycle, were assessed in terms of their validity and applicability. Two types of responses were sought for these models: compulsory rating scales to assess the models' validity and applicability; and optional feedback comments including suggestions for improvement and approval. In addition, the questionnaire design was such that, in terms of the models, the experts were taken through a stepwise, incremental model development process starting with the foundational concepts depicted in the Key Components Model. This model was a fundamental building block for the Structural Elements Model, which was developed further to create the Behavioural Flows Model, which in turn spawned the Adaptive Zone Model; finally, all the models were incorporated to form the principal components of the Adaptive Enterprise Transformation Cycle. In this section, the respondents' evaluations of the five AE models developed and tested through the Delphi study are presented, including the analysis of the quantitative ratings and qualitative comments.

4.3.4.1 Validation of the Key Components Model

The Key Components Model that integrated the foundational concepts, unlike the four other models, was validated through rating theme factors for each of the model's components: Strategy, Organisation, Process, and Information (SOPI) (refer section 6.2).

4.3.4.2 Quantitative Validation of the Structural Elements Model

The foundational Key Components Model was used as a building block for the Structural Elements Model.
The Structural Elements Model was evaluated and validated through two mechanisms, structured as two questions: first, the experts' level of agreement on a five-point scale, which was analysed using the statistical measures outlined in section 4.3.3.3, and second, the qualitative comments gathered from them about how the model could be improved. The first question asked whether the structural elements, as illustrated in the model and explained in the description, could help an enterprise to be adaptive. The mean rating was 4.11, with an SD of 0.63, indicating that it was situated between 'Agree' and 'Strongly Agree'. Of the 28 respondents, 17 chose the 'Agree' option and 7 selected 'Strongly Agree', which represented 86% choosing one of these options, while 4 indicated they were 'Undecided' (refer Table 4.10).

Delphi Round 2 Question: Mean Rating / Standard Deviation / Highest Category % / Two Joining Categories %
Structural Elements Model Applicability: 4.11 / 0.63 / 60.71% / 85.71%
Table 4.10: Round Two Statistics of the AE Structural Elements Model

None of the experts disagreed with the proposed model, resulting in a high level of agreement and the threshold for consensus being met (refer section 4.3.3.4). Therefore, this question was removed from the third round.

4.3.4.3 Qualitative Validation of the Structural Elements Model

The second question invited the experts to comment on the Structural Elements Model and suggest improvements. This question yielded 19 comments and of those approximately 60% commented on the inclusion of connections among the elements; a typical comment in this regard was, "I think this needs more detailed descriptions on the interactions within the model and its elements." Upon reflection, this high number of comments on the aspect of interaction was a function of the questionnaire design which, as mentioned, incrementally built the models through a series of iterations to facilitate understanding them. The design purpose of the Structural Elements Model was to illustrate the salient structural elements of an AE. It is a static model and as such does not show interactions among the elements. It is worth noting that these 'interactions' were embedded and present in the following model of the Round Two questionnaire, i.e. the Behavioural Flows Model. Still, comments on other aspects of the model were useful and are listed in Table 4.11. Comment no. 1, although succinct, was thoughtful, indicating that the expert could clearly understand the premise of the model, including the structural elements and their roles, which clearly validated it. It also contributed to the development of the model in terms of how SOPI is conceptualised at different levels of the enterprise.

Number Experts Comments
1 "My gut feeling tells me that each component of SOPI not all equally weighted at each organisational level."
2 "All three levels are also encompassed by another SOPI?"
3 "It does not show the relationship to the external environment at different levels."
4 "Unclear how the term 'interwoven' is being applied. The image is indicating that the SOPI at each organisational level exists independently of the others; however, in reality there will be multiple overlapping instances of SOPI."
5 "Not sure about Strategy - tactical - operational. Do we need "tactical"? Feels like an unnecessary level - sort of old-fashioned management layer."
Table 4.11: Round Two Verbatim Comments on the AE Structural Elements Model
Comment no. 2: Like the previous comment, this also indicated a clear understanding of the model and supported its validity. It reinforced the underlying premise that the SOPI elements were present in an AE at both the macro and micro level even though this was not explicitly shown in the model.

Comment no. 3 was interesting in that it did not address the model's structural elements per se. Rather, it highlighted the dimension of context as an essential aspect of an AE and the importance of capturing and indicating this in proposed models. In response, context was implicitly embedded in the behavioural model that followed. However, this comment reinforced the requirement for this dimension to be explicitly addressed in one form or another.

Comment no. 4 targeted the design of the model and emphasised that SOPI in reality was indeed a cybernetic system. Here again, this idea was implicit in the Structural Elements Model and reinforced in the following Behavioural Flows Model. Essentially, this comment not only supported this as a fundamental aspect of an AE but also emphasised that it was an important element of AE models.

Comment no. 5 questioned the relevance of three organisational levels in the context of a 21st century AE, specifically the tactical level. The existence of a middle management level is well accepted in the IS discipline (Gorry et al., 1989) and likewise the term tactical is fundamental to Anthony's (1965) seminal organisational model, the Anthony Triangle. The term may be questioned as 'old-fashioned' but in reality this middle level exists in most organisations to achieve specific management and control objectives.

4.3.4.4 Refinement of the Structural Elements Model

In summary, based on the quantitative and qualitative results from Round Two, a consensus on the applicability of both the Key Components Model and the Structural Elements Model was confirmed (refer section 4.3.4.2 and section 4.3.4.3). The quantitative assessment revealed a high level of agreement that each organisational level, with its Key Components of SOPI, could help an enterprise to be adaptive. Likewise, the experts' comments mainly supported this premise and thus confirmed the Structural Elements Model's validity. Their suggestions for improvements (comments) were either a) already present in the model, b) present in the following Behavioural Flows Model, c) beyond the context of the model, or d) beyond the scope of the study. The final Structural Elements Model is presented in section 6.3.

4.3.4.5 Quantitative Validation of the Behavioural Flows Model

The Behavioural Flows Model was also evaluated and validated through the aforementioned mechanisms of a quantitative and a qualitative question. The quantitative question assessed the model's applicability and the relationships among its elements in terms of whether the deliberate and emergent behavioural flows help an enterprise to be adaptive. The qualitative question garnered suggestions on how the model could be improved. The results of the first question are presented in Table 4.12.

Delphi Round 2 Question: Mean Rating / Standard Deviation / Highest Category % / Two Joining Categories %
Behavioural Flows Model Applicability: 4.36 / 0.56 / 57.14% / 96.43%
Table 4.12: Round Two Statistics of the AE Behavioural Flows Model

The mean rating was 4.36, with an SD of 0.56, indicating that it was situated between 'Agree' and 'Strongly Agree'.
Sixteen experts chose the 'Agree' option and 11 selected 'Strongly Agree', which represented a high 96% choosing either of these options. The remaining expert indicated that s(he) was 'Undecided'. Hence, a high level of agreement was achieved and the threshold for consensus was well met (refer section 4.3.3.4), resulting in this question being removed from Round Three of the Delphi. There was almost unanimous agreement among the experts for the proposed Behavioural Flows Model. The level of agreement was 10% higher than the 86% achieved for the Structural Elements Model. A plausible explanation for this is that the Behavioural Flows Model immediately addressed over 60% of the comments/suggestions for improvement offered by the experts for the previous Structural Elements Model.

4.3.4.6 Qualitative Validation of the Behavioural Flows Model

The second question asked for improvement suggestions and it yielded 15 comments; however, 5 of these were general observations about organisational behaviour and not specific to the question. Of the 10 model-specific comments shown in the table below (refer Table 4.13), 3 were a positive reinforcement of the model's design while the remaining 7 indicated there could be some potential interpretation issues. Comment nos. 1 - 3 showed that some of the experts were very positive about the model and its design. They indicated that they understood the proposed concept and liked its graphical conceptualisation in the model. Comment nos. 4, 8 and 9 suggested more detail, such as arrows, to indicate the directional flows. This suggestion was also made by the pilot testers and thoroughly considered, but it was decided it would be counterproductive because the behavioural flows illustrated in the model represented simultaneous, internal and external, vertical and horizontal flows. Attempting to depict these with multiple directional indicators, such as arrows, would be conceptually confusing and compromise the parsimonious model design. Comment no. 5 questioned the cogency of the model in general and particularly the use of the terms 'deliberate and emergent' rather than 'top down and bottom up'. Granted, both terms have some common meaning in the context of organisational theory. However, given the Key Components of an AE defined in this research, i.e. SOPI (strategy, organisation, process and information), the term 'deliberate and emergent' is more apt and therefore applicable because it is meaningful in each of these theoretical domains. Comment nos. 6, 7 and 10 indicated that some experts had various degrees of difficulty in comprehending the model and emphasised the need for possibly a more approachable description and illustration.

No. Experts Comments
1 "Stronger connotations of 'change' than the prior model; hence, closer to 'adaptive' (When I look at the model I look at the lines as reflecting 'flow' ... on the 'emergent' side I therefore see a 'vision-based' flow down from the 'vision' box to the 'strategic' box. If I look at this in terms of 'white space' the white space indicates a 'downwards' direction from vision to strategic on the 'deliberate' side ... which is how I infer this is supposed to be perceived?)"
2 "This is a nice simple conceptual diagram, appears to clarify the concept well."
3 "Great!"
4 "Put some arrows to make them flow?"
5 "I like the model, however it appears to me that only the top down (deliberate) strategies affect the actions. Also the bottom up (emergent) strategies, affect the vision. I am thinking both the deliberate and emergent strategies should be affecting first the actions and then the vision."
6 "I'm afraid this model doesn't make too much sense to me. It is visually appealing, but I find it hard to interpret. It would appear that the main point is that emergent flows from each level inform vision, while deliberate flows inform action? Perhaps it is the terms that have thrown me off, but 'top down' and 'bottom up' are usually used in terms of organisation structure. Top down being information flows from higher, more strategic levels to lower, more operational levels, and bottom up being the reverse - flows from the coal face back to the tacticians and strategisers. To me, these are not necessarily synonymous with the 'deliberate' and 'emergent' terms that have been used. Also, I would have thought that there might be a difference in information, control and learning flows? Ideally, you'd want a model that can identify which sets of flows are most important to enable adaptability - presumably they aren't all equally important."
7 "Emergent may be closer to 'action' while deliberate closer to 'vision'?"
8 "More flows between the levels."
9 "Another blue line from "vision" down to "action" (i.e. indicating top to bottom and bottom to top) rather than just the lines as they are."
10 "Inclusion of a more detailed description of the workings of the model so that it becomes easier to comprehend. While I think that the flows are important, I don't think that flows alone can create an adaptive enterprise. What about control mechanisms or enterprise functions to steer and utilise the flows? Maybe these should be included in the model somehow."
Table 4.13: Round Two Verbatim Comments on the Behavioural Flows Model

Considering this, the comments were partially aligned with the previous suggestions, for instance for more explicit indications of flow direction. Again, including more graphical detail in the model would paradoxically detract from rather than enhance the recipient's ability to comprehend it. Also, the models in the questionnaire were high-level models. Much lower order detail, such as that mentioned in comment no. 10, "…control mechanisms or enterprise functions to steer and utilise the flows…", although not explicit, was implicit in the various model aspects and elements.

4.3.4.7 Refinement of the Behavioural Flows Model

In summary, the almost unanimous quantitative validation of the model indicated that its depiction did not require any changes (refer section 4.3.4.5 and section 4.3.4.6) and mitigated, to some extent, the few less favourable qualitative critiques. Furthermore, analysis of the comments revealed that much of the criticism was attributable to interpretation rather than the model logic (refer section 4.3.4.6). The Behavioural Flows Model attempted to incorporate a number of complex concepts into a parsimonious model. Describing and designing a model to achieve this will, by its very nature, engender slightly different interpretations based on the individual's perspective. In this case, the validity of the model was measured by the overall agreement from the respondents, which was almost undisputed. Hence, no change was made to the Behavioural Flows Model or its description, which is presented in section 6.4.

4.3.4.8 Quantitative Validation of the Adaptive Zone Model

The Adaptive Zone Model was validated through four quantitative questions.
The individual questions were designed to test the proposed equilibrium point for an AE on the deliberate to emergent continuum specific to each of the four Key Components, i.e. SOPI. The experts were asked to indicate their level of agreement on a five-point scale with the middle point '3' being the equilibrium point. The questions asked, in turn, which approach was best for an AE when planning, developing and managing each SOPI component. Statistical measures, as discussed in section 4.3.3.3, were used for the quantitative analysis of the responses, the results of which can be seen in Table 4.14.

The mean rating for the Strategy component was 3.07, with an SD of 0.47, indicating that it was situated on the equilibrium 'Blended' option. Twenty-two experts chose this 'Blended' option, 4 chose the point between 'Blended' and 'Emergent' and 2 chose the point between 'Blended' and 'Deliberate'. This indicated a 79% level of consensus for the equilibrium point, which was a 'Blended' approach to Strategy, and 93% of the responses were within two adjacent rating options, therefore exceeding both statistical thresholds for consensus.

The mean rating for the Organisation component was 2.96, with an SD of 0.33, indicating that it was situated on the equilibrium 'Flexible' option. Twenty-five experts chose this equilibrium option, 1 chose the point between 'Flexible' and 'Chaotic' while 2 chose the point between 'Flexible' and 'Controlled'. This indicated an 89% level of consensus for the equilibrium point, which was a 'Flexible' approach to Organisation, and 96% of the responses were within two adjacent rating options, resulting in consensus.

Delphi Round 2 Question: Mean Rating / Standard Deviation / Highest Category % / Two Joining Categories %
Adaptive Zone Strategy Component: 3.07 / 0.47 / 78.57% / 92.86%
Adaptive Zone Organisation Component: 2.96 / 0.33 / 89.29% / 96.43%
Adaptive Zone Process Component: 2.71 / 0.85 / 64.29% / 82.14%
Adaptive Zone Information Component: 3.07 / 0.38 / 85.71% / 96.43%
Table 4.14: Round Two Statistics of the Adaptive Zone Model

The mean rating for the Process component was 2.71, with an SD of 0.85, indicating that it was situated near the equilibrium 'Loosely Coupled' option. However, the SD indicated that the responses were markedly more dispersed compared to the other SOPI components. Eighteen experts chose the equilibrium option, 1 chose the point between 'Loosely Coupled' and 'Decoupled' while 1 chose 'Decoupled'. Furthermore, 5 chose the point between the equilibrium option and 'Tightly Coupled' while 3 chose 'Tightly Coupled'. This indicated a 64% level of consensus for the equilibrium point, which was the 'Loosely Coupled' approach to Process, and 82% of the responses were within two adjacent rating options, meeting the consensus criteria.

The mean rating for the Information component was 3.07, with an SD of 0.38, indicating that it was on the equilibrium 'Dynamic' option. Twenty-four experts chose the equilibrium option, 3 chose the point between 'Dynamic' and 'Ad Hoc' and 1 chose the point between 'Dynamic' and 'Static'. This indicated an 86% level of consensus for the equilibrium point, which was the 'Dynamic' approach to Information, and 96% of the responses were within two adjacent rating options, resulting in consensus.
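As a quick arithmetic check of the figures reported above, the response counts given for the Strategy component can be recomputed under an assumed coding of the continuum from 1 (fully deliberate) to 5 (fully emergent); the coding itself is an assumption for illustration, since only the option labels are reported here.

```python
from statistics import mean, pstdev

# Assumed coding of the Strategy continuum: 1 = Deliberate, 2 = between,
# 3 = Blended (equilibrium), 4 = between, 5 = Emergent.
# Counts reported in the text: 2 between Deliberate and Blended,
# 22 Blended, 4 between Blended and Emergent (n = 28).
strategy_ratings = [2] * 2 + [3] * 22 + [4] * 4

print(round(mean(strategy_ratings), 2))     # 3.07, matching Table 4.14
print(round(pstdev(strategy_ratings), 2))   # ~0.46 (population SD); the reported 0.47
                                            # presumably uses the sample formula
print(round(22 / 28, 4))                    # 0.7857 -> 78.57% highest category
print(round((22 + 4) / 28, 4))              # 0.9286 -> 92.86% two joining categories
```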
4.3.4.9 Qualitative Validation of the Adaptive Zone Model

Each of the four quantitative questions was followed by a comments section that gave the experts an opportunity to make remarks about any aspect of the particular Adaptive Zone key component. These sections yielded 7 comments, shown in Table 4.15, which overwhelmingly supported the adaptive zone proposition. However, a couple of comments of note, comments 2 and 3, endorsed the importance of context when considering the Adaptive Zone of an AE. Context was explicitly mentioned in the description, hence these comments validate the description in particular.

No. Experts Comments
1 "Strategy - There must be a balance so an adaptive enterprise does not fall into complete anarchy or randomness."
2 "Organisation - I think it requires both, Perhaps it would depend on the context or environment that the firm was operating within?"
3 "Organisation – contextual."
4 "Organisation - Edge of Chaos."
5 "Process - I think processes should be tightly coupled, which makes it hard for me to see where or when they may be fruitfully less-coupled."
6 "Process - External: loosely coupled Internal: tightly coupled."
7 "SOPI - I see in most cases that adaptive enterprises need elements of 'control and chaos'; there has to be some level of control in place, providing staff with the ability to be flexible and creative in their approaches, to encourage the addictiveness so there is, paradoxically, a simultaneity to holding a 'controlled' state and a 'chaotic' state (same for all; exception being 'process' category as I mentioned - I don't see how to decouple processes being entirely useful even in adaptive enterprises.)"
Table 4.15: Round Two Verbatim Comments on the Adaptive Zone Model

4.3.4.10 Refinement of the Adaptive Zone Model

The statistical validation process indicated that further refinement of the Adaptive Zone Model was not required. This was supported by the high level of consensus attained for each of the 4 components of the model, which applied to both the description and the way the model was depicted. Furthermore, as part of the validation process the experts were given the opportunity to make comments, if they so wished, on any aspect of the model and its description. Given the high level of consensus that was achieved for both the model and description, no changes were made and it was removed from the third round of the Delphi; a full account is provided in Appendix B4.

4.3.4.11 Quantitative Validation of the Adaptive Enterprise Transformation Cycle

The final model of this research assessed through the Delphi was the Adaptive Enterprise Transformation Cycle. This is a dynamic model that depicts the process of transforming an enterprise into an AE. The model is an amalgamation of the key concepts and models generated through the first round of the Delphi and concurrently validated in this second round. Similar to the other four models previously discussed, it was evaluated and validated through the mechanisms of two quantitative questions and a qualitative question. The results of the two quantitative questions are presented in the table below (refer Table 4.16).
Delphi Round 2 Question: Mean Rating / Standard Deviation / Highest Category % / Two Joining Categories %
Adaptive Enterprise Transformation Cycle Applicability: 4.04 / 0.58 / 67.86% / 85.71%
Relationships between model components: 4.00 / 0.61 / 64.29% / 82.14%
Table 4.16: Round Two Statistics of the Adaptive Enterprise Transformation Cycle

The first quantitative question asked if the process depicted is applicable for enabling an enterprise to transform and become adaptive. The mean rating was 4.04, with an SD of 0.58, indicating that it was situated between 'Agree' and 'Strongly Agree'. Nineteen experts chose the 'Agree' option and 5 selected 'Strongly Agree', which represented 86% choosing either of these options. Four experts indicated they were 'Undecided', meaning there was no disagreement with the applicability of the process. Hence, there was a high level of agreement and the thresholds for consensus were well met (refer section 4.3.3.4).

The purpose of the second question was to assess the relationships among the components of the cycle, and the results virtually mirrored those of the previous question. The comparison revealed that the mean rating of 4.00 was similar and the SD of 0.61 was slightly higher. This was due to 1 fewer expert choosing the 'Agree' option and 1 more indicating they were 'Undecided'. Hence, 18 experts chose the 'Agree' option, 5 selected 'Strongly Agree' and 5 indicated 'Undecided', which represented 82% choosing either of the first two options. Again, this represented a high level of agreement and met the thresholds for consensus, resulting in the two quantitative questions being removed from the third round.

4.3.4.12 Qualitative Validation of the Adaptive Enterprise Transformation Cycle

The qualitative question asked for improvement suggestions and yielded 11 comments. Similar to the pattern of comments generated by this question for the other models, 4 of these were considered general observations (comments 3-5 and 8) about enterprise transformation and not specific to the model. Of the 7 model-specific comments shown in Table 4.17, 3 explicitly endorsed the model and the others suggested additional directional indicators. There was only one comment that indicated there could potentially be some interpretation issues.

Comment nos. 1 and 4 confirmed the validity of the underlying concepts that supported the model. Yet comment 4 raised a concern that the expert was not convinced that interweaving the deliberate and emergent is key for an enterprise to convert vision into action. However, this research does not make this claim; rather, the thesis asserts that it is a set of foundational concepts, integrated in a way that enables the translation of vision into action, that helps an enterprise become adaptive.

Comment nos. 2, 6 - 7 and 11 suggested more graphical detail to indicate the interaction effects on each of the components enabled by the relationships, and comment 11 proposed naming those relationships. The relationships and interactions are core elements of the model; they indicate the process flow and potential outcomes. Ideally, they enable a positive transformation cycle as outlined in the description. However, the opposite outcome is also feasible under certain environmental conditions, albeit temporarily. Hence, the principal aspect of the model is the transformation process and, for this reason, it was decided that explicit indicators of 'ideal' positive or negative effects would conceptually limit the scope of the model.
Comment 9 showed that one expert had issues interpreting the model, particularly the Adaptive Zone element. The model does incorporate a number of complex concepts, so it is an endorsement of its parsimonious design and the incremental design process that only one expert had difficulty understanding it. Previously, this expert had indicated an understanding of the Adaptive Zone Model and 'agreed' with the Adaptive Enterprise Transformation Cycle but was 'undecided' about the relationships. These relationships received a high level of agreement, so this comment was dismissed.

No. Experts Comments
1 "It says: "action by interweaving the deliberate and emergent while Disablers hinder an enterprise from doing so. As an enterprise becomes more adaptive"; Is 'action' always 'adaptive'? I've had to think about the cycle a couple of times, but the concepts make sense."
2 Picture does not convey the explanation. May need + sign for enablers and - sign for disablers.
3 Disruption could be required to redirect activity or to accelerate transformational change.
4 Well, the basic idea that being in the adaptive zones enhances enablers of adaptively and suppresses disablers is sound. The bit I'm least convinced about is that interweaving the deliberate and emergent is the key thing that allows an organization can turn vision into action.
5 Characteristics help the organisations to 'shepherd' through 'pastoral activity' so we remain within the adaptive zone or move towards it.
6 Clarification may be required on the following: The arrow from the Adaptive Zone to the Enablers/Inhibitors imply an influence on these enablers/inhibitors - is that intended, how would the influence be asserted? - while it is more likely that the Adaptive Zone influences how the Enterprise deals with the Enablers/Inhibitors. Should there be an arrow from Adaptive Zone straight to Enterprise?
7 At the moment both Enablers and Disablers are viewed as 'equal' in the model. If the idea is that "The adaptive zone allows an enterprise to enhance Enablers and remediate Disablers of enterprise adaptation", then perhaps the Disablers should be represented as a 'smaller circle' in the model.
8 I feel that technological changes will have more effect in creating adaptive enterprises in the future. For an example organizations are creating social business models. So how quickly an organization adapts this kind of a change will decide whether an organization can survive or not.
9 The Adaptive Zone part of the Adaptive Enterprise Transformation Cycle is too complex...I'm unable to think clearly while looking at it
10 I generally agree with the elements and the cyclical connections. It is however not clear how the characteristics steer the enterprise into the adaptive zone. Where is the adaptive zone meant to be? Is it different between organisations? Does being in the adaptive zone help to build enablers and reduce disablers? Maybe this should be reflected in some fashion. Somehow the cycle seems not dynamic enough, e.g. is there a progression? Are there common starting points or end points?
11 Name the relationship between the elements?
Table 4.17: Round Two Verbatim Comments on the Adaptive Enterprise Transformation Cycle

Comment 10 endorsed the model, including its connections, overall. Yet, like the previous comments, it also advocated more directional indicators to enhance the dynamism of the model, along with a number of queries related to the relationships depicted.
The interpretation of the model is supported by the current description and, when the two are combined, the queries raised by the expert are adequately addressed. Thus, there was no benefit in changing either the model or its description.

4.3.4.13 Refinement of the Adaptive Enterprise Transformation Cycle

In summary, the quantitative validation of the model, through high levels of agreement, indicated that its applicability and relationships did not require any changes (refer sections 4.3.4.11 and 4.3.4.12). In addition, analysis of the comments revealed that, in general, the experts agreed with the model logic in terms of the transformation cycle; suggestions for improvement focused on additional directional detail to clarify interpretation rather than on the model logic itself. The Adaptive Enterprise Transformation Cycle attempts to incorporate a number of complex concepts into a parsimonious dynamic model. Describing and designing a model to achieve this will, by its very nature, engender slightly different interpretations based on the individual's perspective. In this case, the validity of the model was measured by the overall agreement of the experts, which was almost undisputed. Hence, no change was made to the model or to the description, which is presented in section 6.6.

Summary of Delphi Round Two

The objectives of the second round were to rate the key themes and associated factors (Enablers, Disablers and Characteristics) of an AE and to evaluate the models that resulted from the first round. A pilot test was carried out before the second round questionnaire was administered, which resulted in refinement of the instrument in terms of descriptions and approachability. The questionnaire comprised eleven sections, the first of which provided general information about the study, including a link to the participant information sheet, the research objectives, instructions and the consent form. The second section required the rating of the main themes and associated factors of the four Key Components of an AE: Strategy, Organisation, Process, and Information. Section three described and illustrated the foundational Vision to Action model, while sections four and five evaluated the Structural Elements Model and the Behavioural Flows Model respectively. In section six the experts were asked to rate the importance of various Enablers collated into five groups, followed by the same rating requirement for Disablers in section seven and Characteristics in section eight. Evaluation of the Adaptive Zone Model was the objective of section nine and, likewise, the Adaptive Enterprise Transformation Cycle was evaluated in section ten. Finally, section eleven collected demographic information. Throughout the questionnaire the expert participants were given the opportunity to comment on any element of the questionnaire, including suggested improvements to the proposed models, so as to capture observations and insights prompted by the second round.

Initially, the panel consisted of 30 experts; however, only 28 completed the second round questionnaire, of whom 14 were academic panellists and 14 were industry panellists. The varied nature of the panel, together with the panellists' wide-ranging occupational experiences, facilitated the collection of diverse opinions. The questionnaire response options for the various AE factors were ordered to indicate the degree of importance attached to each (refer Appendix B4).
The questionnaire response options for evaluating the four AE models, in contrast, were ordered to indicate a level of agreement. The evaluation of the four models (the Structural Elements Model, the Behavioural Flows Model, the Adaptive Zone Model and the Adaptive Enterprise Transformation Cycle), together with the comments, indicated a high level of agreement that each was applicable and theoretically sound. Therefore, the AE models were not included in the third round of the Delphi study. To conclude, 112 Round Two factors and model items were assessed, resulting in 48 factors not reaching a sufficient level of consensus and being included in Round Three for rerating.

Round Three: Refinement of the AE Key Themes and Factors

The aim of the third round of the Delphi was to rerate, in terms of importance, the AE factors that had not reached a sufficient level of consensus in the previous round. To facilitate this rerating, and similar to the other Delphi rounds, an online questionnaire was administered. However, this questionnaire not only featured the factors to be reassessed but also included the Round Two consensus values for each, i.e. the mean rating and SD. The purpose of providing this information to the expert participants was to give them insight into the aggregated opinions of the panel to reflect on. This third round evaluation and rerating process significantly increased the final number of factors that exceeded the consensus target measures and consequently strengthened the results of the overall Delphi study. This section begins with a description of the Round Three questionnaire, followed by the response distributions and an outline of the analysis process and results.

Design of Questionnaire for Round Three

The design of the questionnaire was similar to that for Round Two, with three points of difference. First, it only contained the questions for rerating the importance of factors that did not reach the pre-determined level of consensus in the previous round. Second, there was additional information that explained the rerating procedure and the reason for including the relevant Round Two results. Third, the AE models were excluded since the experts had indicated almost unanimous support for them, both quantitatively and qualitatively. Given the similarity of the design and questions to those of Round Two, the questionnaire was not pilot tested. Nevertheless, it was reviewed by two academics and two practitioners familiar with questionnaire design and/or AE, who confirmed its overall applicability, albeit the rephrasing of a few questions was suggested to improve clarity of meaning. Next, the process for selecting the rerating factors will be explained, followed by a discussion of the method used to deliver the second round feedback to the Round Three respondents. Then the final design will be outlined.

4.4.1.1 Selection of Factors for Rerating

The Round Two analysis procedure, explained in section 4.3.3.4, examined 112 factors to determine the level of consensus reached among the experts. This analysis consisted of a quantitative assessment of the ratings for each factor. In addition, the proposed models were subjected to a qualitative assessment to support the quantitative results. The quantitative assessment used two descriptive statistical measures for each factor: the mean rating, to emphasise the central tendency of the responses, and the SD, to indicate their spread.
In addition, two recommended measures for assessing Delphi consensus (Ulschak, 1983) were applied: the first requires at least 51% of responses to fall within a single rating category of the five-point scale; the second requires at least 80% of responses to fall within two adjoining rating categories. All four of the aforementioned measures were applied to determine the level of consensus for each of the 112 factors, resulting in 48 of them not meeting the minimum requirements. Hence, these 48 factors were included in the third round of the Delphi for rerating (refer Appendix C2).

4.4.1.2 Providing Feedback to the Panellists

A key strength of the Delphi method is its ability to structure and organise group communication through a series of sequential questionnaires characterised by controlled opinion feedback (Linstone & Turoff, 1975, 2002, 2011; Diamond et al., 2014). This controlled feedback is essentially a summary of the results from the previous questionnaire that is communicated to the participants (Powell, 2003). It is used to encourage convergent thinking among expert respondents by presenting them with the aggregated opinion of the panel to contemplate (Meijering & Tobi, 2016) and is considered a useful technique for increasing group consensus (Dalkey et al., 1969). Although controlled feedback is an important feature of the Delphi method, Meijering and Tobi (2016) note that "no evidence based guidelines exist on how to provide feedback" (p. 166). As a result, Delphi studies provide different types of feedback, usually consisting of summary statistics showing the majority opinion and/or the reasoning behind the experts' opinions (Rowe, Wright & McColl, 2005). However, researchers are divided over which type of feedback is the most effective. Some suggest that providing summary statistics alone does not adequately inform the experts and recommend feedback comprising rationales as well (Dalkey et al., 1970; Rowe et al., 1999, 2001; Bolger & Wright, 2011), while Bolger and Wright (2011) argue that providing feedback consisting solely of rationales mitigates the risk of experts being unduly influenced by the majority opinion. More recently, in terms of achieving greater levels of agreement among the experts, no difference was found between those who received rationales only and those who received summary statistics and rationales. Moreover, feedback consisting of too much information can increase the level of expert drop-off (Meijering & Tobi, 2016). In addition, distorted feedback has the potential to mould opinions and lead to conformity rather than collect data (Cyphert & Grant, 1971; Witkin & Altschuld, 1995; Hung et al., 2008). Hence, Delphi researchers need to be "cognizant, exercise caution, and implement the proper safe guards" (Hsu & Sandford, 2007, p. 5) when selecting appropriate types of feedback. Using summary statistics such as the SD, mean and median for analysis, for determining consensus, and for feedback has become common Delphi practice (Boulkedid et al., 2011). Hence, feedback consisting of summary statistics for each question constitutes an objective overview of the group opinion that is quickly, easily and accurately interpreted (Woudenberg, 1991; Gordon, 2003). A minimal sketch of how these summary statistics and consensus checks can be computed is given below.
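To make the consensus rules and feedback statistics concrete, the following Python sketch (illustrative only; the function name, the combined consensus flag, and the use of the sample standard deviation are assumptions rather than details taken from the thesis) computes the mean rating, SD, and the two Ulschak-style checks for a single factor rated on the five-point scale.

    from collections import Counter
    from statistics import mean, stdev

    def consensus_summary(ratings, single_threshold=0.51, adjoining_threshold=0.80):
        # Summarise one factor's five-point ratings (1 = least, 5 = most) and apply
        # the two checks: at least 51% of responses in a single category, and at
        # least 80% of responses in two adjoining categories.
        n = len(ratings)
        counts = Counter(ratings)
        share = {c: counts[c] / n for c in range(1, 6)}
        top_single = max(share.values())
        top_adjoining = max(share[c] + share[c + 1] for c in range(1, 5))
        return {
            "mean": round(mean(ratings), 2),
            "sd": round(stdev(ratings), 2),  # sample SD; which SD the thesis used is an assumption
            "highest category %": round(top_single * 100, 2),
            "two adjoining categories %": round(top_adjoining * 100, 2),
            # One possible way of combining the checks; the thesis does not restate the exact rule.
            "consensus": top_single >= single_threshold and top_adjoining >= adjoining_threshold,
        }

    # 28 responses: 19 'Agree' (4), 5 'Strongly Agree' (5), 4 'Undecided' (3).
    print(consensus_summary([4] * 19 + [5] * 5 + [3] * 4))

Applied to the Round Two response distribution reported for the Transformation Cycle applicability question, the sketch returns a mean of 4.04, an SD of 0.58, a highest-category share of 67.86% and a two-adjoining-category share of 85.71%, which matches the applicability row of Table 4.16.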
Given a) that this is classic Delphi research (Hasson et al., 2000; Van Zolingen & Klaassen, 2003), b) the results of the Round Two analysis, and c) the issue of respondent fatigue, the most fitting type of feedback was deemed to be a combination of the mean rating and SD. These descriptive statistics were used as the opinion feedback (Linstone & Turoff, 1975) for the third round of the Delphi.

4.4.1.3 Final Design for Round Three

The Round Three questionnaire was divided into seven sections, including the introduction and general demographic information sections. These sections contained 12 overarching questions, 10 of which contained a total of 48 factors to be rerated. Repeating the design pattern set in the previous rounds, the first section (Introduction) outlined the objectives of the questionnaire, namely the rerating of the Round Two factors that had not reached the required level of consensus. It also provided a time estimate of 10-15 minutes for completion of the questionnaire, a link to the participant information sheet, and the participant consent declaration. The purpose of the second section was to inform the participants, and it contained important information specific to the current round. It first outlined the rerating procedure and explained that the questions comprised those from Round Two that had not met the consensus criteria. Second, the nature of the feedback (mean rating and SD) was described, together with a request that it be taken into consideration when answering the questions. For instance, the mean rating was defined as a value that 'indicates how the experts rated the factors on average on a five-point scale' and the SD as a value that 'indicates how much the responses fluctuated around the mean'.

The main body of the questionnaire comprised sections three to six. These four sections were dedicated to the re-evaluation of the factors associated with: the Key Components of an AE (section three); Enablers of an AE (section four); Disablers of an AE (section five); and Characteristics of an AE (section six). A brief discussion of each section follows. Section three focused on the re-evaluation of the four Key Components of an AE, i.e. strategy, organisation, process, and information (SOPI), and their associated factors. The explanation of the concept, the figure and the question format were unchanged from the previous round, but the number of factors to be re-evaluated was 16 of the 25 initially evaluated in Round Two; the other 9 had achieved consensus and were removed from Round Three. Section four aimed to re-evaluate the Enablers of an AE, whereas the focus of section five was the Disablers. Again, in both sections the explanation, figure and question format were unchanged from the previous round. Interestingly, just 14 of the initial 31 Enabler factors evaluated in Round Two, representing 45%, were re-evaluated, while 11 of the initial 13 Disabler factors, or 85%, were re-evaluated in Round Three. A plausible reason why so many Disabler factors did not achieve consensus in the previous round is that the wording of the questions and the rating scale together might have been interpreted as two forms of negation, making the questions ambiguous. Section six focused on the re-evaluation of the Characteristics of an AE.
Similar to the other sections, the explanation, figure and question format remained the same as in the previous round, and only 7 of the initial 24 Characteristics factors had to be re-evaluated, which suggested that a general understanding of the acknowledged characteristics of an AE existed among the participants. Section seven concluded the questionnaire and contained two questions that collected demographic information. Finally, a statement thanking the participants for their valuable assistance was also included. The Round Three questionnaire is provided in Appendix C2.

Distribution of Responses

This section outlines the distribution of the responses for the third round and compares them with those of the previous rounds.

4.4.2.1 Response Rates

In line with good Delphi practice (Hasson et al., 2000; Linstone & Turoff, 1975, 2002, 2011; Hsu & Sandford, 2007), only the 28 experts who completed the second round were invited to participate in the third round. An initial email invitation was sent, followed by two email reminders, over the six weeks the third round was open. The final reminder was sent two weeks before the questionnaire closed; at this stage only one participant had yet to complete it, which they did before the round closed. The response rate for the third round was 100%, i.e. all 28 of the invited participants completed the questionnaire (93.3% of the original panel of 30, as shown in Table 4.18). Hence, the composition of the expert panel was unchanged from the second round: 14 academics and 14 industry experts participated.

Panel | Sample Pool | Response Count | Response Rate | % of Total Responses
Academic | 16 | 14 | 87.5% | 50%
Industry | 14 | 14 | 100% | 50%
Total | 30 | 28 | 93.3% | 100%
Table 4.18: Delphi Round Three Response Rates

Consequently, the background statistics and information of the experts, and the organisational headcount, were exactly the same as those in Round Two; they can be viewed in Table 4.7 and Table 4.8 and are discussed in section 4.3.2.1.

Analysis of the Responses

Given that all the proposed AE models had reached the required level of consensus in Round Two, the Round Three responses consisted only of the quantitative ratings for the factors. However, participants had the opportunity to leave general comments related to the various factors being rated and, accordingly, 46 comments were collected. An outline of the analysis process is provided below, followed by a discussion of the determinants for terminating a Delphi study. The analysis process for Round Three was similar to that of Round Two but included an additional decision regarding the termination of the study, as shown in Figure 4.8. First, the response data was downloaded and checked for errors such as duplicate entries or missing data points. Second, logical values were assigned to the rerating options and the responses were ranked accordingly. Third, statistical analysis was applied to determine whether consensus had been reached for each rerated factor, together with an assessment of whether agreement among the experts had increased between rounds; a brief illustrative sketch of this between-round check follows the figure.

Figure 4.8: Outline of the Analysis Process for Delphi Round Three
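The between-round comparison in the final step can be illustrated with a short Python sketch. The ratings below are invented for illustration (they are not the panel's actual responses), and reading a fall in the standard deviation as increased agreement is one simple operationalisation of that step rather than the exact rule used in the thesis.

    from statistics import mean, stdev

    # Hypothetical importance ratings (1-5) for one rerated factor.
    round2 = [5] * 8 + [4] * 9 + [3] * 7 + [2] * 4    # spread responses, no consensus
    round3 = [5] * 7 + [4] * 16 + [3] * 4 + [2] * 1   # responses cluster after feedback

    for label, ratings in (("Round 2", round2), ("Round 3", round3)):
        # Share of responses falling within the two most popular adjoining categories.
        adjoining = max(ratings.count(c) + ratings.count(c + 1) for c in range(1, 5))
        print(label,
              "mean:", round(mean(ratings), 2),
              "SD:", round(stdev(ratings), 2),
              "two adjoining categories:", f"{adjoining / len(ratings):.0%}")

    # A lower SD in Round 3 than in Round 2 is read here as increased agreement.
    print("Agreement increased:", stdev(round3) < stdev(round2))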
The Round Three analyses resulted in 42 of the 48 rerated factors, or 87.5%, achieving a sufficient level of consensus. Of the 6 factors that did not achieve consensus, shown in Table 4.19, 1 was from the Strategy key component, 3 were Enablers, 1 was a Disabler and 1 was a Characteristic. Factors 1-3 confirmed the validity of the underlying premise of this research, namely that the adaptive approach is neither a purely deliberate nor a purely emergent approach; it lies between the two ends of the deliberate/emergent continuum, and the optimal position is strongly influenced by context. Factors 1 and 2 could be considered mechanisms that reinforce sub-optimal deliberate AE behaviours, whereas factor 3 encourages sub-optimal emergent behaviour. This conclusion is also supported by the experts' comments linked to these factors (Table 4.19). Factor 4 (Enabler) focused on the alignment of strategy, technology, and BP to effect change. This concept is well supported in the literature (Tallon, 2008) and seemingly by the experts. However, it appears the experts interpreted this factor in a number of ways, given their comments and the lack of a sufficient level of consensus. Each comment is valid and suggests that some experts may have interpreted the factor as being mutually exclusive in terms of either incremental change or radical change. Alternatively, the experts may have wanted to emphasise the point that the alignment of technology can bring about both incremental and sudden change, which was validated as a Characteristic of an AE (refer Appendix C3).

1. Strategy - Factor: Has ONLY a proactive approach to strategy development that anticipates environmental changes and potentially leads the market in innovation and practices. Expert comments: "What I would rather say it that it is quite important that they DON'T have ONLY a proactive or reactive approach."; "E.g. it may be important to have a reactive strategy, but disastrous to have ONLY a reactive approach."; "Any organisation must be able to lead changes (proactive) and react to feedback (reactive)"
2. Enabler - Factor: Strategic objectives are enabled by financial and managerial sign offs. Expert comment: "'Financial and managerial sign offs' may well help ensure strategy objectives are met, but on the face of it, they are contrary to the flexibility and autonomy that would be most conducive to adaptively."
3. Enabler - Factor: Encouragement for risk-taking behaviour in new ventures. Expert comment: "To take informed risks is the opportunity for greatness."
4. Enabler - Factor: Bring about incremental change through realignment of strategy, resources, processes, and technologies. Expert comments: "It is critical that investment in technology is prioritised in line with the organisations strategy. Digital technologies… influence the pace of change."; "An AE shouldn't need incremental change, but should be able to effect (and handle) more radical change."; "Enterprise Architecture link and align Business (architecture) and IT to strategy i.e. EA is an enabler. EA is about what and how to change."
5. Disabler - Factor: Lack of influence over external environment factors. (No expert comments were offered.)
6. Characteristic - Factor: Possesses capabilities and core competencies that are unique. Expert comment: "'Uniqueness' can be overstated. e.g. why should you do things differently? There may be good reasons not to deviate from business norms. Consider what should make you different (it should generally be technology, craftsmanship, or people or a sensible combination)."
Table 4.19: Delphi Round Three Rerated Factors that Did Not Achieve Consensus

Factor 5 (Disabler) suggested that an AE's lack of influence over its external environment inhibits its ability to adapt.
This suggestion is based on the notion that an AE is a 'market follower' rather than a 'market leader', and it was not accepted by the experts since it did not achieve a sufficient level of consensus. This result is consistent with the result for Factor 1 (Table 4.19). Given that the experts did not offer comments to support their ratings, a probable explanation is that an AE is both a 'market leader' and a 'market follower', which was supported by the overall Delphi findings (refer Appendix C3). Factor 6 (Characteristic) proposed that an AE should possess unique capabilities and core competencies. Again, the experts did not reach the required level of consensus with regard to this factor. Based on the experts' comment and the current literature (Haeckel, 1995, 2004, 2016; Gothelf & Seiden, 2017), it could be concluded that an AE's competencies may not be unique; rather, they are the foundation for the acquisition of capabilities through participation in business networks (refer Appendix C3). Lastly, the results of the second and third rounds were combined to evaluate the overall level of agreement among the experts and to determine whether the Delphi should be terminated, taking into account the recommendations discussed in section 4.4.3.1. The findings of the Delphi study were confirmed and the final results are presented in sections 6.1 - 6.7.

4.4.3.1 Termination of the Delphi Study

Theoretically, there can be any number of Delphi rounds before a study is terminated. For most Delphi studies, termination occurs when consensus has been achieved (Diamond et al., 2014), since consensus is integral to the method (Hsu & Sandford, 2007) and is consequently used to determine the specific number of rounds. Early Delphi examples tended to comprise at least four rounds but, more recently, two or three rounds are considered acceptable (Rowe & Wright, 2001; Diamond et al., 2014). There is no optimal number of rounds before termination; however, Rowe and Wright (2001) suggest that three structured rounds are sufficient because up to this point the accuracy of the experts' estimates increases, and it decreases thereafter. At best, there is a trade-off between termination after a few rounds, which could jeopardise the unearthing of the most important issues, and a lengthy study that induces survey fatigue (Schmidt, 1997; Day & Bobeva, 2005). When a Delphi study is prolonged, panellists may shift their opinions to align with the feedback provided, resulting in agreement driven by growing fatigue (Mitchell, 1991; Rowe & Wright, 2001). Alternatively, a more practical reason, such as lack of progress, can influence the termination of the Delphi; for instance, the marginal improvement in the level of agreement among the experts between subsequent rounds may be minimal. The trade-off between the viability of conducting another round, in terms of the indulgence of panellists, available resources, and timelines, versus the potential further gains must be considered (Schmidt, 1997). Consequently, a three round Delphi has become a popular research technique (Landeta, 2006; Diamond et al., 2014) and is generally recommended in the literature (Mitchell, 1991; Linstone & Turoff, 2002, 2011).
Moreover, in the IS domain the three round Delphi has been used successfully in many research contexts (Okoli & Pawlowski, 2004; Skulmoski et al., 2007). As a research tool it appeals to both researchers and participants because it is an efficient collaborative mechanism for the generation and percolation of important insights (Okoli & Pawlowski, 2004). Given the aforementioned advice and the results of the third round analysis, which confirmed that consensus had been achieved for 87.5% of the rerated factors, the AE Delphi study was terminated after Round Three.

Summary of Round Three

The third round of the Delphi aimed to rerate the factors that did not reach consensus in Round Two. The structure of the questionnaire was similar to Round Two, albeit the models were not included as all the AE models had reached a sufficient level of consensus in the previous round. In addition, feedback in the form of the Round Two results was included to help the experts move towards agreement, together with instructions that explained the reason for this feature. The Round Three questionnaire can be reviewed in Appendix C2. The response rate for Round Three was 100%. The questionnaire was completed by all 28 of the experts who had also participated in Round Two, representing an equal distribution across the two primary panels, academic and industry. These experts re-evaluated and rerated 48 factors, and the subsequent statistical analysis revealed that a sufficient level of consensus was reached for 42 of them. This indicated that agreement had been reached by the panellists on 87.5% of the rerated factors; therefore the Delphi study could be confidently terminated.

Summary of the Delphi Study

The Delphi study was designed as an exploratory and interpretive method of inquiry to fulfil the requirements of the exploratory phase of the research. The objective of the Delphi was to identify and evaluate the factors, structures, behaviours, and processes of an AE using data gathered from a panel of experts. The first Delphi round used four open-ended questions, and an invitation to contribute observations, to gather insights from the expert panel (refer Appendix A5). Thematic analysis techniques were used to condense the raw data and form nuggets without the loss of information. These nuggets were the basis of a bottom-up synthesis that resulted in the development of categories, themes, overarching themes, structures, propositions and, ultimately, incipient models of an AE (refer Figure 4.3). The second round provided the experts with the results of the Round One analysis, which consisted of 112 AE factors (derived from the nuggets), arranged according to meaningful themes, and four proposed models of an AE (refer Appendix B3). The aim of the second round was to enable the rating of the AE factors according to importance, as well as the validation and refinement of the models. The rating process resulted in 48 factors not reaching a sufficient level of consensus and being included in the next round for rerating. The experts reached the consensus that all the proposed models were valid; hence, the models were not included in the next round. The purpose of the third, and final, round was to re-evaluate and rerate the 48 factors. The experts were provided with the Round Two consensus values for each of the factors to be rerated as feedback information (refer Appendix C2). Of the 48 factors that were rerated, 42, or 87.5%, reached a sufficient level of consensus and the Delphi was terminated.
The Delphi study fulfilled the research requirements outlined in section 2.7.2 by establishing a coherent understanding of an AE through the identification of important factors, structures, behaviours and processes, as well as by facilitating the creation of several relevant static and dynamic AE models. It also laid the foundations for the following explanatory phase of the research, specifically a survey to refine, validate and explain the key concepts and models.

5 Explanatory Survey

This chapter describes the explanatory phase of the multi-methodological approach that was adopted, which completes the research sequence of exploration and explanation (Kaplan & Duchon, 1988; Creswell & Clarke, 2007). Whereas the preceding chapter focused on the exploratory part of the research, the Delphi, this chapter fulfils the explanatory requirements of the study with a discussion of the survey that was conducted. The survey was a continuation of the research effort and completes the overall research design requirements through further refinement and validation of the AE concepts and models that emerged from the Delphi study. Guided by the Delphi findings, the Behavioural Flows Model and the Adaptive Zone Model were selected for further refinement and validation. These two models were selected because they are foundational to the research: they represent the key components, structures, and behaviours of an AE and, as such, are the primary elements of the Adaptive Enterprise Transformation Cycle. Therefore, further evaluation and validation of the two selected models was deemed critical, and these particular models set the objectives for the survey. To achieve these objectives, the survey was designed around factor analysis techniques, supported by appropriate tools. The structure of the survey is outlined in Figure 5.1.

The survey process began with the formulation of hypotheses and the construction of hypothesised causal relationship models that illustrated the predicted relationships among the AE constructs being tested. The hypothesis models underpinned the survey design by clarifying and determining the survey objectives and data requirements. The collected data would confirm or contradict the hypothesised relationships so that an explanation could then be provided. Prior to being administered, the survey instrument was pilot tested and then refined and finalised to ensure its applicability. The survey was administered online to a predefined number of employees, in a variety of positions, in business organisations in the United States of America (USA). Analysis of the collected data involved several statistical techniques: descriptive statistics to describe the data (refer sections 5.4.4 and 5.4.5); EFA (refer section 5.5.2) and CFA (refer section 5.6) to analyse factors; and Structural Equation Modelling (SEM) (refer sections 5.7 and 5.8). The SEM latent variable regression standardised estimates were used to test the hypotheses (refer section 5.9).

Figure 5.1: Structure of the Survey Chapter

Conceptual Foundations

Section 5.1 focuses on the formulation of the hypotheses that were tested using the survey instrument. The hypotheses were proposed to facilitate further investigation, as well as validation, of the AE concepts and models.
These concepts and models were the result of the exploratory phase of the research, which generated several propositions about the key components, structures, and behaviours of an AE and their relationships to each other. As previously mentioned, the scope of the survey limited the number of concepts and models that could be validated. Hence, only two models were selected, the Behavioural Flows Model and the Adaptive Zone Model, shown in sections 6.4 and 6.5, as these particular models were considered to be the foundation of the overall study in the attempt to explain the AE phenomenon. Not only do these two models portray a critical part of the Adaptive Enterprise Transformation Cycle (refer Figure 6.7), but they also conceptualise the key paradigms of the research and illustrate the fundamental AE concepts that emerged from the Delphi study: the key components of an AE, the behavioural flows of an AE, and the adaptive approach that was found to be the optimum balance of deliberate and emergent.

In the survey, the order in which the models were validated was the reverse of the order in which they were presented in the Delphi study. Specifically, the Adaptive Zone Model preceded the Behavioural Flows Model because the key components of an AE are a primary element of the Adaptive Zone Model and are, therefore, fundamental to the research and to the other AE models. By presenting it first, the Adaptive Zone Model allowed for the simultaneous validation of both the key components and the adaptive zone concept. The key components are also reflected in the Behavioural Flows Model, so the initial testing of this particular concept was important. To test and validate each of the AE models, theoretical propositions were devised about the key components and the behavioural flows. The propositions allowed causal relationship models, which define constructs and the significant relationships among them, to be developed, and hypotheses to be formulated that were then subjected to testing. The hypotheses associated with the Adaptive Zone and Behavioural Flows Models are presented in Table 5.1.

Hypothesis Models and Associated Hypotheses

The hypothesis models shown in Figure 5.2 and Figure 5.3 are a representation of the theoretical propositions and include the constructs and relationships from which the hypotheses were formulated. For the sake of simplicity, these causal relationship hypothesis models will be referred to as the 'hypothesis models'. Figure 5.2 illustrates the two Hypothesis Models of the Adaptive Zone. The Model 1 hypotheses theorise that there is a positive relationship between each of the constructs Balanced Strategy, Flexible Organisation, Loosely Coupled Process and Dynamic Information, and an Adaptive Enterprise (H1 - H4), and that, together, these constructs form an Interwoven Structure. The Model 2 hypothesis theorises that there is a positive relationship between an Interwoven Structure and an Adaptive Enterprise (H5). The theoretical propositions suggest that when the key components of an AE are optimised (in the optimal region between deliberate and emergent) they each enable an enterprise to be adaptive. They also suggest that when the optimised key components are interwoven the enterprise is adaptive.

Figure 5.2: Hypothesis Models of the Adaptive Zone

The two Hypothesis Models of the Behavioural Flows are shown in Figure 5.3.
The Model 1 hypotheses theorise that there is a positive relationship between each of the constructs Blended Information Flows, Blended Learning Flows and Blended Control Flows, and an Adaptive Enterprise (H6 - H8), and that, together, these constructs form Interwoven Behaviour. The Model 2 hypothesis theorises that there is a positive relationship between Interwoven Behaviour and an Adaptive Enterprise (H9). The theoretical propositions suggest that when the behavioural flows are blended they each enable an enterprise to adapt. They also suggest that when the blended behavioural flows are interwoven the enterprise is adaptive.

Figure 5.3: Hypothesis Models of the Behavioural Flows

Formulated Hypotheses

The hypothesis models generated nine hypotheses, which were formulated and then tested using the survey instrument. Five of the hypotheses, H1 - H5, were generated from the Hypothesis Models of the Adaptive Zone (refer Figure 5.2) and the remaining four, H6 - H9, were generated from the Hypothesis Models of the Behavioural Flows (refer Figure 5.3). Formulating a hypothesis also involves phrasing a Null hypothesis, as criteria have to be established for the hypothesis to be tested (Moskowitz & Wright, 1985); it is the Null hypothesis that specifies that the predicted effects do not exist. Based on the findings from the Delphi study, it was proposed that for each of the four key components (SOPI) there is an optimal region on the deliberate-emergent continuum that enables an enterprise to be adaptive. This optimal region is akin to a zone that represents the appropriate deliberate-emergent approach (the adaptive approach) for that particular key component, given the enterprise context. To test these theoretical relationships between the four components and their optimal region on the deliberate-emergent continuum, hypotheses H1 - H4 were formulated. It was also proposed that an additional relationship exists: a positive correlation between an interwoven structure comprised of the four key elements and an enterprise's ability to be adaptive. This proposition is reflected in H5. In addition, and based on the Delphi findings, it was proposed that three behavioural flows, when blended in terms of deliberate and emergent, enable an enterprise to be adaptive. Hypotheses H6 - H8 were formulated to test these relationships. Finally, H9 was formulated to test whether there is a positive relationship between the interwoven blended behavioural flows and the adaptive ability of an enterprise. Table 5.1 presents the nine hypotheses and the associated Null hypotheses that were tested using the empirical data collected by the survey instrument.

Survey Design

Although good survey design is challenging, applying a systematic approach to the design process helps mitigate many of the challenges. The overriding objective of good survey design is to develop a survey instrument that is methodologically sound and thereby provides the appropriate empirical data necessary to test the proposed hypotheses. The first step to achieving this is to identify the research objectives, because it is these objectives that determine the questions that need to be asked (Brace, 2008). Developing a methodologically sound survey involves careful planning, informed design, and appropriate administration (Fowler, 2014).
No. | Hypothesis | Null Hypothesis
H1 | A balanced strategy enables an enterprise to be adaptive | A balanced strategy prevents an enterprise from being adaptive
H2 | A flexible organisation enables an enterprise to be adaptive | A flexible organisation prevents an enterprise from being adaptive
H3 | Loosely coupled processes enable an enterprise to be adaptive | Loosely coupled processes prevent an enterprise from being adaptive
H4 | Dynamic information enables an enterprise to be adaptive | Dynamic information prevents an enterprise from being adaptive
H5 | An interwoven structure enables an enterprise to be adaptive | An interwoven structure prevents an enterprise from being adaptive
H6 | Blended information flows enable an enterprise to be adaptive | Blended information flows prevent an enterprise from being adaptive
H7 | Blended learning flows enable an enterprise to be adaptive | Blended learning flows prevent an enterprise from being adaptive
H8 | Blended control flows enable an enterprise to be adaptive | Blended control flows prevent an enterprise from being adaptive
H9 | Interwoven behaviour enables an enterprise to be adaptive | Interwoven behaviour prevents an enterprise from being adaptive
Table 5.1: AE Hypotheses Tested in the Survey

This section explains the objectives of the survey, describes recommended design features for surveys conducted in a business context, and then covers the planning, design and administration of the AE survey. Next, the development of the survey is discussed and reasons are given for making particular design choices. Following this discussion, the pilot test process is described, along with the design changes that were made to the final survey based on feedback from the testers. To conclude this section, the final survey design is presented.

Survey Objectives

The broader objective of the survey was to further validate the research artefacts that resulted from the findings of the Delphi study, namely the AE concepts, models, and hypotheses, and by doing so provide empirically tested answers to the overall research questions (section 3.1). There are, however, a number of ways to answer these questions in depth and, as Farrugia et al. (2010) suggest, it can take more than a single study to understand the wider subject area. In regard to this research, the Delphi study generated several research artefacts and it is acknowledged that it is beyond the scope of this study to validate all of them. Therefore, this survey forms the second part of a single study and its objective is to evaluate the attributes of an AE through the validation of two AE models.

Design Features

Most surveys aim for a 'comparative' and 'representative' view of a particular population, so tailoring a survey for a particular population is an important design principle (Gillham, 2008). Design decisions are influenced by the challenges associated with the target audience (Dillman, 2007; Fowler, 2014) and, in the context of this research, the online survey was designed for a business population and targeted employees at different functional levels of business organisations. Hence, six design principles advocated by Dillman (2007) for self-administered business surveys were applied. Each of the design principles is outlined in the following discussion, with an explanation of how it influenced the design of the survey about the development of an AE.

Principle One: Identify the most appropriate respondent for a business survey and develop multiple ways of contacting them.
The objective of the survey was to evaluate the structures and behaviours of an AE and gain insight into the AE phenomenon. Hence, two criteria were set for participation in the survey to ensure respondents were part of the targeted sample group. The first criterion was that the respondent had to be working in a business organisation. The second criterion was that the organisation had to be located in the USA or be a subsidiary of a USA based company. To ensure that a 'comparative and representative' sample reflective of the population was obtained, no limit was set on either the size of the organisation or the type of industry it operates in. Dillman (2007) recommends using the internet to administer surveys where data will be collected in only one country. Therefore, the survey was made available online so that it was easily accessible to potential respondents at any time.

Principle Two: Develop respondent-friendly business surveys.

According to Dillman (2007), business surveys often have serious design flaws as "they are made as short and precise as possible, rather than as queries to be read, fully comprehended, and thoughtfully answered" (p. 342). Also, in the effort to minimise the number of pages, questions are frequently crowded, which can make navigating the survey less clear. These issues were avoided in the AE survey through careful design choices. First, a set of definitions was provided to clarify the terms used in the survey and to assist in the comprehension of the questions. Second, diagrams and short descriptions were used to illustrate the concepts rather than lengthy explanations. Third, the survey was divided into sections, each relating to an overarching theme within which the questions were grouped, which helped the respondents' thought processes. Fourth, there were well defined breaks and spacing between each question and its answer options to allow for clear navigation paths. Finally, the sentences were concise, direct, and simply phrased to avoid ambiguity and to keep the survey short.

Principle Three: Provide instruction within the survey rather than in a separate instruction booklet.

Instructions are most effective when they are provided at the point at which they are required (Dillman, 2007). Clear instructions were provided at the beginning of each section and, where considered necessary, they were also integrated within groups of questions. The instructions were concise and explained how to approach the questions. They often defined the scope of the question to help respondents be clear about what was required. For example, an instruction might direct respondents to answer the questions from the perspective of their particular role in the organisation.

Principle Four: Conduct cognitive interviews to help tailor the survey to people's ability to respond and to gather intelligence information for targeting its delivery and retrieval.

Cognitive interviews are needed to help mould, refine and test both the questions and the layout to ensure that the respondent can understand what is required (Dillman, 2007). To fulfil these requirements, a series of pilot tests was conducted with survey design specialists as well as potential respondents. First, the initial design was reviewed by survey specialists and their feedback was incorporated into a revised version of the survey.
The revised version was then pilot tested twice more using a selected group of respondents who had similar characteristics to those identified as potential survey participants. After each pilot test, one-on-one interviews were conducted with the pilot testers to discuss their feedback. This feedback, if considered appropriate, was then used to revise the survey in preparation for the next round of testing.

Principle Five: Target communications to gate keepers where appropriate.

Depending on the context of the business survey, direct access to potential respondents might not be possible or practical, so indirect access can be facilitated by a third party, also known as a gate keeper, although this is not necessary with self-administered, online surveys. This applied to the AE survey, as it was both self-administered and online, with open access to all potential respondents. Respondents were instead filtered through the use of screening questions at the beginning of the survey, which ensured that those who did participate had the required respondent characteristics (Tourangeau, Conrad & Couper, 2013; Schoenherr, Ellram & Tate, 2015).

Principle Six: Consider tailoring of correspondence and the survey to subgroups of a population.

Generally, tailoring the invitation and survey for different groups within a sample can improve the response rate. However, for large-scale surveys this is almost impossible to do and can result in issues of comparability when questions are even slightly tailored for particular groups within the same sample frame (Dillman, 2007). It was decided that tailoring was neither practical nor necessary, as the focus of the survey was not on individual groups but rather on business organisations in general from one country. Also, the survey design incorporated all the aforementioned principles to ensure that a 'comparative and representative' view of the identified population would be reflected in the results (Gillham, 2008).

Survey Question Design

Notwithstanding the recommendations of Dillman (2007), Fowler (2014) suggests that it is the quality of the question design that has a significant influence on the overall quality of the survey results, as a question is a measure that has "… a predictive relationship to facts or subjective states that are of interest" (p. 75). This means that the result of a well-designed questionnaire is answers that are good measures, as one is the basis for the other. This is highlighted by Bradburn et al. (2004), who conclude that the design of survey questions can be a major source of error in survey estimates. To mitigate these issues and ensure good question design, Iarossi (2006) suggests two rules that should be considered when designing survey questions: 'relevancy' and 'accuracy'. To design a question so that it is relevant, the designer needs to clearly understand the question's objective and to be familiar with the type of information it is required to elicit. To design a question for accuracy, the wording, style, type, and sequence of the questions should trigger a correct response. These rules are supported by Fowler and Cosenza (2009), whose design advice is to use questions that are consistently understood and that provide a suitable way for the respondent to say what they have to say.
Furthermore, Fowler (2014), Iarossi (2006) and Gillham (2008) concur that, although the ultimate aim of good survey and question design is to yield meaningful and accurate answers, it is just as important to motivate the respondents so that they are willing to participate and engage with the questions. In addition to the aforementioned requirements, other question design considerations were taken into account: the use of brief and relatively simple sentences (Gillham, 2008); the use of terms that have a common meaning (Fowler, 1995); questions that measure only one thing (Gillham, 2008); and the provision of frames of reference to avoid misinterpretation and ambiguity (Converse & Presser, 1986).

During development it was important to consider the broad context of the overall AE survey, which was the concept of a business enterprise being adaptive. This context presented advantages and disadvantages. For example, an advantage was that respondents would have a general, objective understanding of what a business organisation is, but a disadvantage was that the notion of an organisation being adaptive is a subjective measure influenced by the organisation's operating environment. The survey was targeted at people currently working for organisations in the USA, so it was important that the language used in the questions was tailored to fit the respondents' frame of reference. For instance, words, expressions and terms that might have a different meaning, or be meaningless, in the USA context were avoided. Generally, simple language using commonly understood words was used but, when required, USA specific terms were applied. Since the survey was based on the results of the Delphi study, words and terms that had caused confusion, or had the potential to do so, e.g. academic jargon, were either rephrased or avoided.

The length of the survey and the sequence of the questions were given special attention, as a survey should be as short as possible and not exceed six pages (Gillham, 2008). In terms of the question sequence, the navigation paths were easy to follow and helped to maintain the respondents' interest until the end (Iarossi, 2006). This was necessary because, as previously mentioned, the quality of the survey data is positively correlated with respondent engagement (Fowler, 2014; Gillham, 2008). Furthermore, the layout of the survey supported respondent understanding of the questions and thus made it interesting and relatively easy to complete (Couper, Traugott & Lamias, 2001). The first page contained a set of simple definitions for the terms used in the survey. Next, demographic details were collected, followed by a series of grouped questions. As respondents progressed through the survey, the questions became more specific in regard to their particular role in the organisation. The questions also became more reflective because they asked about less explicit concepts.

In summary, the survey design process required choices to be made in order to design succinct and reasonably simple questions that would motivate respondents to participate and maintain their interest throughout the questionnaire. It was important to design a survey instrument that would yield accurate data able to give insight into an AE. To mitigate the risk arising from the choices and trade-offs that were made, the survey was subjected to pilot testing.
There were two pilot tests with a group of testers who had similar attributes to the final target group. Evaluating the survey through pilot testing was central to the design process (Gillham, 2008; Presser et al., 2004; Dillman, 2007) and identified areas for improvement.

Question Content

Equally important to the question design considerations is the content of the questions. The questions need to reflect the topics being asked about (Gillham, 2008) and, for this survey, the questions had to generate answers that could be transposed into data to evaluate and validate the AE key components, structures, and behaviours through hypothesis testing. Nine hypotheses (refer Table 5.1) were derived from the two sets of hypothesis models (refer Figure 5.2 and Figure 5.3). The Hypothesis Models of the Adaptive Zone were conceptualised as six constructs, Balanced Strategy, Flexible Organisation, Loosely Coupled Process, Dynamic Information, Interwoven Structure, and Adaptive Enterprise, as well as the hypothesised relationships among them. The Hypothesis Models of the Behavioural Flows were conceptualised as five constructs, Blended Information Flows, Blended Learning Flows, Blended Control Flows, Interwoven Behaviour, and Adaptive Enterprise, as well as the hypothesised relationships among them. Given the complex nature of the concepts that each of these constructs represents, they were categorised as multi-dimensional constructs. A multi-dimensional construct can be described as "holistic representations of complex phenomena" (Edwards, 2001, p. 145) and can be utilised to represent a number of distinct dimensions as a single theoretical concept. In recent times, the use of multi-dimensional constructs to relay complex ideas in IS research has become more prevalent, and several papers have been published that use multi-dimensional constructs for empirical investigations. However, there is a lack of consistency in the way these complex constructs have been conceptualised and operationalised in applied settings (Wright et al., 2012). Furthermore, the theoretical propositions of this research are not only multi-dimensional but also multi-disciplinary in nature, theorising interwoven concepts from the three disciplines of Management, OM, and IS.

Multi-dimensional Measurement Scales

To enable the testing of the hypotheses using factor analysis techniques, the ten constructs had to be conceptualised in terms of their dimensions so that they could be operationalised (Edwards, 2001; Wright et al., 2012). To facilitate this, established measurement scales would need to be applied but, after an extensive review of the literature, it appears that suitable scales for operationalising the model constructs from an integrated and interwoven perspective have yet to be developed. A search of the literature identified several scales that have been used to measure particular dimensions of each of the constructs. For example, measurement scales have been developed in the management discipline to test certain dimensions of strategy and organisational behaviour. Similarly, the OM literature revealed that there are scales to measure certain dimensions of BP, and the same applies to the discipline of IS. However, in the IS literature there is scant agreement about the appropriate way to operationalise higher-order constructs (Wright et al., 2012).
Consequently, there is a paucity of research that proposes tested scales to measure the constructs of strategy, organisation, process, and information from an integrated, multidimensional perspective (Wright et al., 2012). In addition, there is a paucity of measurement instruments to operationalise and test the adaptive concept as it is defined in this research. The literature reveals that there are measurement scales that can be used to test certain dimensions of a deliberate approach but only a few to test an emergent approach, with even fewer to test the adaptive approach. To date, there is a lack of empirical research that investigates interaction among interwoven key components of an AE and the interwoven behavioural flows of an AE, using the deliberateemergent approach to enable an enterprise to be adaptive. Also, there is no evidence that these particular multi-dimensional, integrated constructs, as illustrated in Figure 5.4, have been operationalised in prior research. The added aspect of a multi-disciplinary perspective, with integrated theories and concepts from several disciplines, offers a unique view of an AE. Hence this research, including the concepts, models and hypotheses being proposed and validated, is theory in the early stages of development. Explanatory Survey 146 Due to the absence of available, tested scales for measuring the constructs, methods to measure the underlying constructs have to be created. Rather than just following a particular procedure, it is important to ensure that the dimensions of each construct are well represented in the initial item/question pool. The objective is to cover content relevant to the target construct and tangential to the core construct (Clark and Watson, 1995) because weak questions can be dropped but insufficient coverage could result in not detecting content and the underrepresentation of meaningful dimensions. However, the “measures should be embedded in a theoretical framework” and are subject to the “theoretical perspective of the developer” (p. 312). Figure 5.4: The Multi-Dimensional Constructs of the Survey Research It is also important to write ‘good’ questions that use plain language and avoid affect-laden terms and ambiguity (Clark & Watson, 1995; Hinkin, Tracey & Enz, 1997). Following the initial scale-development advice of Clark and Watson (1995), and to facilitate the creation of good questions that are broadly representative of the constructs being measured, a two stage procedure was followed. First, suitable factors that had been identified in the Delphi study were used. Second, relevant theory and concepts from the literature were considered and appropriate factors selected. In both instances, the selection was based on the particular factors Explanatory Survey 147 being able to measure both the constructs and their relationships in the hypothesis models (refer Figure 5.5 and Figure 5.6). To measure the six constructs of the two Hypothesis Models of the Adaptive Zone, including the AE construct, twenty seven questions were created. Twenty three of the questions were sourced from the factors identified in the Delphi study with the remaining four questions being sourced from the literature (Tallon, 2008). Figure 5.5: Hypothesis Models of the Adaptive Zone and the Construct Measures To measure the four constructs of the Hypothesis Models of the Behavioural Flows, twenty questions were created (refer Figure 5.6). 
Six of these questions were sourced from factors identified in the Delphi study, with the remaining fourteen questions generated from two relevant frameworks in the seminal literature i.e. the Adaptive Loop (Haeckel, 1995, 2016) and Components of Action Time (Hackathorn, 2004). Both these frameworks accurately reflect the underlying dimensions of the Behavioural Flows constructs and so informed the questions to measure the constructs of the model. Haeckel’s (1995, 2016) Adaptive Loop conceptualises the adaptive paradigm of sense, interpret, and respond, while Hackathorn’s (2004) framework is based on an adaptive system that is intelligent, highly responsive and continually redesigned. Overall, forty seven questions were developed for the survey to collect data that could be used to measure the ten constructs of the hypothesis models. Each of the questions was created to assess the constructs under examination (refer Appendix D3). Explanatory Survey 148 Figure 5.6: Hypothesis Model of the Behavioural Flows and the Construct Measures Question Scales Likert scales are the most commonly used rating method employed for survey instruments (Allen & Seaman, 2007). Typically, respondents are asked to rank items based on their attitude or opinion by using a measurement response category. There are numerous ways to build a Likert scale but it should include at least five choice points (Gliem & Gliem, 2003; Allen & Seaman 2007) although they can range from a three-point scale to a twenty-point scale. The most commonly used in business research are a five or seven-point scale that usually include an equal number of ’favourable and unfavourable’ or ’least to most’ response choices to a statement or question about the subject being examined (Gliem & Gliem, 2003). In this survey, respondents were asked to rank statements on a seven-point Likert scale with the ‘least to most’ response choices being phrased in three different ways to ensure the survey statements were adequately assessed. Respondents were asked to rank forty-seven statements. Twenty nine of the statements offered a choice of 7 responses ranging from Strongly Disagree to Strongly Agree. Similarly, six statements offered a Very Long Time and Very Short Time scale. The seven-point scales for the remaining twelve statements were asked in two different ways to ensure that they accurately reflected the dimensions of the constructs to be measured. The first way was through the use of a continuum with 7 choices that related to the subject being ranked and ranged from unfavourable to favourable e.g. “The organisation is: Uncollaborative and Collaborative”. The second way was through the use of a mid-point option with the use of the Explanatory Survey 149 word “hybrid”, which represented a balance between two opposing points on the seven-point continuum. This mid-point option was not a ‘no opinion’ choice but rather supported the deliberate-emergent concept that is an important dimension of each of the hypothesis model constructs, and fundamental to the overall research. Because it was assumed that respondents would have an opinion about the organisation they worked for, non-response options were not included in the survey. This helped when analysing the collected rating data because it meant that all the response values were meaningful after the ratings had been converted to numeric values. 
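To make the conversion from rating labels to numeric values concrete, the sketch below recodes a single illustrative seven-point item in R. The data frame 'responses', the item name 'q_example' and the intermediate scale labels (including the 'Hybrid' mid-point used for some items) are assumptions for illustration only, not the actual survey variables or wording.

    # Hypothetical seven-point response labels, ordered from least to most favourable.
    # The intermediate labels and the 'Hybrid' mid-point are assumed for illustration.
    likert_levels <- c("Strongly Disagree", "Disagree", "Somewhat Disagree", "Hybrid",
                       "Somewhat Agree", "Agree", "Strongly Agree")

    # Illustrative responses to one item (q_example is not an actual survey question)
    responses <- data.frame(q_example = c("Agree", "Hybrid", "Strongly Disagree",
                                          "Somewhat Agree", "Strongly Agree"))

    # Convert the labels to an ordered factor and then to the numeric codes 1-7
    responses$q_example_num <- as.integer(factor(responses$q_example,
                                                 levels = likert_levels,
                                                 ordered = TRUE))
    responses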
Pilot Testing

Adhering to the survey design principles advocated by Fowler (2014), Dillman (2007), and Gillham (2008), two pilot tests were conducted to ensure the survey design was fit for purpose. The first pilot test was conducted with survey experts to test the overall design, the phrasing of the questions, and the layout to ensure that the survey met its objectives. After incorporating the experts' feedback into the survey, a second pilot test was conducted with potential respondents i.e. employees of business organisations. The purpose of the second pilot test was to eliminate any issues of content, ambiguity or misunderstanding and to evaluate the overall design in the form it was to be administered online.

5.2.7.1 Pilot Test Survey Experts

As mentioned, the primary purpose of the first pilot test was to evaluate the overall survey design and its ability to generate the measurement data to test the hypotheses. To achieve this, two experts were chosen who had an academic background in management as well as experience in survey design. These design experts were given the draft survey and a set of review instructions (refer Appendix D2) that explained the survey objectives and testing requirements, which included an assessment of the survey structure, layout, and instructions to the respondents. The hypothesis models and hypotheses were also provided to ensure that the testers were well informed and could assess whether the survey would achieve its purpose. The testers identified several areas for improvement, which are listed below:

• Definitions to go on one page at the beginning of the survey, with an interactive link to that page in each of the sections.
• Eliminate conjoint questions that contain more than one subject; create one question per subject.
• Avoid complicated questions by simplifying the language.
• Response scales should be consistent e.g. unfavourable to favourable.
• Include additional instructions to highlight the change in perspective for a section or question.
• Include two additional demographic questions to collect gender and age information.
• Improve formatting and layout: more space between questions, use of bold to highlight key words, and page breaks at the end of each section.
• Correct grammatical and spelling errors.

The first pilot resulted in a number of improvements to the draft survey, made by implementing the changes that had been suggested. The testers' feedback emphasised that the overall survey design was good, although they suggested that it could be improved through further simplification. Questions considered to be complex were reworded or, where possible, divided into separate questions that referred to only one subject. Additional instructions were included to indicate a change in perspective for the different sections of the survey.

5.2.7.2 Pilot Test with Potential Respondents

The revised survey was tested with four respondents who were representative of the target population. These pilot testers were currently employed by a business organisation and each worked at a different organisational level i.e. one senior manager, one middle manager and two operational level employees. The purpose of conducting the second pilot was to ensure all aspects of the survey were 'user friendly' and to test its online application e.g. the background information, instructions, question and answer possibilities, and the survey structure, and to assess the time required to complete the questionnaire.
The pilot testers received the same information as the future potential respondents to ensure that the test conditions replicated those of the final survey. They were also given the test instructions and a feedback sheet, both of which were similar to those used in the first pilot test. All four testers completed the survey online and then provided written feedback. The testers identified a few areas for improvement, including minor rewording and an issue with the online survey defaults. The testers provided the following feedback for consideration:

• Minor rewording adjustments such as using nouns instead of pronouns e.g. "the word 'it' should be changed to enterprise/business."
• Provide an N/A option for questions that might be irrelevant to the respondent's organisation.
• The questions do not seem to take into account the scope and context of the subject i.e. high impact decisions versus low impact decisions.
• The "must answer all questions" reminder pop-up window did not appear after each page.
• The 15 minute time estimate to complete the survey is appropriate.

Consideration was given to all the feedback, which resulted in minor design adjustments being made to the survey. Again, the wording of questions was reviewed and minor adjustments made to eliminate ambiguity. The suggestion to include an N/A option had already been considered and rejected in the initial design phase, and on reconsideration it was again decided not to include this option because each of the question subjects was well recognised in the literature, and the Delphi study had identified them as important aspects of an AE. For those questions that could possibly warrant an N/A option, respondents were instead asked to give their opinion based on their experience and perception of their organisation, which would enable them to answer the question. One of the testers gave feedback about a question, but it was apparent that they had not read, or had perhaps ignored, the instructions, because respondents were specifically instructed to answer the survey questions from the perspective of their organisation. Instructions were subsequently prefaced with the words "Please Note" in bold capitals to mitigate the problem of respondents not reading, or perhaps ignoring, instructions related to the questions. As well as these improvements, the fault in the reminder pop-up window was corrected so that it appeared at the end of each page.

Final Design of the AE Survey

The final design of the AE survey consisted of seven sections, with each section designed to elicit information about a specific topic that was briefly described at the beginning of the section (refer Appendix D3). The survey sections are outlined below:

Introduction – The first section introduced the research topic and outlined the focus and objectives of the survey. It also described the structure of the survey, the procedure for answering the survey questions, and the approximate length of time it would take to complete. A link to the Participant Information Sheet was provided. The respondent was first required to agree to the statement of consent and then answer a filter question to be able to proceed with answering the questions.

Definition Page – This section included a set of definitions of terms used in the survey. The respondents were asked to read the definitions to clarify the terms used in the various sections and to refer to them, as required, when answering the questions.
General Information – The third section served two purposes. It gathered general information about the enterprise the respondent worked for, such as geographical location, the number of employees, the type of industry the enterprise operates in, and the respondent's position level in the organisation. The second purpose of this section was to gradually ease the respondent into the survey by asking 5 of the least demanding questions first, which related to the AE scale.

Key Components Factors – The fourth section presented the four key components (SOPI) of an AE. As discussed in section 5.2.6, a seven-point Likert scale was used for the responses. The questions in this section were generated from the development of a set of proposed measurement scales that were identified through the approach discussed in section 5.2.5. The 15 questions each related to the relevant measurement scale and consisted of 5 Strategy, 3 Organisation, 2 Process, 2 Information, and 3 Interwoven Structure questions.

Behavioural Flows and AE Factors – Section Five consisted of 13 questions, 12 of which were designed to gather data in order to understand the three key Behavioural Flows at the different levels of an AE, while 1 question measured Interwoven Structure. Specifically, these questions related to the respondent's position level in the enterprise and all but 1 were used to measure factors that related to the Behavioural Flows measurement scales discussed in section 5.2.5. The seven-point Likert scale (section 5.2.6) was also used for this section. Seven of the questions in this section related to the Behavioural Flows scales and 3 questions related to the Interwoven Behaviour scale. The 2 questions that related to the AE measurement scale served a dual purpose: apart from their association with the AE scale, they were also used to check the validity of the answers given for the other 11 questions.

Behavioural Flows and AE Factors – Section Six replicated, in part, Section Five. It consisted of 14 questions, 10 of which were phrased the same as those presented in Section Five and for the same reason i.e. to measure a set of factors that related to the Behavioural Flows and Interwoven Behaviour Model. However, in this instance, respondents were instructed to answer these questions from the perspective of their entire enterprise. As in the previous section, the remaining 4 questions relating to the AE measurement scale were used to measure the construct as well as to confirm the validity of the answers to the previous 10 questions.

Demographic Details – The final section of the survey, Section Seven, consisted of only 2 questions that asked for basic demographic details about the respondent's gender and age group. The survey ended with a statement thanking the respondents for their input to the study.

Survey Administration

This section discusses the administration process used for the AE survey and outlines the response distribution. The data collected by the survey instrument was used to test the nine hypotheses presented in Table 5.1. The AE survey was administered online and operationalised using SurveyMonkey, the online survey software and questionnaire tool. This administration option was chosen for several compelling reasons, not the least of which is that Web-based surveys are now the preferred option for many research studies (Lowry et al., 2016).
Web-based surveys also offer advantages over traditional methods, such as postal surveys, as they are significantly faster and less expensive than other administration options, and empirical evidence suggests that they yield more complete and accurate data (Dillman, 2007; Fowler, 2014; Fink, 2015). In addition, Web-based surveys enable real-time response validation, automated data entry, and programmable context-sensitive skip patterns (Dillman, 2007).

Selection of Respondents

The sample frame for the survey was "individuals currently employed in business organisations". Quota sampling was used to select the number of respondents. From a research perspective, "…the quota sample is claimed by some practitioners to be almost as good as probability sampling" (Bryman, 2015, p. 187). More importantly, Bryman (2015) argues that quota sampling is "…usefully employed in relation to exploratory work from which new theoretical ideas might be generated" (p. 190) and is often used when developing new measures. For practical reasons the number of respondents had to be limited to 500. It was decided that the target population of interest was employees working for business enterprises located in the USA. There are two key reasons for choosing this particular population. In the context of an AE, the size of the enterprise is of importance, as is the industry context. In general, a larger system is a more complex system, and complexity can influence how an organisation adapts to its environment (Bennet & Bennet, 2004). Similarly, industry context also affects the structures and behaviours of an AE and so is a key element in its development (Haeckel, 2016). Hence, it was important to target respondents who worked in a range of enterprises from small to large and in a variety of industries. To fulfil these desired population characteristics, selecting the survey sample from the business enterprise population in the USA was considered to be a judicious decision. An additional reason was that a large sample of at least 500 observations was deemed a suitable size for the statistical analysis. To meet the survey objectives and quota sampling strategy, it was decided that a sample consisting of three equal units representing senior management, middle management and operational level employees was required.

Online AE Survey Administration

SurveyMonkey was engaged to distribute the self-administered survey. An interactive link to the survey was emailed to the potential respondents, who were representative of the target population i.e. people currently working in a business enterprise. The survey was available online for 14 days and in that time 782 responses to the questionnaire were submitted, of which 504 questionnaires were completed in full. The number of completed questionnaires represented a 64% response rate, which is not only well above the 43% mean response rate for online surveys reported by Nulty (2008) but also above the 50% that is considered an acceptable response rate for social surveys (Nulty, 2008). This meant that the sample size was representative of the target population and sufficient for hypothesis testing using statistical analysis.

Sample Demographics

To evaluate the survey sample, both demographic data about the respondents and general data about the enterprises they worked for were collected. In terms of the enterprise and its industry type, a wide variety of industries were represented, as shown in Table 5.2.
Approximately 30% of the enterprises were in the Business Services, Education, Tertiary Institutions or Information and Communications Technologies industries, while the highest single percentage, 19.25%, was for the "Other" industry category. The diversity of industries reflected in the survey results satisfied one of the two key sampling requirements mentioned in section 5.3.1. Table 5.3 shows the total number of employees in the enterprise worldwide, and that the survey captured a representative distribution of enterprise sizes ranging from 5 to 100,000 plus employees. This result satisfies the second of the two key sampling requirements, which was that respondents work in a range of enterprises from small to large and in a variety of industries (refer section 5.3.1). Just over 50% of respondents were employed in enterprises with fewer than 1,000 employees. The highest percentage of respondents (14.09%) worked for enterprises with 1,000-4,999 employees and, notably, 6.94% of respondents were from very large enterprises with more than 100,000 employees. The questionnaire asked the respondent to answer questions from the perspective of their position level in the enterprise for which they worked (refer section 5.2.8) and also asked about the number of employees in their particular department.

Industry/Enterprise Type                        % of Total Responses   Response Count
Aerospace                                       2.18%                  11
Agriculture                                     0.99%                  5
Aquaculture                                     0.00%                  0
Biotechnology                                   0.99%                  5
Business Services                               10.32%                 52
Communication/Media                             0.60%                  3
Construction                                    6.75%                  34
Education, Tertiary Institutions                11.90%                 60
Energy                                          0.99%                  5
Export/Import                                   0.00%                  0
Forestry                                        0.00%                  0
Government/Public/Defence                       4.17%                  21
Healthcare                                      9.92%                  50
Horticulture                                    0.20%                  1
Information and Communications Technologies     10.91%                 55
Manufacturing                                   6.94%                  35
Mining                                          0.60%                  3
Non-governmental Organization                   0.99%                  5
Retail Trade                                    8.13%                  41
Transport/Storage                               1.59%                  8
Wholesale Trade                                 2.58%                  13
Other (please specify)                          19.25%                 97

Table 5.2: AE Survey Industry/Enterprise Types

Just over 50% of respondents worked in departments with fewer than 20 people, while 9.33% of respondents worked in departments with 500 plus staff. Table 5.4 shows the distribution of respondents per size of department. As the AE survey was distributed to USA employees of USA business enterprises, the majority of respondents (97.42%, i.e. 491 respondents) indicated that they were based in the USA, while the remaining 13 respondents were based in other countries (refer Table 5.5).

Number of Employees in Enterprise    % of Total Responses   Response Count
1-5                                  11.11%                 56
6-19                                 6.75%                  34
20-49                                5.95%                  30
50-99                                6.15%                  31
100-499                              14.29%                 72
500-999                              11.11%                 56
1,000-4,999                          14.09%                 71
5,000-9,999                          8.13%                  41
10,000-19,999                        8.33%                  42
20,000-49,999                        3.77%                  19
50,000-99,999                        3.37%                  17
100,000+                             6.94%                  35

Table 5.3: Total Number of Employees in the Enterprise

Number of Employees in Department    % of Total Responses   Response Count
1-5                                  23.41%                 118
6-19                                 23.81%                 120
20-49                                17.46%                 88
50-99                                12.10%                 61
100-499                              13.89%                 70
500+                                 9.33%                  47

Table 5.4: Number of Employees in the Enterprise Department

Analysis of Results

The data collected by the AE survey instrument was subjected to a robust statistical analysis process to facilitate the hypothesis testing.
This analysis process consisted of three main interconnected phases and utilised several statistical techniques in order to a) prepare the dataset and assess it to identify missing data and outliers (section 5.4.2), b) measure and evaluate correlation between the variables (section 5.4.5), and c) perform a multi-stage factor analysis consisting of an EFA (section 5.5.2), CFA (section 5.6) and SEM (section 5.7.1). These three phases in the analysis process are shown in Figure 5.7.

Country of Respondent's Workplace    % of Total Responses   Response Count
Australia                            0.20%                  1
Austria                              0.20%                  1
Barbados                             0.20%                  1
Central African Republic             0.20%                  1
China                                0.40%                  2
Germany                              0.20%                  1
Mexico                               0.20%                  1
South Africa                         0.20%                  1
Sweden                               0.20%                  1
United Kingdom                       0.60%                  3
USA                                  97.42%                 491

Table 5.5: AE Survey Country Location of Respondents

Assessment of the Survey Data

In this section the preparation and assessment of the survey data for statistical analysis is discussed. The conditions that had to be met to prepare the data for the factor analysis and the results of the data assessment are described.

Figure 5.7: The Phased Analysis Process for the Survey

Missing Data and Outliers

After the 782 responses were received, all responses were examined and 278 questionnaires were found to be incomplete. Over half of the incomplete questionnaires were assessed as 'ineligible respondents' as the respondents did not meet the criteria for participation (refer section 5.3.1). The remaining incomplete responses were examined further; the type of data that was missing could not be ignored or accounted for by procedural aspects, so they were also considered ineligible (Schafer & Graham, 2002; Hair et al., 2014). Therefore, the remaining 504 complete responses, without missing data, were considered suitable for analysis. The data from the 504 responses was screened for outliers that could potentially be removed from the dataset. An outlier is defined as an atypical observation whose value is significantly different from the other observation values, and so is not representative of the population. Outliers can have a univariate, bivariate or multivariate profile (Hair et al., 2014), and the following explains how the survey data was evaluated for each of these profiles using established methods. SPSS was used to detect possible univariate outliers. The numeric (rating) values were converted to standard z scores with a mean of 0 and standard deviation of 1 (Hair et al., 2014; Kline, 2015). As a general rule, univariate outliers for continuous variables are those with an absolute z score greater than 3.29 (p < .001, two-tailed test) (Tabachnick & Fidell, 2007, 2013). However, it is accepted that for larger sample sizes the threshold value of standard scores can be raised to 4.00 (Hair et al., 2014). Applying these thresholds to the survey data revealed two outliers (case 30, z score 3.37442 and case 31, z score 4.10074). For case 30, the z score was marginally over the recommended conservative value of 3.29 and well under 4.00, so it was retained. However, case 31 required consideration as to whether it should be retained (Hair et al., 2014). Some authors argue that if only a few outliers exist, and they are not extreme, they can be retained (Cohen et al., 2013). As there was only the one such outlier, it was decided that case 31 would be retained and not deleted from the dataset.
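A minimal R sketch of the univariate screening just described is given below; it standardises each item and flags cases whose absolute z score exceeds 3.29, or the relaxed 4.00 threshold for large samples. The data frame 'survey_items', holding the numeric item ratings, is an assumed name rather than a variable from the thesis, and the thesis itself performed this step in SPSS.

    # survey_items: assumed data frame of numeric item ratings (one column per item)
    z_scores <- scale(survey_items)                 # z scores: mean 0, standard deviation 1

    # Conventional threshold: |z| > 3.29 (p < .001, two-tailed)
    strict_flags  <- which(abs(z_scores) > 3.29, arr.ind = TRUE)

    # Relaxed threshold for larger samples (Hair et al., 2014)
    relaxed_flags <- which(abs(z_scores) > 4.00, arr.ind = TRUE)

    strict_flags   # row = case (potential outlier), col = offending item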
Bivariate and multivariate outliers are observations with combined values that fall in the outer range of the distribution of observations. The bivariate and multivariate identification of outliers examines the relationship of several variables, whereas the univariate method of detection is performed independently on each variable (Cousineau & Chartier, 2015). The standard way of identifying multivariate outliers is through the Mahalanobis D2 measure, which is "a multivariate assessment of each observation across a set of variables" (Hair et al., 2014, p. 64). It is generally accepted that outliers are not always problematic and can even be beneficial to the analysis. In certain circumstances they can reveal characteristics of the population that would otherwise remain undiscovered by the usual analysis procedures (Hair et al., 2014). So, unless there is absolute proof that multivariate outliers are truly atypical and unrepresentative of other observations, they should not be deleted (Osborne & Overbay, 2004; Hair et al., 2014). The argument is that multivariate outliers will almost always exist in most datasets and removing them will only encourage more to emerge. This is particularly true of larger datasets, as they are more representative of the population and therefore the likelihood of outlying values increases (Osborne & Overbay, 2004). Taking into account the aforementioned recommendations, and the characteristics of the survey population, the multivariate outliers that existed were retained due to the likelihood that they would be beneficial, rather than detrimental, to the study.

Sample Size

The AE survey instrument yielded a total of 504 completed responses comprising three sample units: 188 responses from senior managers, 141 responses from middle managers and 175 responses from operational level employees. The sample size is an important element that is usually determined by theoretical and practical considerations (refer section 5.4.3). Theoretically, a larger sample is always better as it minimises sampling error (Dillman, 2007). In terms of factor analysis, the minimum sample size is often established by the number of latent variables that are expected to emerge. Factor analysis techniques such as EFA and SEM require relatively large sample sizes to ensure the parameter estimates are reasonably stable (Kline, 2015; Schafer & Graham, 2002), and samples of 200 or more are recommended (Kline, 2015). Although no definitive guidelines have been established, Hair et al. (2014) suggest that a ratio of observations to factors of at least 5:1 is desirable. Some authors argue that a much higher 20:1 ratio of the number of respondents to the number of factors is ideal, but acknowledge that this ratio might not be practical and concede that a 10:1 ratio is more realistic. For this research, 38 factors were assessed in the survey by 504 respondents. These factors were subsequently evaluated and confirmed, which resulted in a 13:1 ratio that satisfactorily meets the sample size requirement.

Normality Assessment

It is suggested that many of the statistical analysis methods require both univariate and multivariate normality of the dataset. This is often recommended for the EFA and CFA methods so that an appropriate estimation technique can be used. However, Hair et al. (2014) argue that multivariate normality is only required when it is critical, and that univariate normality for all of the factors is sufficient.
The authors also suggest that a simple assessment of the statistical z value for skewness can be used to evaluate data normality: a calculated z value beyond ±2.58 (for the 0.01 significance level) or ±1.96 (for the 0.05 significance level) indicates that the distribution of a factor is non-normal. Alternatively, the factors' absolute values of skewness and kurtosis can be used; factors with absolute values greater than 2 and 7 respectively are often tagged as non-normal. However, Kline (2015) offers less stringent criteria for assessing data normality and advocates that factors with skewness absolute values greater than 3 and kurtosis values above 8.0 are considered non-normal. In order to assess the normality of the dataset for this survey, a descriptive analysis was conducted. The results showed both visually, by inspecting the distribution curves for each factor, and according to the absolute values of the skewness and kurtosis, that the distribution was non-normal (refer Appendix D4). This was confirmed by the z values, which also reveal that a number of the factor values are statistically significant. Overall, the distribution tended to be skewed to the left with positive kurtosis. However, the critical judgement is to assess whether the distribution's non-normality is sufficiently large to be of concern, which is highly dependent on the sample size. It is generally accepted that larger sample sizes reduce the effects of non-normality and so it is less problematic (Hair et al., 2014). Curran, West and Finch (1996) demonstrated that for CFA with sample sizes of 500 observations the non-normality bias is almost non-existent. The dataset for this research contained a number of non-normal items and, although the sample was large, the Satorra-Bentler scaled Chi-square test was applied during the analysis to mitigate the issue of non-normality in the data. The Satorra-Bentler scaled Chi-square test is frequently used as it is robust (Satorra & Bentler, 2001).

Correlations

A correlation matrix is a set of correlation coefficients among all the variables being considered in the survey research. It is useful because it shows whether a predictive relationship exists between two random variables, so assessment of the matrix is an important step when conducting a factor analysis (Bollen & Lennox, 1991; Tabachnick, Fidell & Osterlind, 2001). However, if the correlation coefficients indicate that many of the factors are not related, then the use of factorial procedures for analysis would be questionable (Dziuban & Shirkey, 1974; Tabachnick, Fidell & Osterlind, 2001). Initially, two well-recognised tests for measuring correlation between variables were performed: the Spearman and Kendall rank correlation (2-tail) tests. Both are non-parametric tests for ordinal data and are used when the data is non-normal. The resulting inter-item correlation matrix showed a substantial number of moderate to strong correlations that were significant at the 0.01 level (2-tail), indicating that they could be used as a basis for the formation of meaningful components.
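The following minimal R sketch illustrates the two screening steps described above: a simple moment-based check of skewness and kurtosis, with the skewness z value computed against the approximate standard error sqrt(6/n), and the non-parametric Spearman and Kendall correlation matrices. The data frame 'survey_items' and the simple moment formulas are illustrative assumptions, not the exact routines used in the study.

    # survey_items: assumed data frame of numeric item ratings
    n <- nrow(survey_items)

    # Moment-based skewness and excess kurtosis for each item
    skew <- sapply(survey_items, function(x) mean((x - mean(x))^3) / sd(x)^3)
    kurt <- sapply(survey_items, function(x) mean((x - mean(x))^4) / sd(x)^4 - 3)

    # z value for skewness; |z| beyond 2.58 (p < .01) or 1.96 (p < .05) suggests non-normality
    z_skew <- skew / sqrt(6 / n)

    # Non-parametric inter-item correlations suitable for ordinal, non-normal data
    spearman_r  <- cor(survey_items, method = "spearman")
    kendall_tau <- cor(survey_items, method = "kendall")

    round(spearman_r, 2)   # inspect for moderate to strong, meaningful correlations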
Factor Analysis

This section discusses the process by which the construct validity and measurement models (refer Figures 5.2 and 5.3) were assessed using factor analysis. As the first step, an Exploratory Factor Analysis (EFA) was performed to identify the factors that best characterise the set of variables and to prepare these variables for the next step of the analysis. This next step was a Confirmatory Factor Analysis (CFA), which preceded the Structural Equation Modelling (SEM), as it is common practice to first assess the measurement model before evaluating the structural model (Jackson, Gillaspy & Purc-Stephenson, 2009). The purpose of an SEM is to analyse the theoretical relationships between the measured variables and latent constructs. For this reason, the CFA was followed by the SEM. Before assessing the statistical significance of the relationships using an SEM analysis, it is important to note the relevant characteristics of the technique. Kline (2015) argues that although numerous effects can be tested for statistical significance using SEM, the statistical tests play a lesser role in an SEM assessment in comparison to other techniques, such as ANOVA. Kline (2015) suggests that there are four reasons that support the use of the technique for this statistical analysis. First, SEM enables the entire model to be assessed as it provides a high-level perspective that enables a holistic examination of the model, rather than just assessment of individual effects. Second, the requisite large sample size for SEM will, for the most part, generate statistically significant results (Thompson, 1992). Third, the p values for effects of the latent variables are estimated by the software package and so will vary, albeit slightly, depending on the software used. Fourth, examination of the model should focus on estimating the magnitude of the effects rather than the outcomes of the statistical tests. Therefore, SEM will generate better estimates of the size of each of the effects in this study, unlike more traditional techniques such as ANOVA.

Measurement Model Assessment

The two structural measurement models were assessed after the data had been evaluated and considered suitable for statistical analysis. The first of the two hypothesised models, the Adaptive Zone model (refer Figure 5.2), is comprised of four reflective first-order constructs and two second-order constructs i.e. Interwoven Structure (IS) and Adaptive Enterprise (AE). The second structural measurement model, the Behavioural Flows model (refer Figure 5.3), includes three reflective first-order constructs and two second-order constructs i.e. Interwoven Behaviour (IB) and Adaptive Enterprise (AE). Throughout the remainder of this chapter, for simplicity and clarity, the Adaptive Zone measurement model will be referred to as AZM Model and the Behavioural Flows measurement model as BFM Model. A multi-step process for conducting factor analysis, as advocated by Furr (2011), has been adapted to include the EFA process and is presented in Figure 5.8.

Figure 5.8: The Factor Analysis Process (adapted from Furr, 2011)

Exploratory Factor Analysis

EFA is a method used to detect underlying relationships among measured variables which, in this study, are the questionnaire items. EFA is widely utilised and is described as a "complex multi-step process" (Osborne & Costello, 2009, p. 131). The EFA method identifies the factors contained in a dataset based on correlations among variables (Field, 2013; Tabachnick & Fidell, 2013). These variables best characterise a particular set of measured factors or constructs.
Essentially, EFA is based on the assumption that any variable may be related to any factor, so the purpose of the analysis is to identify which of the variables have the highest loadings on the different factors. There are a number of factor extraction methods to choose from to discover the best variable/factor model fit, such as principal axis factoring (PAF), principal components analysis (PCA), and maximum likelihood (ML). The ML method, in particular, is recommended for data that is normally distributed; however, the data gathered for this study has a non-normal distribution (refer section 5.4.4). In these instances, the recommended methods are those classified as principal factor methods, such as the PAF procedure in SPSS and the R statistical software (Osborne & Costello, 2009). In addition to choosing an appropriate factor extraction method, a decision needed to be made about which factor rotation method would be the most suitable. As with the extraction method, there are several rotation methods from which to choose. These methods fall into one of two categories: orthogonal methods and oblique methods. Orthogonal methods require factors to be uncorrelated, whereas oblique methods allow the factors to correlate. The oblique methods are often used in the social sciences because it can be assumed that there is some correlation among the factors. Direct oblimin, quartimin and promax are oblique methods commonly available in statistical software (Osborne & Costello, 2009). Since this research is in IS, a social science, it was anticipated that the factors would be correlated in some way. The promax rotation method was chosen for the EFA analysis because it has the advantage of being fast and conceptually simple (Abdi, 2003). An EFA was conducted using the collected data in order to identify the variables that best characterise each of the measurement models' factors. Although a set of variables had already been proposed for the 10 factors of both measurement models, performing an EFA enabled the evaluation and discovery of the relationships among those variables and factors. The variables and factors that emerged from the data after performing the EFA were then used to conduct the CFA and SEM analysis.

5.5.2.1 The EFA Process

To perform the EFA, and the subsequent CFA and SEM, the R statistics software was used to evaluate both measurement models, including the variables and factors of each. AZM Model was the first to be examined, followed by BFM Model. The following section describes the EFA evaluation steps. The first step of the EFA process was to extract the factors underlying the data, together with the variables and their loadings. The process involves complex matrix algebra, and the basic statistic used is the correlation coefficient, which determines the relationships between the variables (Yong & Pearce, 2013). There are several approaches to determining the number of factors to extract, but the basic premise of each approach is either analytical or visual. This study used the analytical techniques of the Kaiser Criterion (eigenvalue greater than one) and variance explained criteria (the cut-off values can vary from 0.9 to as low as 0.1). The visual technique used was Cattell's screeplot, which is a graphical representation of the factors and the corresponding eigenvalues. The graphical analysis involves plotting the eigenvalues in descending order of their magnitude against their factor numbers.
Cattell's scree test is used for determining the factors to be retained. The algebraic matrix calculations produce eigenvectors (factors) that are linear representations of the variables' shared variance (Field, 2013; Tabachnick & Fidell, 2013). Essentially, the longer the eigenvector the more variance it explains, with the eigenvectors that explain the most variance in the data being retained (Field, 2013). By examining the screeplot generated by the statistical package, the factors with values above the 'break point' (the point at which the curve flattens out) are the ones that should be retained. The R software indicates this break point on the screeplot by placing a line through it. The EFA was performed using the 'factanal' function, which fits a common factor model by the method of maximum likelihood. The rotation option used for oblique factors was 'promax' and the 'Bartlett' option produced the factor scores. In the following sections on EFA (refer sections 5.5.2 - 5.5.2.6), the EFA assessment procedures for AZM Model and BFM Model will be discussed.

5.5.2.2 EFA Assessment Adaptive Zone Measurement Model

As the purpose of the EFA was exploratory, the assessment began by specifying a 6 factor analysis to identify the potential underlying relationships of the 27 measured variables (the 27 questionnaire items) stipulated in the EFA AZM Model and included in the analysis (refer Figure 5.5). The reason for starting with an a priori criterion of 6 factors was that the proposed measurement model (refer Figure 5.2) is comprised of 6 constructs (Osborne & Costello, 2009). The cut-off value to display the loadings was set at the accepted value of 0.3 (Tabachnick & Fidell, 2013). After examining the output, the indicators implied that the 6 factor solution was not a good fit. The p-value was low, which suggested that the hypothesis that 6 factors fit the data perfectly could be rejected. In addition, 2 of the 6 factors had very low loadings and so accounted for a minimal amount of the variance in the data. In accordance with the factor extraction procedure advocated by Osborne and Costello (2009), the data was run again setting the number of factors at 7, then 5, in an attempt to generate the "cleanest" factor solution. The cut-offs were initially set at 0.3, then 0.1, in order to fully interpret which factor solution was the better fit. Both of these analyses yielded results similar to the 6 factor solution. The 5 factor solution, however, demonstrated an improvement in the p-value statistic. As part of the exploratory and assessment process, the data was analysed using the nFactors software to help determine whether 5 was the optimal number of factors to extract. This software offers additional functionality to support a decision on the number of factors to fit. In this case, the nFactors results suggested a 4 factor solution was the best fit for the data. Resuming the assessment using R, an a priori criterion of 4 factors was specified, with the cut-off set at 0.3. The results were promising, as the 4 factor analysis showed an improvement in the p-value and so indicated a better fit compared with the 5, 6 and 7 factor solutions that had already been run.
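As a minimal sketch of the extraction procedure just described, the R code below computes the eigenvalues of the inter-item correlation matrix for the Kaiser criterion and scree plot, and then fits competing maximum likelihood factor solutions with 'factanal', promax rotation and Bartlett scores. The data frame 'azm_items', assumed to hold the 27 numeric AZM Model items, is an illustrative name only.

    # azm_items: assumed data frame holding the 27 numeric AZM Model items

    # Kaiser criterion and scree plot from the eigenvalues of the correlation matrix
    eig <- eigen(cor(azm_items))$values
    sum(eig > 1)                                                        # factors with eigenvalue > 1
    plot(eig, type = "b", xlab = "Factor number", ylab = "Eigenvalue")  # scree plot

    # Maximum likelihood EFA with oblique (promax) rotation and Bartlett factor scores
    efa_6 <- factanal(azm_items, factors = 6, rotation = "promax", scores = "Bartlett")
    efa_4 <- factanal(azm_items, factors = 4, rotation = "promax", scores = "Bartlett")

    # Compare the p-values of the 'k factors are sufficient' test and inspect the loadings
    c(six_factor = efa_6$PVAL, four_factor = efa_4$PVAL)
    print(efa_4$loadings, cutoff = 0.3)     # suppress loadings below the 0.3 display cut-off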
The visual examination of the screeplot (refer Figure 5.9), which shows the solution to the scree test and the significant eigenvectors, indicated that the 4 factors with values above the 'break point' explained the largest amount of variance in the data and should therefore be retained. The analytical results of the 4 factor solution, as shown in Table 5.6, were examined to verify patterns of significant factor loadings. This included issues of insignificant factor loadings and the cross-loading of a variable on more than one factor.

Figure 5.9: The Screeplot of Factors Underlying the EFA AZM Model Dataset

Factor loadings represent how much of the variance in the data is accounted for by the factor solution. As a general rule, factor loadings of 0.50 and above are significant. However, in larger sample sizes (greater than 350 observations), factor loadings of 0.30 can be considered as having practical significance (Yong & Pearce, 2013; Hair et al., 2014). Field (2000), though, suggests that interpreting factors with loadings of less than 0.40 should be done with care as "the significance of a loading gives very little indication of the substantive importance of a variable to a factor" (p. 441). This study takes a conservative approach, so variables with loadings greater than 0.5 were considered significant and were consequently retained (Osborne & Costello, 2009). In terms of the cross-loading of one variable onto more than one factor, Tabachnick and Fidell (2013) cite a minimum loading of 0.32 for such items. However, Hair et al. (2014) suggest that if the cross-loading is insignificant, but the variable has theoretical and substantive significance, then the variable should be retained. After considering these recommendations, and for the purposes of this study, it was decided to interpret the 4 factor solution. The factor loadings of the rotated factor matrix (refer Appendix E1, p. 424) revealed that the amount of variance explained by the 4 factor solution was 51%. While certain cumulative percentages have been suggested, no set threshold exists. For example, in the humanities 50% of explained variance is common, although Hair et al. (2014) explain that 60% is generally accepted, and note that eliminating the low loading variables will improve this value. However, variables with low factor loadings should still be evaluated and a decision made as to whether they should be retained, even though they do not account for a significant amount of variance in the dataset (Osborne & Costello, 2009; Hair et al., 2014). When examining the pattern matrix of the 4 factor solution, the factor loadings show that each factor has 4 or more variables with significant loadings greater than 0.5 (Osborne & Costello, 2009), whereas four of the variables did not perform as predicted as they have loadings of less than 0.5. In addition, two of these particular variables exhibit cross-loadings on Factor 1 and Factor 4. To improve the fit of the model, and taking into account the aforementioned recommendations by Hair et al. (2014), the 4 low loading variables (less than 0.5) i.e. q16-01, q18-01, q24-01 and q31-01 were evaluated to decide whether they should be retained. The q16-01 and q18-01 variables were questions in the survey that measured the 'Flexible Organisation' and 'Loosely Coupled Process' constructs respectively.
The questions could be considered domain specific and so would have a precise meaning to respondents who are knowledgeable in the areas of management and BP. However, as respondents were from a variety of industries, organisational backgrounds, and work functions, the questions could have been interpreted differently by others. It could therefore be assumed that this might account for the unique variance and low factor loadings. Hence, due to the doubt about question interpretation, it was decided to eliminate these two low loading variables. Although the variable q24-01, "The Strategy drives the design of appropriate Processes, Organisational structures and Information systems", cross-loaded onto 2 factors with low loadings of 0.30 and 0.34, it was considered to be important to the Interwoven Structure construct from both theoretical and substantive perspectives. Therefore, in line with common practice, this variable was retained (Osborne & Costello, 2009; Hair et al., 2014). The final variable evaluated was q31-01, "I am adaptive". This question was included in the survey as it was relevant to the overall research and also provided a logic check of the respondent's other answers. However, it was considered less relevant to the hypothesis testing because it was of minor theoretical value to the Adaptive Enterprise construct. This was confirmed by its low factor loading and, for the aforesaid reasons, it was consequently eliminated from the analysis.

5.5.2.3 Final EFA Adaptive Zone Measurement Model

After the initial evaluation of the 4 factor solution, 24 of the initial 27 variables were retained and the 4 factor analysis was rerun to re-evaluate its robustness. As with the previous factor solutions, the analysis was performed using the 'factanal' maximum likelihood function with the 'promax' rotation for oblique factors. The results of this 4 factor analysis were promising. The p-value was significant and this re-specified 4 factor solution was an improvement on the previous one. The amount of explained variance had increased from 51% to 55% and there were three other notable changes from the previous 4 factor solution. First, the pattern matrix indicated that all the factor loadings were now significant (>0.40) (Field, 2000; Yong & Pearce, 2013; Hair et al., 2014) and, second, the previously cross-loading variable q24-01 now loaded onto only Factor 4. The third change was that variable q17-01, which had previously loaded onto Factor 2, now loaded onto Factor 3 (refer Table 5.6). Given that there were 4 obvious factors in the solution and all the variable loadings were significant, the next task was to give each factor a meaningful name. Traditionally, this decision is based on identifying the construct that is common to the variables with significant loadings, and then naming the factor after that construct (Steiger, 2009; Hair et al., 2014).
Variables and loadings (Factor 1 ISS, Factor 2 AE2, Factor 3 SOP, Factor 4 AE1):

q15-01   0.53
q19-01   0.60
q20-01   0.77
q21-01   0.77
q22-01   0.92
q23-01   0.86
q08-01   0.74
q08-02   0.82
q08-03   0.83
q08-04   0.82
q08-05   0.80
q09-01   0.66
q10-01   0.78
q11-01   0.60
q12-01   0.77
q13-01   0.83
q14-01   0.53
q37-01   0.80
q38-01   0.74
q39-01   0.71
q40-01   0.89
q17-01   0.49
q24-01   0.40

SS loadings (Factors 1-4):   3.67   3.30   3.25   3.08
Proportional Var:            0.15   0.14   0.14   0.13
Cumulative Var:              0.15   0.29   0.43   0.55

Table 5.6: EFA AZM Model Four Factor Solution and Factor Loadings

Elements of integration and IS appeared to be themes of factor 1, so it was named 'Interwoven Structure and Systems' (ISS) after the constructs Interwoven Structure and Information. Strategy and organisation were the key themes of the variables that loaded onto factor 3, hence the name 'Strategy Organisation and Process' (SOP) after the SOPI constructs. Lastly, the adaptive theme was common to factor 2 and factor 4. However, extrinsic adaptive behaviours distinguished factor 2 while intrinsic adaptive behaviours defined factor 4. Hence the factors were respectively named 'Adaptive Enterprise 2' (AE2) and 'Adaptive Enterprise 1' (AE1).

5.5.2.4 EFA Assessment Behavioural Flows Measurement Model

EFA BFM Model was similarly assessed through the use of R and the 'factanal' maximum likelihood function. The rotation option for oblique factors was 'promax' and the 'Bartlett' option produced the factor scores. The cut-off value to display the loadings was set at 0.3. Initially, an a priori criterion of 5 factors was specified to identify the underlying relationships among the 28 measured variables (the 28 questionnaire items) stipulated in EFA BFM Model (refer Figure 5.6). This a priori specification of 5 factors was based on the proposed 5 construct Behavioural Flows measurement model (refer Figure 5.3). Examining the output, the screeplot shown in Figure 5.10 indicates that 4 factors, with values above the 'break point', explain the largest amount of variance in the dataset. In addition, the analytical results of the 5 factor solution were inspected in order to verify patterns of significant factor loadings and cross-loadings. As with EFA AZM Model, factor loadings of 0.40 and above were considered significant (Osborne & Costello, 2009). The pattern matrix showed that 4 of the 5 factors had variables with significant loadings (>0.4), while the remaining factor had 3 variables with insignificant loadings (q26-01, q27-01 and q34-01) and included one variable that cross-loaded (q26-01). The 3 low loading variables were evaluated in terms of their theoretical and substantive importance to the hypothesised construct to decide whether they should be retained. All 3 variables were associated with the Interwoven Behaviour construct. The cross-loading variable q26-01 was mirrored by the significant variable q33-01 and so, for this reason, it was eliminated, despite the question being asked from a different perspective. The other 2 variables (q27-01 and q34-01) mirrored each other, as the same question applied to different organisational levels. They were also partially reflected in 6 other significant loading variables: q25-01, q25-02, q25-03, q32-01, q32-02 and q35-03. Therefore, q27-01 and q34-01 were considered less important and, given their low factor loadings, were eliminated from the analysis.
Figure 5.10: The Screeplot of Factors Underlying the EFA BFM Model Dataset

5.5.2.5 Final EFA Behavioural Flows Measurement Model

After evaluation of the 5 factor solution, 25 of the initial 28 variables were retained and a 4 factor solution was tested to evaluate its fit. The results of this 4 factor analysis were favourable, as the p-value was significant and demonstrated an improvement on the 5 factor solution. The amount of explained variance had increased from 57% to 61%, and the pattern matrix indicated that all the factor loadings were significant (refer Table 5.7).

Variables and loadings (Factor 1 AE3, Factor 2 SILR, Factor 3 DADL, Factor 4 AE2B):

Q29-01   0.54
Q30-01   0.59
Q33-01   0.77
Q35-01   0.58
Q35-02   0.65
Q35-03   0.72
Q35-04   0.73
Q36-01   0.81
Q37-01   0.79
Q38-01   0.79
Q39-01   0.79
Q40-01   0.84
Q28-01   0.75
Q28-02   0.83
Q28-03   0.85
Q28-04   0.84
Q25-01   0.56
Q25-02   0.76
Q25-03   0.71
Q32-01   0.75
Q32-02   0.77
Q32-03   0.77
Q08-01   0.75
Q08-02   0.79
Q08-03   0.73

SS loadings (Factors 1-4):   6.92   3.39   3.24   1.79
Proportional Var:            0.28   0.14   0.13   0.07
Cumulative Var:              0.28   0.59   0.51   0.61

Table 5.7: EFA BFM Model Four Factor Solution and Factor Loadings

Since the 4 factor solution was a good fit, and all the variable loadings were significant, the next step was to give each factor a meaningful name. Similar to EFA AZM Model, the adaptive theme was common to both factor 1 and factor 4. Factor 1 demonstrated intrinsic, interwoven, adaptive behaviours and so was named 'Adaptive Enterprise 3' (AE3). Meanwhile, factor 4 was a subset of the EFA AZM Model AE2 and it too exhibited extrinsic adaptive behaviours, and so was named 'Adaptive Enterprise 2B' (AE2B). The elements of 'sense', 'interpret', 'learn', and 'respond' defined the overall theme of factor 2 and, accordingly, it was named SILR. The theme of factor 3 was described by the elements 'data analysis' and 'data latency' and so it was named DADL. Notably, the combined themes of factor 2 and factor 3 characterised the three constructs 'Blended Information Flows', 'Blended Learning Flows' and 'Blended Control Flows' in the Conceptual Hypotheses Model of the Behavioural Flows (Figure 5.6). Factor analysis is an iterative procedure, so once the EFA was completed and a simple structure identified (factors and loadings) for EFA AZM Model and EFA BFM Model, a CFA was performed to verify the hypothesised measurement models.

5.5.2.6 Summary of the EFA Results

The data relating to several relevant AE variables was collected by the survey instrument and assessed using the statistical technique of EFA. The EFA was used to condense the survey data into composite dimensions and identify the underlying factor structure. The analysis was performed using the R statistics software and the 'factanal' maximum likelihood function. Two measurement models, EFA AZM Model and EFA BFM Model, which were representative of the two hypothesis models, were assessed in order to detect underlying relationships among the measured variables (survey questions). For EFA AZM Model, a 4 factor solution was determined and the factors were named according to the hypothesis model construct that was common to the factor variables. Hence, the EFA AZM Model factors were named ISS, SOP, AE2 and AE1. In the case of EFA BFM Model, a 4 factor solution was also determined and the factors were named SILR, DADL, AE3 and AE2B. The following section discusses the CFA that was used to test how well the measured variables represented the constructs (Hair et al., 2014).
Confirmatory Factor Analysis
In preparation for the SEM a CFA is conducted. The purpose of CFA is to test how well a set of measured variables represents a small number of constructs. That is, it assesses the construct validity of the proposed measurement theory. Thus, CFA enables theory-testing, theory-comparison and theory-building from a measurement perspective (Furr, 2011). In addition, using the CFA as an exploratory method of enquiry can provide insight as to which variables are better than others, and why. CFA is unlike EFA in that both the number of factors that exist within a set of variables and the assignment of which variables load onto each of the factors have to be specified (Hair et al., 2014). For the purposes of convention, in the following sections relating to the CFA and SEM, the term ‘measured variable’ will refer to the former term ‘variable’ and the term ‘latent variable’ will refer to the former terms ‘factor/construct’. The multi-step process for conducting a CFA, as advocated by Furr (2011), was adapted for this study to include composite reliability and construct validity and is presented in Figure 5.11.

Figure 5.11: The CFA Process adapted from Furr, 2011

Develop Theory and Measurement Model
As previously mentioned, a CFA was performed after the EFA had identified the simple structures for AZM Model and BFM Model. Prior to running the CFA, measurement models were developed that included the hypothesised relationships between the measured variables and the latent variables. The measurement models were based on the pattern matrices identified in the EFA. The individual latent constructs were defined and the measured variables were specified. Conventional wisdom recommends a minimum of 3, but preferably 4, measured variables for each construct (Hair et al., 2014; Kline, 2015). In the following sections on CFA (refer section 5.6), the assessment procedures for both AZM Model and BFM Model will be discussed.

5.6.1.1 CFA Adaptive Zone Measurement Model
The measurement model, shown in Figure 5.12, was a reflective model (Coltman et al., 2008) comprised of the four reflective latent variables and the measured variables that emerged from the EFA, namely ISS, SOP, AE2 and AE1.

Figure 5.12: The a priori CFA AZM Model and the Hypothesised Relationships

Each of the latent variables had a minimum of 5 and a maximum of 7 measured variables. For purposes of clarity, the developed CFA measurement model was named ‘CFA AZM Model’.

5.6.1.2 Specify Statistic Package Parameters
The CFA was set up using the lavaan package in R. Since the data was non-normal, the Satorra-Bentler scaled test statistic was applied together with maximum likelihood estimation. The significance threshold was set at p < .001. The measured variables were allowed to load on the hypothesised latent variable and the cross-loadings were restricted to zero. Also, the latent variables could correlate freely and their variance scales were fixed at 1.00 (Jackson et al., 2009; Kline, 2015). There are numerous fit indices that can be relied on. However, four fit indexes have become popular with researchers and are among the most widely reported in the SEM literature. Each one defines fit in a different way (Hu & Bentler, 1999; Hair, 2015), works well for detecting model misspecification, and is not influenced by sample size (Jackson et al., 2009; Hair et al., 2014; Kline, 2015).
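These parameter settings translate roughly into the lavaan sketch below. The item names and the data frame survey_data are placeholders rather than the actual questionnaire identifiers; the item-to-construct assignments follow the EFA pattern matrix, and the four fit indexes discussed next can be obtained from the fitted object with fitMeasures().

# A minimal lavaan sketch of the reflective CFA AZM Model. Estimator 'MLM'
# gives maximum likelihood with the Satorra-Bentler scaled test statistic;
# std.lv = TRUE fixes the latent variances at 1.00 while the latent variables
# correlate freely and cross-loadings remain fixed at zero.
library(lavaan)

azm_measurement <- '
  ISS =~ iss1 + iss2 + iss3 + iss4 + iss5 + iss6
  AE2 =~ ae2_1 + ae2_2 + ae2_3 + ae2_4 + ae2_5
  SOP =~ sop1 + sop2 + sop3 + sop4 + sop5 + sop6 + sop7
  AE1 =~ ae1_1 + ae1_2 + ae1_3 + ae1_4 + ae1_5 + ae1_6 + ae1_7
'
fit_azm <- cfa(azm_measurement, data = survey_data,
               estimator = "MLM", std.lv = TRUE)

fitMeasures(fit_azm, c("cfi.robust", "tli.robust", "rmsea.robust", "srmr"))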
The four fit indexes are: Comparative Fit Index CFI (Bentler, 1990), Tucker-Lewis Index TLI (Tucker & Lewis, 1973), Root Mean Square Error of Approximation RMSEA (Steiger, 1990), and Standardised Root Mean Square Residual SRMR. (These parameter specifications and fit indexes were also used to evaluate the SEM models discussed in section 5.7.1.) The recommended cutoff values for these indices are: CFI and TLI values should both exceed 0.90 and preferably exceed 0.95, RMSEA should be less than 0.06, and SRMR less than 0.08 (Hu & Bentler, 1999; Jackson et al., 2009).

5.6.1.3 Compute and Examine Model Fit Indices
After examining the output of the CFA, the fit indices for the measurement model, shown in Table 5.8, indicate that the model fits the data reasonably well. The CFI and TLI, both of which are ‘goodness-of-fit’ measures, meet the recommended cutoff of close to 0.95, while the RMSEA and SRMR, often referred to as ‘badness-of-fit’ measures, are well within the recommended 0.06 and 0.08 respectively. Other useful parameter estimates, such as the standard errors and standardised loading estimates for the measured variables on the latent constructs, can be seen in Appendix E2.

5.6.1.4 Report Model Fit Indices & Parameter Estimates
The CFA demonstrated adequate measurement properties of the operationalised model and that the a priori hypothesised relationships between the underlying latent variables and measured variables did exist. However, given that this was an exploratory CFA “that attempted to discover potential latent structures among variables” (Segars & Grover, 1993, p. 518), it also indicated that further development work could improve the four reflective constructs (latent variables) and their variables to capture their various dimensions. Notably, operationalisation of the conceptual definitions of the SOP and AE2 latent variables provided insights for their refinement. Apart from the measurement model demonstrating goodness of fit, the final step of the CFA process was to assess the model’s composite reliability and construct validity.

CFA Indices AZM Model ML
User model versus baseline model:
Comparative Fit Index (CFI) >0.90 0.955
Tucker-Lewis Index (TLI) >0.90 0.949
Root Mean Square Error of Approximation: RMSEA <0.06 0.054
90 Percent Confidence Interval 0.049 – 0.059
P-value RMSEA <= 0.05 0.113
Standardized Root Mean Square Residual: SRMR <0.08 0.051
Table 5.8: CFA AZM Model Evaluation Indices

5.6.1.5 Determine Model validity
Reflective measurement models are assessed in terms of composite reliability and construct validity, which can be established using measures of internal consistency reliability, convergent validity, discriminant validity, and nomological validity (Straub et al., 2004; Henseler et al., 2009; Hair et al., 2014). In the following discussion the outcomes for each of the reliability and validity assessments will be explained.

5.6.1.6 Internal Consistency Reliability
Internal consistency reliability refers to the inter-correlation among the variables used to measure a single latent construct. Highly inter-correlated variables suggest that they measure the same concept (Henseler et al., 2009; Hair et al., 2014). The Cronbach’s alpha coefficient is the most common measure to test reliability and is expressed as a number between 0 and 1. Kline (2015) describes reliability coefficient values above 0.9 as excellent, those above 0.8 as very good, and 0.7 as adequate.
However, there is another reliability check that can be derived in CFA and is considered more robust than Cronbach’s alpha: the composite reliability (CR) measure. CR takes into account variables that have different factor loadings and is thus distinct from Cronbach’s alpha, which assumes all variables are equally weighted (Hair et al., 2014). However, Peterson and Kim (2013) argue that generally the difference between the two measures is not practically meaningful, so for this reason both measures were used. The Cronbach’s alpha coefficient and CR for each of the latent variables of CFA AZM Model are shown in Table 5.9. Based on these measures, it seems that the 4 latent variables have ‘very good’ internal consistency reliability.

Latent Variable Cronbach’s alpha >0.7 CR >0.7 AVE >0.4 - 0.5 No. of Variables
ISS 0.891 0.895 0.590 6
AE2 0.911 0.911 0.673 5
SOP 0.843 0.844 0.449 7
AE1 0.926 0.929 0.686 7
Table 5.9: CFA AZM Model Measures of Internal Consistency Reliability and Convergent Validity

5.6.1.6.1 Convergent Validity
Convergent validity refers to the degree to which a set of variables that measure a specific latent variable share a high proportion of variance in common (Henseler et al., 2009; Hair et al., 2014). This can be demonstrated through their unidimensionality, which is traditionally assessed using the average variance extracted (AVE). An AVE value of 0.7 or higher is ideal but at least 0.5 is considered sufficient to establish convergent validity. However, Fornell & Larcker (1981) argue that 0.4 is adequate when the composite reliability is more than 0.6. In the case of CFA AZM Model, three of the four latent variables have AVE values of approximately 0.6 or higher. The SOP latent variable is above the ‘adequate’ AVE value of 0.4 and also has a CR greater than 0.6 (refer Table 5.9). Hence, there is evidence that CFA AZM Model variables are unidimensional and demonstrate convergent validity.

5.6.1.6.2 Discriminant Validity
Discriminant validity refers to a latent variable being truly distinct in that its measurement variables correlate highly only with each other. A popular method for assessing discriminant validity is the Fornell-Larcker criterion (Henseler et al., 2009), which compares the AVE of a latent variable to the squared correlation between that latent variable and another; discriminant validity exists when the AVE is greater than the squared correlations. The results of the discriminant validity checks are presented in Table 5.10. An examination of the results reveals two validity concerns (as highlighted in Table 5.10), which indicate that the square root of the AVE for ISS and the AVE for AE2 are less than the absolute value of the correlations with another factor.

Latent Variable AVE Squared Correlation ISS AE2 SOP AE1
ISS 0.590 0.768
AE2 0.673 0.642 0.820
SOP 0.449 0.459 0.370 0.670
AE1 0.686 0.832 0.753 0.434 0.828
Table 5.10: The CFA AZM Model Fornell-Larcker criteria test for Discriminant Validity

Although the discriminant validity test for CFA AZM Model provided mixed results, it is important to note that discriminant validity is just one of several ways by which to empirically validate a model, as it is the theoretical foundation that provides the overriding reasons for constructs (latent variables) correlating or not. It is the latent variables that facilitate the testing of something created from the empirical data (Henseler, Ringle & Sarstedt, 2015).
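As an illustration of how these reliability and validity figures can be produced, the sketch below derives CR, AVE and the Fornell-Larcker comparison from the standardised loadings of the fitted CFA. It assumes the fit_azm object from the earlier sketch and the semTools package; it is not the exact script used in this study.

# Composite reliability and AVE per latent variable; semTools reports these
# alongside Cronbach's alpha for a fitted lavaan model.
library(semTools)
reliability(fit_azm)                  # rows include alpha, omega (CR) and avevar (AVE)

# Manual CR and AVE for one latent variable from the standardised loadings
lambda <- lavInspect(fit_azm, "std")$lambda
l_iss  <- lambda[lambda[, "ISS"] != 0, "ISS"]
cr_iss  <- sum(l_iss)^2 / (sum(l_iss)^2 + sum(1 - l_iss^2))  # composite reliability
ave_iss <- mean(l_iss^2)                                     # average variance extracted

# Fornell-Larcker check: square root of the AVE versus the latent correlations
sqrt(ave_iss)
lavInspect(fit_azm, "cor.lv")         # latent variable correlation matrix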
5.6.1.6.3 Nomological Validity Nomological validity refers to the degree that latent variables are expected to relate to one another. It is tested simply by examining whether the correlations between the latent variables Explanatory Survey 181 of the measurement model are linked in a way that is theoretically predicted. Referring to Table 5.11, the correlations between the CFA AZM Model latent variables ranged from adequate (0.370) to extremely strong (0.832) with 50% of them strong and above 0.6 (Fackrell et al., 2016). In terms of the latent variable ISS, it strongly correlates with AE1 but less so with AE2. Both latent variables AE1 and AE2 characterise the Adaptive theme, they are highly correlated with each other, and have a direct hypothesised relationship with the Interwoven Structure theme. The latent variable SOP is nominally representative of the four hypothesised themes that directly link to the Interwoven Structure concept and is also indirectly linked to the Adaptive theme. This is reflected by the correlation between SOP and ISS, which is higher than that between SOP and AE1 and AE2 (Table 5.11). The correlations suggest that the latent variables behave as expected by reflecting the conceptual hypothesised structure of the measurement model (refer Figure 5.12). Latent Variable AZM Model Correlation AE2 SOP AE1 ISS 0.642 0.459 0.832 AE2 0.370 0.753 SOP 0.434 Table 5.11: CFA AZM Model Latent Variable Correlations The next section outlines the CFA procedures that were performed to test the hypothesis measurement BFM Model. The aforementioned procedures that were performed to test CFA AZM Model were replicated for BFM Model using the 4 latent variables and 25 measurement variables identified through the BFM Model EFA. 5.6.1.7 CFA Behavioural Flows Measurement Model The CFA BFM Model, shown in Figure 5.13 was a reflective measurement model made up of four reflective latent variables, and measured variables, which emerged from the EFA, AE2B, AE3, SILR, and DADL. Each of the latent variables had a minimum of 3 and a maximum of 12 measured variables. The developed CFA measurement model was named ‘CFA BFM Model’. Explanatory Survey 182 Figure 5.13: The a priori CFA BFM Model and Hypothesised Relationships 5.6.1.8 Compute and Examine Model Fit Indices The CFA parameters and fit indices were the same as those employed for the CFA AZM Model as outlined in sections 5.6.1.2 and 5.6.1.3. Examining the output of the CFA, the fit indices for the measurement model shown in Table 5.12, indicate that the measurement model Explanatory Survey 183 fits the data. Both the CFI and TLI meet the recommended cut-off of >0.9. While the RMSEA is close enough to the recommended value and the SRMR is well within the 0.08 cut-off. Other useful parameter estimates, such as the standard errors and standardised loading estimates for the measured variables on the latent constructs, can be seen in Appendix E2. 5.6.1.9 Report Model Fit Indices & Parameter Estimates Since the CFA demonstrated adequate measurement properties of the operationalised model it was concluded that the a priori hypothesised relationships between the underlying latent variables and variables did exist. Also, the parameter estimates indicate that the measurement variables are representative of each of the latent variables. For instance, the standard estimates of the measurement variables’ loadings on to the latent variables range from a minimum of 0.680 to a maximum of 0.880. 
Hence, the operationalisation of the conceptual definitions of the latent variables demonstrates that the measurement scales are plausible indicators of the hypothesised Behavioural Flows constructs (refer Figure 5.6). The final CFA process step was to assess composite reliability and construct validity of the two models.

CFA Indices BFM Model ML
User model versus baseline model:
Comparative Fit Index (CFI) >0.90 0.917
Tucker-Lewis Index (TLI) >0.90 0.908
Root Mean Square Error of Approximation: RMSEA <0.06 0.067
90 Percent Confidence Interval 0.063 – 0.071
P-value RMSEA <= 0.05 0.000
Standardized Root Mean Square Residual: SRMR <0.08 0.049
Table 5.12: CFA BFM Model Evaluation Indices

5.6.1.9.1 Determine Model validity
The reflective measurement BFM Model was evaluated using measures of internal consistency reliability, convergent validity, discriminant validity and nomological validity (Henseler et al., 2009; Hair et al., 2014). The outcomes for each of the reliability and validity assessments will be outlined in the following discussion.

5.6.1.9.2 Internal Consistency Reliability
The Cronbach’s alpha coefficient was evaluated for each of the latent variables together with the composite reliability (CR) measure. The Cronbach’s alpha coefficient and CR for each of the CFA BFM Model latent variables are shown in Table 5.13. Based on these measures, it appears that the 4 latent variables have ‘excellent’ internal consistency reliability (Hair et al., 2014).

Latent Variable Cronbach’s alpha >0.7 CR >0.7 AVE >0.4 - 0.5 No. of Variables
AE2B 0.855 0.855 0.664 3
AE3 0.965 0.906 0.619 12
SILR 0.913 0.914 0.727 4
DADL 0.909 0.965 0.697 6
Table 5.13: Measures of Internal Consistency Reliability and Convergent Validity

5.6.1.9.3 Convergent Validity
Convergent validity is traditionally assessed using the average variance extracted (AVE). An AVE value of 0.7 or higher is ideal but at least 0.5 is considered sufficient to establish convergent validity (Henseler et al., 2009; Hair et al., 2014). For the CFA BFM Model, the minimum AVE value of the latent variables is 0.62 and the largest is 0.73 (refer Table 5.13). These values indicate that the variables are unidimensional and that convergent validity exists.

5.6.1.9.4 Discriminant Validity
The Fornell-Larcker criterion, a common method for assessing discriminant validity, compares the AVE of a latent variable to the squared correlation between that latent variable and another; discriminant validity exists when the AVE is greater than the squared correlations. The results of the discriminant validity check for the CFA BFM Model are presented in Table 5.14. It is evident that there are no discriminant validity concerns (Henseler et al., 2009).

Latent Variable AVE Squared Correlation AE2B AE3 SILR DADL
AE2B 0.664 0.815
AE3 0.619 0.748 0.835
SILR 0.727 0.604 0.751 0.853
DADL 0.697 0.581 0.771 0.575 0.787
Table 5.14: Results of the CFA BFM Model Fornell-Larcker criteria test for Discriminant Validity

5.6.1.9.5 Nomological Validity
Nomological validity is tested simply by examining whether the correlations between the latent variables of the measurement model are linked in a theoretically predicted way. Referring to Table 5.15, the correlations between the CFA BFM Model latent variables range from moderately strong (0.575) to very strong (0.751). In fact, the majority are strong, as 67% are above 0.6 (Fackrell et al., 2016).
The variables were slightly modified although they behaved as predicted by reflecting the hypothesised structure of the measurement model (refer Figure 5.13). The modification was that the hypothesised interwoven construct was subsumed by the latent variable AE3, creating a direct causal link between the three Behavioural Flows and the Adaptive Enterprise constructs.

Latent Variable BFM Model Correlation AE3 SILR DADL
AE2B 0.748 0.604 0.581
AE3 0.751 0.711
SILR 0.575
Table 5.15: CFA BFM Model Latent Variable Correlations

The latent variables AE2B and AE3 are highly correlated, which is not unexpected as they both characterise the hypothesised construct Adaptive Enterprise. The latent variables SILR and DADL are strongly correlated with both AE3 and AE2B. These two latent variables, SILR and DADL, are equally representative of the three hypothesised Behavioural Flows constructs and, notably, directly link to AE2B and AE3. Hence, it can be argued that the correlations of the latent variables ‘make sense’ from a theoretical perspective since they reflect the linkages between the themes of the hypothesised measurement model, albeit slightly modified (Hair et al., 2014).

Summary of the CFA Results
In preparation for the SEM a CFA was conducted to test how well the measured variables, determined by the EFA, represented the constructs. Two measurement models were assessed; the analysis was performed using the R statistics software, with the lavaan package used to compute the relevant fit indices. For the evaluation of CFA AZM Model and CFA BFM Model, 4 latent variables were specified for each, as well as the assignment of the measurement variables that loaded onto each of the latent variables (Hair et al., 2014). The fit indices for CFA AZM Model indicated that the model fitted the data reasonably well and provided insight for the refinement of the latent variables SOP and AE2. In the case of CFA BFM Model, the fit indices demonstrated that the measurement model fitted the data and that the variables represented the latent variables. To complete the CFA, the composite reliability and construct validity were checked for both models. Each of the measurement models displayed reliability and validity. The next section discusses the SEM that was performed to test the hypotheses proposed earlier in this chapter (refer section 5.1.2) to gain insight about the structures and behaviours of an AE.

Hypothesis Testing Using Structural Equation Modelling
After the appropriate factor analysis validity checks had been completed, the nine proposed hypotheses (refer Table 5.1) could be tested. The analysis used SEM to test the hypothesised relationships between the variables (dependent and independent) simultaneously. However, to perform a SEM, Tabachnick and Fidell (2007) explain that the following assumptions need to be met:
• Multivariate normal distribution
• Correct specification of the model
• Linear relationships between endogenous (latent) and exogenous variables
• Independence of exogenous variables
• Sufficiently large sample size
• Outlier-free data
• No missing data
Each of these assumptions was addressed as part of the overall Factor Analysis (the EFA and CFA) and discussed in sections 5.4.1 - 5.4.5.
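Several of these assumptions (no missing data, outlier screening, sufficient sample size) lend themselves to quick data checks. A small sketch is shown below, again using the hypothetical survey_data frame and assuming the questionnaire items are numerically coded.

# Simple screening checks corresponding to the assumptions listed above.
sum(is.na(survey_data))          # should be 0: no missing data
nrow(survey_data)                # sample size (504 observations in this study)

# Mahalanobis distance as a multivariate outlier screen
d2 <- mahalanobis(survey_data, colMeans(survey_data), cov(survey_data))
sum(d2 > qchisq(0.999, df = ncol(survey_data)))   # count of potential outliers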
Kline (2015) and Schumacker and Lomax (2004) suggest that the most difficult part of SEM is justifying the model specification, as there must be valid reasons for both the inclusion of specific variables and the specification of the model paths. These reasons must be informed by theory and based on a sound understanding of the criterion variables and the issues surrounding their selection. The SEM models assessed in this study meet these criteria as they are grounded in theory and the findings of the Delphi, and are an extension of the models constructed and refined through the EFA and CFA.

SEM
The SEM was performed using the lavaan package in R. The data set was the same as that used for the EFA and CFA. That is, the sample size was large at 504 observations and there were no missing data or outliers. Although the data exhibited a slight non-normal distribution, well-cited authors concur that this is not a concern, especially with large samples of 500 or more (Lei & Lomax, 2005; Hair et al., 2014; Kline, 2015). However, to account for this non-normality, and to ensure robust standard errors, parameter estimates, and goodness of fit, the models were estimated using maximum likelihood adjusted with the Satorra-Bentler scaled chi-square (Hu & Bentler, 1999; Satorra & Bentler, 2001). The significance threshold was set at p < .005 and the parameters were free to be estimated from the data. A multi-step process for conducting a SEM as advocated by Furr (2011) was adapted for this study and is presented in Figure 5.14.

Figure 5.14: The SEM Process adapted from Furr, 2011

Based on the findings from the EFA and CFA, the two tested measurement models, CFA AZM Model and CFA BFM Model, were re-specified as two SEM models, each specifying 4 latent variables. The models were named SEM AZM Model (variables ISS, AE2, SOP and AE1) and SEM BFM Model (variables AE2B, AE3, SILR and DADL). For the SEM analysis, a series of three primary fittings (model assessed using the full dataset) and twelve stratified fittings (primary model assessed using different management-level data) was performed, with a data or specification change for each fit. SEM AZM Model was re-specified and fitted ten times (2 primary fittings and 8 stratified data fittings) and SEM BFM Model fitted five times (1 primary fitting and 4 stratified data fittings). The three primary fittings for both SEM models will be described in the next section, followed by a brief explanation of the stratified data fittings. The results of the primary fittings, including the fit indexes and regression standard estimates, can be reviewed in Table 5.16 and Table 5.17.

5.7.1.1 SEM Adaptive Zone Measurement Model
Using the full data set the first specified model was fitted and the results evaluated. The SOP variable was nominated as the independent variable while ISS, AE1 and AE2 were dependent variables. The path structure was specified such that the independent variable SOP has a causal effect on the dependent variable ISS which, in turn, has a causal effect on the dependent variables AE1 and AE2. The SEM AZM Model (fit no.1), including the specified path structure and standard estimates, is depicted in Figure 5.15. Evaluation of the results showed that the fit indexes were very good. The standard estimates between the variables were all significant at p<.005 and ranged from moderately strong (0.47) to very strong (0.85), indicating that the relationships had significant effects.
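The primary fitting just described can be sketched in lavaan as follows, reusing the placeholder measurement syntax (azm_measurement) from the CFA sketch; object and item names remain illustrative rather than those actually used.

# SEM AZM Model, fit no. 1: SOP -> ISS -> AE1, AE2, estimated with the
# Satorra-Bentler adjusted maximum likelihood ('MLM') described above.
sem_azm <- paste(azm_measurement, '
  ISS ~ SOP     # SOP has a causal effect on ISS
  AE1 ~ ISS     # ISS, in turn, affects AE1 ...
  AE2 ~ ISS     # ... and AE2
')
fit_sem1 <- sem(sem_azm, data = survey_data, estimator = "MLM")
standardizedSolution(fit_sem1)    # standardised regression estimates
fitMeasures(fit_sem1, c("cfi.robust", "tli.robust", "rmsea.robust", "srmr"))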
Taken together, these results suggest that the SEM is plausible and the specification is theoretically justifiable. The results of SEM AZM Model fittings 1 and 2 are presented in Table 5.16.

Figure 5.15: The SEM AZM Model Path Structure and Parameter Estimates

5.7.1.2 SEM Adaptive Zone Measurement Model Mediation Effects
The second fitting (fit no. 2) for SEM AZM Model involved re-specifying the model to assess the mediation effect of the mediator variable ISS, specifying that the effect of SOP was mediated by ISS. It was presumed that ISS, the mediator variable, transmitted some of the causal effects of the preceding SOP onto AE2 and AE1. The SEM AZM Model mediated fitting (fit no.2), including the specified path structure and standard estimates, is depicted in Figure 5.16.

SEM AZM Model Fitting No. CFI >0.9 TLI >0.9 RMSEA <0.06 SRMR <0.08 Regression Standardised Estimate
1 0.97 0.96 0.04 0.05 ISS ~ SOP 0.47 AE2 ~ ISS 0.65 AE1 ~ ISS 0.84
Mediated 2 0.97 0.96 0.04 0.05 ISS ~ SOP 0.46 AE2 ~ ISS 0.60 AE1 ~ ISS 0.80 AE2 ~ SOP 0.10 AE1 ~ SOP 0.07
Direct Effect AE2 AE2 ~ SOP 0.10
Direct Effect AE1 AE1 ~ SOP 0.07
Indirect Effect AE2 AE2 ~ SOP 0.28
Indirect Effect AE1 AE1 ~ SOP 0.37
Total Effect AE2 AE2 ~ SOP 0.37
Total Effect AE1 AE1 ~ SOP 0.43
Table 5.16: SEM AZM Model Fittings Path Structure and Parameter Estimates

Again, the model showed good model fit according to the SEM fit indexes. These index results were similar to those of the previous, unmediated SEM (fit no. 1). The standard estimates for the indirect and direct effects were statistically significant (refer Table 5.16), demonstrating that it was a partially mediated model. That is, SOP appears to have both a direct and an indirect causal effect on AE2 and AE1; potentially, ISS mediates the path between SOP and both AE2 and AE1. Also, the standard estimates between the variables showed little change compared to the unmediated model, indicating that the relationships between the variables had significant effects (0.46 – 0.80). This suggests meaningful partial mediation in the model. In addition, the strength of the mediation effect is important and should be reported. There are several indexes that can be used; one commonly used value is the estimate of the indirect effect as a proportion of the total effect (Preacher & Kenny, 2011). In this case the indirect effect of SOP on AE2, using ISS as the mediator, accounts for 74% (26% direct effect) of the total effect. The indirect effect of SOP on AE1, using ISS as the mediator, accounts for 85% (15% direct effect) of the total effect. This suggests that the mediation effect is strong.

Figure 5.16: The SEM AZM Model Partially Mediated Path Structure and Parameter Estimates

SEM Behavioural Flows Measurement Model Specification
Using the full dataset SEM BFM Model was specified, fitted, and the results assessed. The SILR and DADL variables were nominated as the independent variables and AE3 and AE2B were the dependent variables. The path structure was specified such that both SILR and DADL have a causal effect on AE2B and AE3, as illustrated in Figure 5.17.

Figure 5.17: The SEM BFM Model Path Structure and Parameter Estimates

Evaluation of the results showed that the fit indexes were good even though the RMSEA was slightly above the preferred cutoff of <0.06. However, Hair et al. (2014) advocate that an RMSEA value of up to 0.10 is acceptable for most models.
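The mediated re-specification can also be written with labelled paths and defined parameters so that the direct, indirect and total effects reported in Table 5.16 are estimated explicitly; the sketch below again builds on the hypothetical azm_measurement syntax introduced earlier.

# SEM AZM Model, fit no. 2: partial mediation of SOP through ISS.
sem_azm_med <- paste(azm_measurement, '
  ISS ~ a  * SOP
  AE1 ~ b1 * ISS + c1 * SOP     # c1 = direct effect of SOP on AE1
  AE2 ~ b2 * ISS + c2 * SOP     # c2 = direct effect of SOP on AE2

  indirect_AE1 := a * b1        # indirect effect via the mediator ISS
  indirect_AE2 := a * b2
  total_AE1    := c1 + a * b1   # total effect of SOP on AE1
  total_AE2    := c2 + a * b2
')
fit_sem2 <- sem(sem_azm_med, data = survey_data, estimator = "MLM")
parameterEstimates(fit_sem2, standardized = TRUE)

The SEM BFM Model was specified in the same way, with SILR and DADL as exogenous latent variables regressing onto AE3 and AE2B (refer Figure 5.17).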
In this case, the standard estimates between the variables were all significant at p <.005 and ranged from reasonable to moderately strong (0.36 – 0.51), which indicates that the relationship between the variables has significant effects. These results suggest that the SEM is credible and the specification makes theoretical Explanatory Survey 193 sense. The SEM BFM Model fitting results, including the specified path structure and standard estimates, are depicted in Table 5.17. SEM BFM Model Fitting No. CFI >0.9 TLI >0.9 RMSEA <0.06 SRMR <0.08 Regression Standardised Estimate 1 0.92 0.91 0.07 0.05 AE3 ~ SILR 0.47 AE3 ~ DADL 0.51 AE2B ~ SILR 0.40 AE2B ~ DADL 0.35 Table 5.17: SEM BFM Model Fittings Path Structure and Parameter Estimates SEM Adaptive Zone Measurement Model and Behavioural Flows Measurement Model Stratified Data As mentioned in section 5.7.1, apart from the 3 primary fittings for SEM AZM Model (refer Figure 5.15) and SEM BFM Model (refer Figure 5.16), a series of stratified fittings was carried out using four different datasets: High-level managers (188 observations), Mid-level managers (141 observations), Non-managers (175 observations) and combined High and Mid-level managers (329 observations), from which data had been collected. SEM AZM Model and the partially mediated SEM AZM Model were fitted to each of the four datasets – a total of 8 fittings, and then a total of 4 stratified fittings were performed for the unmediated SEM BFM Model. It is important to note that this series of stratified fittings was not performed as, or is claimed to be, a ‘group/multi-group SEM’ where strict rules apply (Byrne, 2013; Hair et al., 2014). The purpose of the SEM assessments, using the various management group data, was purely an exploratory exercise to discover potential insights into the research subject of the structures and behaviours of an AE. Similarly, as with the primary SEM fittings, the 12 stratified data SEM models were evaluated using the fit indexes (CFI, TLI, RMSEA and SRMR) and cutoff values as discussed in section 5.6.1.3. Inferences were made based on the model fit and regression coefficients of the variables, as well as comparing them for each of the organisational level groups. The series of stratified fit results are presented in Table 5.18 and Table 5.19. Explanatory Survey 194 Table 5.18: SEM AZM Model Stratified Fittings Path Structures and Model Fit Indexes Parameter Estimates Examining the model fit indexes indicates that all the stratified fit models, except for SEM BFM Model, fittings 3 and 4 (as highlighted in Table 5.19), demonstrated good fit values. Notably, SEM AZM Model fitting 5 had a slightly raised SRMR (0.09 as highlighted in Table 5.18) but this value was satisfactory as SRMR values of <0.09 are acceptable when the sample size is small (<250) and the CFI is above 0.92 (Hair, et al., 2014). For the poorer fitting models (SEM BFM Model, fittings 3 and 4), the fit indexes suggest that these two measurement models are less valid than the other stratified models. The standard estimates were all significant p<.005. For SEM AZM Model fittings 3 - 6, most of the standard estimates exhibit some change compared to the full dataset fitting 1 (Table 5.16) with the estimates for the High Mgmt. group decreasing the most. Likewise, the mediated fittings 7 - 10 show a slight change in the standard estimates compared to fitting 1, except for those of the High Mgmt. group. 
For this group the standard estimates for AE2~SOP and AE1~SOP show a notable change from a positive to a negative regression estimate. This change is also reflected in the High/Mid-Mgmt. (as highlighted in Table 5.19). Comparing the SEM BFM Model standard estimates for fittings 2 - 5 with those of fitting 1 (full dataset, refer Table 5.17), there are two changes to note; the first is that the standard estimates for High Mgmt. show a moderate change in the values for all four variables; the second is that the estimates for the independent variable AE2B~SILR change by a reasonable amount across each of the four management groups (refer Table 5.20). Fit No SEM AZM Model CFI >0.9 TLI >0.9 RMSEA <0.06 SRMR <0.08 Regression - Standardised Estimates ISS ~ SOP AE2 ~ ISS AE1 ~ ISS 3 High Mgmt (188) 0.93 0.92 0.05 0.07 0.43 0.46 0.76 4 Mid-Mgmt (144) 0.93 0.92 0.06 0.07 0.40 0.59 0.86 5 Non-Mgmt (175) 0.94 0.93 0.06 0.09 0.41 0.68 0.84 6 High/Mid-Mgmt (329) 0.96 0.96 0.04 0.05 0.41 0.55 0.82 Explanatory Survey 195 Table 5.19: SEM AZM Model Stratified Fit Fittings Path Structures and Mediation Effect Model Indexes Fit No SEM AZM Model Mediation Effect CFI >0.9 TLI >0.9 RMSEA <0.06 SRMR <0.08 Regression - Standardised Estimates ISS~ SOP AE2~ ISS AE1~ ISS AE1~ SOP AE2~ SOP 7 High Mgmt (188) 0.93 0.92 0.05 0.06 0.45 0.51 0.82 - 0.11 - 0.13 Direct Effect AE2, AE1 - 0.11 - 0.13 Indirect Effect AE2,AE1 0.23 0.37 Total Effect AE2, AE1 0.13 0.24 8 Mid-Mgmt (144) 0.93 0.92 0.06 0.07 0.38 0.56 0.80 0.08 0.11 Direct Effect AE2, AE1 0.08 0.11 Indirect Effect AE2,AE1 0.21 0.31 Total Effect AE2, AE1 0.30 0.42 9 Non-Mgmt (175) 0.94 0.94 0.05 0.08 0.39 0.60 0.78 0.17 0.13 Direct Effect AE2, AE1 0.17 0.13 Indirect Effect AE2,AE1 0.24 0.30 Total Effect AE2, AE1 0.41 0.43 10 High/Mid-Mgmt (329) 0.96 0.96 0.04 0.05 0.42 0.57 0.84 - 0.50 - 0.05 Direct Effect AE2, AE1 - 0.50 - 0.05 Indirect Effect AE2,AE1 0.24 0.36 Total Effect AE2, AE1 0.19 0.31 Explanatory Survey 196 Table 5.20: SEM BFM Model Stratified Fittings Path Structures and Model Fit Indexes Summary of the SEM Results In summary, as mentioned at the beginning of this section, the multiple SEM fittings (refer 5.8) were performed for purely exploratory reasons (Asparouhov & Muthen, 2009). The SEM parameters were uncontrolled in that no metric or scale invariances were applied. Therefore, the results are ‘of interest’ and ‘worthy of consideration’ for discovering possible insights into AE. For example, the results exhibit the most change in the estimates for the High Mgmt. group which suggests that some preliminary inferences could be drawn. The following two sections discuss the use of SEM to test the ten hypotheses proposed in section 5.1.2. Hypotheses Testing Procedure SEM was used to analyse the hypothesised relationships simultaneously. The series of fittings, the specification changes, and the results are presented in Table 5.23. Hair, (2014) argues that only achieving a well-fitting model is not sufficient to support a proposed theorised structure as the individual parameter estimates, for each of the predicted paths, also need to be evaluated as they represent a specific hypothesis. This evaluation can be done using the standardised loading estimates, which should be statistically significant and in the predicted direction. According to the overall fit measures for SEM AZM Model (refer Table 5.16) and SEM BFM Model (refer Table 5.17) all the dependent variables indicated an acceptable fit to the data. 
Hence, in addition to overall fit, the standardised loading estimates were assessed to test each hypothesis. The set of hypotheses (H1 to H5) to test the Adaptive Zone model, followed by Fit No SEM BFM Model CFI >0.9 TLI >0.9 RMSEA <0.06 SRMR <0.08 Regression - Standardised Estimates AE3~ SILR AE2B~ DADL AE2B~ SILR AE2B~ SILR 2 High Mgmt (188) 0.93 0.92 0.06 0.05 0.75 0.20 0.58 0.04 3 Mid-Mgmt (144) 0.83 0.81 0.10 0.08 0.50 0.50 0.33 0.34 4 Non-Mgmt (175) 0.87 0.86 0.10 0.06 0.33 0.62 0.31 0.34 5 High/Mid-Mgmt (329) 0.92 0.91 0.06 0.05 0.59 0.38 0.46 0.21 Explanatory Survey 197 the set of hypotheses (H6 to H9) to test the Behavioural Flows model, were examined (refer Table 5.21). No. Hypotheses Set 1: Adaptive Zone Hypotheses Model H1 A balanced strategy enables an enterprise to be adaptive. H2 A flexible organisation enables an enterprise to be adaptive. H3 Loosely coupled processes enable an enterprise to be adaptive. H4 Dynamic information enables an enterprise to be adaptive. H5 An interwoven structure enables an enterprise to be adaptive. No. Hypotheses Set 2: Behavioural Flows Hypotheses Model H6 Blended information flows enable an enterprise to be adaptive. H7 Blended learning flows enable an enterprise to be adaptive. H8 Blended control flows enable an enterprise to be adaptive. H9 Interwoven behaviour enables an enterprise to be adaptive. Table 5.21: The Two Sets of AE Hypotheses Tested in the Survey Hypotheses Tests: Adaptive Zone Hypotheses Model H1 to H5 The first set of hypotheses H1 to H5 were tested using four constructs that resulted from the EFA and included SOP, ISS, AE1 and AE2. Hypotheses H1 to H4 sought to understand how each of the key components of an AE enables an enterprise to be adaptive. This was explored by hypothesising that an adaptive zone exists for each of the key components of an AE, and is based on the premise that as an enterprise becomes more adaptive as it is propelled toward, and resides in, an adaptive zone. H5 hypothesised that an interwoven structure of these key components (SOPI) also enables an enterprise to be adaptive. This set hypotheses were empirically tested using Factor Analysis. Six constructs were initially proposed for the factor analysis of the Adaptive Zone Hypothesis Model, and which included the EFA, CFA and SEM. However, the EFA (refer section 5.5.2) indicated that only four factors fitted the data and that 24 of the 27 variables were significant (>0.05). One factor emerged that was defined by variables of the three hypothesised constructs ‘Balanced Strategy’, ‘Flexible Organisation’ and Loosely Coupled Process’. This identified EFA factor was duly named ‘SOP’ and represented a factor defined by 7 of the proposed measurement variables. An examination of these 7 variables revealed that 5 were from the Explanatory Survey 198 scale for ‘Balanced Strategy’, and the remaining 2 were variables for ‘Flexible Organisation’ and ‘Loosely Coupled Process’. The EFA also indicated a second factor that was defined by the measurement scales for the two hypothesised constructs ‘Dynamic Information’ and ‘Interwoven Structure’. This second EFA factor was named ‘ISS’ and represented a factor defined by 6 of the proposed measurement variables. A closer examination of the 6 variables identified that 3 were from the proposed ‘Interwoven Structure’ scale, 2 were from the proposed ‘Dynamic Information’ scale and 1 was a proposed ‘Flexible Organisation’ measurement variable. 
In addition, the third and fourth factors identified by the EFA were mainly measured by the proposed construct Adaptive Enterprise. An examination of the scales that loaded onto each of the factors indicated that one factor was characterised by variables that measured intrinsic adaptive behaviours (AE1), whereas the other was characterised by variables that measured extrinsic adaptive behaviours (AE2). Hence, these factors were named AE1 and AE2 respectively. This meant that the factor AE1 scale comprised 6 variables, all but 1 of which were from the proposed Adaptive Enterprise construct; the remaining variable was from the proposed ‘Interwoven Structure’ construct. Likewise, the factor AE2 scale comprised 5 variables, 3 of which were from the proposed Adaptive Enterprise construct. The remaining two variables were initially proposed for the ‘Loosely Coupled Process’ and ‘Dynamic Information’ constructs. Lastly, the 4 EFA-identified factors were then tested using a CFA followed by a SEM.

As a result, for the purposes of the hypotheses testing and given the findings of the EFA, some of the first set of proposed hypotheses (refer Table 5.22) were amalgamated to create two overarching hypotheses. This meant that hypotheses H1, H2, and H3 were combined to become H1A, and hypotheses H4 and H5 were combined to become H5A. In order to test the two overarching hypotheses H1A and H5A, the SEM latent variable regression standard estimates and their significance were used, as shown in Table 5.23.

Hypothesis 1A (H1, H2, H3): Interpreting the SEM results depicted in Table 5.23, the partially mediated AZM Model (refer Figure 5.16) showed that a combination of a balanced strategy, flexible organisation, and loosely coupled process (SOP) has a direct positive effect on an enterprise’s intrinsic adaptive behaviour (AE1) (β = 0.07, p < 0.046) and extrinsic adaptive behaviour (AE2) (β = 0.10, p < 0.032).

Overarching Hypotheses: Adaptive Zone Hypothesis Model Hypotheses
H1A: A balanced strategy, flexible organisation, and loosely coupled processes enable an enterprise to be adaptive.
H1: A balanced strategy enables an enterprise to be adaptive.
H2: A flexible organisation enables an enterprise to be adaptive.
H3: Loosely coupled processes enable an enterprise to be adaptive.
H5A: An interwoven structure supported by dynamic information enables an enterprise to be adaptive.
H4: Dynamic information enables an enterprise to be adaptive.
H5: An interwoven structure enables an enterprise to be adaptive.
Table 5.22: First Set of Hypotheses Tested

Path and Direction Hypotheses P <0.001 Regression Standardised Estimates
SOP → AE1 (Direct) H1A (H1, H2, H3) 0.046 0.07
SOP → AE2 (Direct) H1A (H1, H2, H3) 0.032 0.10
SOP → ISS → AE1 (Indirect) H1A (H1, H2, H3) 0.000 0.37
SOP → ISS → AE2 (Indirect) H1A (H1, H2, H3) 0.000 0.26
SOP → ISS H1A, H5A 0.000 0.46
ISS → AE1 H5A (H4, H5) 0.000 0.80
ISS → AE2 H5A (H4, H5) 0.000 0.60
Note: β = standard estimate.
S=Balanced Strategy, O=Flexible Organisation, P=Loosely Coupled Process, IS=Dynamic Information, S=Interwoven Structure, AE1=intrinsic adaptive behaviours, AE2=extrinsic adaptive behaviours
Table 5.23: Latent Variables Standard Estimates and their Significance Used to Test the First Set of Hypotheses

In addition, SOP has a moderately strong, indirect positive effect on both intrinsic adaptive behaviour (AE1) (β = 0.37, p < 0.001) and extrinsic adaptive behaviour (AE2) (β = 0.26, p < 0.001) when interwoven and supported by dynamic information (ISS). Therefore, H1A is supported.

Hypothesis 5A (H4, H5): There is an extremely strong, positive effect on intrinsic adaptive behaviours (AE1) (β = 0.80, p < 0.001) when an interwoven structure is supported by dynamic information, and similarly, an interwoven structure supported by dynamic information (ISS) has a strong positive effect on extrinsic adaptive behaviours (AE2) (β = 0.60, p < 0.001) (refer Table 5.23). These results signify that H5A is supported.

Hypotheses Tests: Behavioural Flows Hypotheses Model H6 to H9
The second set of hypotheses H6 to H9 (refer Table 5.24) were tested using the four constructs SILR, DADL, AE2B and AE3 that resulted from the EFA. Hypotheses H6 to H8 sought to understand how each of the three behavioural flows of an AE (information, learning, control) enables an enterprise to be adaptive. This was explored by hypothesising that each AE behavioural flow has an optimal blend of deliberate and emergent flows at each of the organisational levels, based on the premise that the blended behavioural flows enable an enterprise to be adaptive. H9 hypothesised that interweaving the three behavioural flows into interwoven behaviour also enables an enterprise to be adaptive. The hypotheses H6, H7, H8 and H9 were empirically tested using Factor Analysis.

Five constructs were initially proposed for the factor analysis of the Behavioural Flows Hypothesis Model, which included the EFA, CFA and SEM. The EFA (refer section 5.5.2.4) indicated, however, that only 4 factors fitted the data and that 25 of the 28 variables had significant loadings (>0.4). One factor emerged that was defined by variables from the three hypothesised Behavioural Flow constructs. This EFA factor was named ‘SILR’ and represented a construct measured by 4 of the variables. An examination of these 4 variables revealed that 1 variable was from the ‘Blended Information Flows’ scale, 2 were from the ‘Blended Learning Flows’ scale, and 1 was from the ‘Blended Control Flows’ scale. The EFA identified a second factor that was also defined by variables from the three hypothesised behavioural flow constructs. This second EFA factor was named ‘DADL’ and represented a factor defined by 6 of the proposed measurement variables. Closer examination of the 6 variables determined that 2 were from each of the measurement scales for the three proposed constructs ‘Blended Information Flows’, ‘Blended Learning Flows’ and ‘Blended Control Flows’. Additionally, the third and fourth factors identified by the EFA were mainly measured by the proposed Adaptive Enterprise construct. An examination of the scales that loaded onto each factor indicated that one factor was characterised by variables that measured extrinsic adaptive behaviours (AE2B) and the other by variables that measured intrinsic, interwoven, adaptive behaviours (AE3). Hence, these two factors were named AE2B and AE3 respectively.
The factor AE2B was similar to factor AE2 and, as a consequence, was defined by 3 of the measurement variables from the proposed Adaptive Enterprise construct. Factor AE3 was defined by 12 variables, 3 of which were from the ‘Interwoven Behaviour’ construct scale, 4 from the three Behavioural Flow constructs’ scales, and 5 from the Adaptive Enterprise construct scale. After the results of the EFA, CFA, and then SEM used for testing the hypotheses, it was decided to amalgamate three of the second set (refer Table 5.1) to create one overarching hypothesis, so H6, H7, and H8 became H6A (refer Table 5.24). Hypothesis H9 remained unchanged from the initial hypothesis. Table 5.25 shows the SEM latent variable standard estimates, and their significance, used to test hypotheses H6A and H9. Overarching Hypothesis: Behavioural Flows Hypotheses Model Hypotheses H6A: Blended information, Learning and Control flows enable an enterprise to be adaptive. H6: Blended information flows enable an enterprise to be adaptive. H7: Blended learning flows enable an enterprise to be adaptive. H8: Blended control flows enable an enterprise to be adaptive. H9: Interwoven behaviour enables an enterprise to be adaptive. Table 5.24: Second Set of Hypotheses Tested As with the tests for the first set of hypotheses, the second set of hypotheses H6A and H9 were tested using the SEM latent variable regression standard estimates and their significance, together with the covariance standard estimate, all of which are shown in Table 5.25. Hypothesis 6A (H6, H7, H8): Interpreting the SEM results, depicted in Table 5.25 the SEM BFM Model (refer Figure 5.17) determined that a combination of blended information, Explanatory Survey 202 learning, and control flows (SILR and DADL) have moderately strong positive effects on an enterprise’s extrinsic adaptive behaviour (AE2B) (β = 0.40, p < 0.000 and β = 0.35, p < 0.000 as well as a strong positive effect on an enterprise’s intrinsic, interwoven adaptive behaviour (AE3) (β = 0.46, p < 0.000 and β = 0.51, p < 0.000). Therefore, H6A is supported. Path and Direction Hypotheses P <0.001 Regression Standardised Estimates SILR → AE2B H6A (H6, H7, H8) 0.000 0.40 DADL→ AE2B H6A (H6, H7, H8) 0.000 0.35 SILR → AE3 H6A (H6, H7, H8) 0.000 0.46 DADL→ AE3 H6A (H6, H7, H8) 0.000 0.51 Note: β = standard estimate. SILR=Sense,Interpret,Learn,Respond: DADL=Data Analysis, Decision Latency. AE2B=extrinsic adaptive behaviours, AE3=intrinsic, interwoven adaptive behaviours Table 5.25: The Latent Variables Standard Estimates and their Significance Used to Test the Second Set of Hypotheses Hypothesis 9 (H9): Given the results of the EFA, CFA and SEM solutions, the Interwoven Behaviour construct was subsumed and reflected by the AE3 latent variable, which represents an enterprise’s intrinsic, interwoven adaptive behaviour. This meant that the hypothesis was unable to be tested using factor analysis even though the CFA demonstrated that the latent variable AE3 had construct, convergent, as well as discriminant validity (refer section 5.6.1.9.1). In addition, the SEM revealed a covariance of 0.47 between the AE2B and AE3 latent variables. Therefore, the results of the factor analysis did not directly support the hypothesis that interwoven behaviour enables an enterprise to be adaptive. In summary, the two overarching hypotheses for the Adaptive Zone model were supported and the overarching hypothesis for the Behavioural Flows model was also supported. As mentioned H9 was unable to be tested (refer Table 5.26). 
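For completeness, the path estimates and p-values that underpin these hypothesis decisions (Tables 5.23 and 5.25) can be pulled directly from the fitted lavaan objects. The sketch below assumes fit_sem2 from the mediated AZM fitting sketched earlier and an analogous, hypothetical BFM fit named fit_bfm.

# Standardised estimates, p-values and defined (indirect/total) effects used
# to evaluate H1A and H5A from the mediated SEM AZM Model fit.
est_azm <- standardizedSolution(fit_sem2)
subset(est_azm, op %in% c("~", ":="))

# Equivalent extraction for H6A from the SEM BFM Model fit, where SILR and
# DADL regress onto AE3 and AE2B ('fit_bfm' is a hypothetical object name).
est_bfm <- standardizedSolution(fit_bfm)
subset(est_bfm, op == "~")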
Summary This chapter provides an in-depth description of the explanatory phase of the research. The purpose of the explanatory phase was to explore the concept of an AE through validation of the AE concepts, models, and hypotheses that emerged from the Delphi study. To facilitate the exploration and validation two Hypothesis models were conceptualised, and nine Explanatory Survey 203 Hypotheses were formulated based on the models. A survey instrument was constructed with the primary objective of generating data to test the hypotheses through FA. This objective underlies the design of the survey in regard to both the structure and selection of questions. After the survey was pilot tested, and accordingly refined, it was administered to 504 employees of a variety of organisations in the USA. Overarching Hypotheses: Adaptive Zone Hypothesis Model Hypothesis Hypothesis Test Result H1A: A balanced strategy, flexible organisation and loosely coupled processes enable an enterprise to be adaptive Supported H5A: An interwoven structure supported by dynamic information enables an enterprise to be adaptive. Supported Overarching Hypothesis: Behavioural Flows Hypotheses Model Hypothesis Hypothesis Test Result H6A: Blended information, Learning and Control flows enable an enterprise to be adaptive. Supported H9: Interwoven behaviour enables an enterprise to be adaptive. NA Table 5.26: Results of Hypothesis Tests The data collected by the survey instrument was assessed using the statistical factor analysis techniques of EFA, CFA, and SEM. The EFA was used to condense the survey data, which measured several relevant AE variables, into composite dimensions and identified the underlying factor structure. Next, the CFA was used to evaluate and identify a factor structure in terms of a priori pattern of factor loadings. These factor dimensions that were identified by the EFA, and confirmed by the CFA, enabled the SEM to be specified and evaluated. The results of the SEM were used to test the hypotheses and achieve the survey objectives. The main finding of SEM AZM Model revealed that for an enterprise to be adaptive the 3 key components of balanced strategy, flexible organisation and loosely coupled processes, together with an interwoven structure supported by dynamic information, are required. The main finding of SEM BFM Model indicated that blended information, learning, and control flows also enable an enterprise to be adaptive. Discussion of the Research Findings 204 6 Discussion of the Research Findings This chapter provides an in-depth discussion of the findings from the exploratory and explanatory phases of the research. The research resulted in insights that led to the development and validation of a series of concepts, models and hypotheses that could be used to guide the development of an AE as well as informing AE research and practice. Chapter 4 focuses on explaining the execution of the Delphi rounds, including the theoretical support for the implementation of the study, and the subsequent analysis process. The focus of Chapter 5 is on the implementation and statistical analysis of the survey that was used to test nine hypotheses. Some of the preliminary research outcomes from the Delphi and survey are outlined in Chapters 4 and 5 respectively, while this chapter focuses on synthesising the overall research findings obtained from the Delphi and survey. The structure of this chapter is outlined in Figure 6.1. 
Figure 6.1: Structure of the Discussion Chapter Discussion of the Research Findings 205 There were two overarching questions that motivated this research: What is an Adaptive Enterprise? and How does an enterprise adapt? Section 6.1 outlines the development and theoretical underpinnings of the five AE models. In sections 6.2 to 6.4 the Key Components Model, Structural Elements Model, and Behavioural Flows Model that were proposed to answer the first overarching research question are discussed. To answer the second overarching research question, the Adaptive Zone Model and the Adaptive Enterprise Transformation Cycle are discussed in sections 6.5 and 6.6 respectively. In section 6.7 the enablers, disablers and characteristics of an AE are presented. The models and cycle discussed in sections 6.1 to 6.6 were primarily derived, refined, and validated through a review of the literature, the Delphi Study, and peer reviewed publications. Two of the key models namely, the Adaptive Zone Model and Behavioural Flows Model, were further refined and validated through an explanatory survey study. The validation of these two models also helped to validate the foundational concepts of adaptive, deliberate, and emergent approaches, as well as the Key Components Model. The hypothesis tests and results in regard to the concepts and models are discussed in section 6.8. The chapter ends with a synopsis of the findings from the explanatory phase of the research and includes additional thoughts and perspectives about AE in section 6.9. The next section provides an overview of the development of the AE models and discusses the theoretical underpinnings of these models and this research. 6.1 Adaptive Enterprise Models A comprehensive selection of literature on enterprise adaptation was reviewed in Chapter 2. As part of this review concepts of importance to AE were explored, with some fundamental AE domain issues and requirements being discovered. These issues and requirements were considered and found to be knowledge gaps that need to be addressed to support development of a coherent understanding about AE (refer section 2.7). The common theme that emerged from the review and evaluation of the literature was that there is a lack of concepts, models Discussion of the Research Findings 206 and hypotheses to support research into, and the practice of, AE. This lack provided the impetus and reasons for conducting research into AE. The literature review included various relevant transformation approaches, models, and frameworks, such as the MIT90’s Framework (Scott Morton, 1991), the SIDA Adaptive Enterprise (Haeckel, 2016), the Strategic Management Process (SAP, 2000), the ARIS framework (Sheer, et al., 2003) and Edge of Chaos (Quinn, 1985; Eisenhardt & Brown, 1998; Sheer, 2007), to name a few. Conceptually these models are the same, and have a common purpose, although some of the models and frameworks are at a higher level of abstraction than others. Furthermore, there are certain key components, structures and behaviours that are common to each, which indicates that plausible research on AE should account for, and be grounded in a similar premise. Hence, the literature review informed the Delphi study from which the concepts and models of an AE emerged through an incremental development approach, as outlined in sections 3.5.2, and included being refined and validated by experts, which is discussed in sections 4.2 to 4.4. As mentioned, the five AE models were created using an incremental model development process. 
The process was initiated with the creation of the foundational concepts that are integral to the 1) Key Components Model. This model was the fundamental building block for the 2) Structural Elements Model, which was further developed to create the 3) Behavioural Elements Model, which in turn, informed the 4) Adaptive Zone Model. Next, the 5) enablers, disablers and characteristics of an AE were determined and, together with the other models, were incorporated to create the 6) Adaptive Enterprise Transformation Cycle, as depicted in Figure 6.2: The Incremental Development of the AE Models. Figure 6.2: The Incremental Development of the AE Models Discussion of the Research Findings 207 The AE models proposed and tested in the Delphi are grounded in four seminal theories. These theories are representative of the basic structural and behavioural elements of an AE. The first theory is Anthony’s (1965, 1988) organisational management framework, which identifies three levels for organisational management and control: the strategic, tactical, and operational levels. These three levels of an organisation are now well recognised in academia and industry and have evolved into commonly accepted management and control/decision levels (Sadagopan, 2014). The second seminal theory as proposed by Mintzberg and Waters (1985), suggests that traditional strategy implementation focuses on deliberate strategies, whereas some organisations implement strategies before they are even formulated. Strategies that unfold in this way are emergent strategies, and most organisations use both deliberate and emergent approaches. The third theory, the sense-and-respond enterprise, is suggested by Haeckel (1995, 2004, 2016) and Gothelt and Seiden (2017) and is implicitly reflected in the AE models proposed in the Delphi study. The fourth, and final influential theory, which over time has been universally accepted and is often referred to as general systems theory, is Norbert Wiener’s (1948) Cybernetics. A cybernetic system is comprised of adaptive, self-stabilizing, inter-systemic and intra-systemic hierarchies, and can be simply described as a regulatory system within a system (Von Bertalanffy, 1968). In addition to the aforementioned seminal theories, the models are based on a ‘vision to action’ cycle that is facilitated by Anthony’s (1965, 1988) organisational management framework. The vision to action model (Peko, Dong & Sundaram, 2014) argues that it is the vision of an AE that guides its strategy, which in turn will drive the tactics. These tactics determine the operations, which then become the actions of the enterprise. This model of vision to action represents the basic structural and behavioural elements of an enterprise and, together with the four key components (SOPI), is the foundation for all the models proposed in the Delphi (refer Figure 6.2). All the AE models were developed from the results of the first round of the Delphi study as well as being informed by relevant literature. In the second round of the Delphi the Key Components model was evaluated and validated by the experts who rated its factors in terms of their importance whereas the other four AE models were evaluated and validated by the degree of agreement. Discussion of the Research Findings 208 The following sections 6.2 to 6.4 explore three of these models beginning with the Key Components Model (refer section 6.2), followed by the Structural Elements Model (refer section 6.3) and lastly, the Behavioural Flows Model (refer section 6.4). 
These three models were proposed to answer the first overarching research question "What is an Adaptive Enterprise?" The key components are considered in terms of the deliberate, emergent, and adaptive approaches. The discussion then turns to how these components can be interwoven into the enterprise's adaptive dimensions, as illustrated by the other AE models derived from the Delphi.

6.2 Key Components Model

The Key Components Model is integral to the other models and artefacts that were generated from the first round of the Delphi, and subsequently included in the later rounds. The model is parsimonious in its conceptualisation of the four components of an AE, namely Strategy, Organisation, Process, and Information (SOPI), as shown in Figure 6.3.

Figure 6.3: The Key Components Model

The key components were derived from a set of factors that were synthesised from the themes that emerged in the first round of the Delphi (refer section 4.3.1.3 and Appendices A7 and A8). These factors were then rated in terms of their importance in the two subsequent Delphi rounds through the use of a five-point Likert scale. As well as the findings of the Delphi, the four components are supported by a mature body of knowledge found in academic and industry literature (refer Chapter 2) that spans several disciplines, including management (strategy and organisational structures), operations management (business processes), and IS.

Table 6.1 and Table 6.2 show each of the SOPI components' factors ranked according to their importance rating results generated from Delphi rounds two and three, with priority given to the Round Three ratings. Table 6.1 also shows the 'Strategy' factor, Factor 6 ("Has ONLY a purely proactive approach to strategy development that anticipates environmental changes and potentially leads the market in innovation and practices"), which did not achieve the level for consensus as measured in the rating results (Appendices B4 and C3). This suggests that an AE does not engage in a purely deliberate (proactive) approach to strategy development. Moreover, interpreting the result together with those of Factor 2 and Factor 7, it is reasonable to conclude that an AE adopts a deliberate-emergent (adaptive) approach to strategy development that enables it to anticipate and react to environmental changes. This notion is supported by the panel of experts' qualitative comments, which suggest that the strategy mix (a blend of proactive and reactive) depends on the operating context of the enterprise, with a more reactive blend employed in more dynamic environments.

All the factors of the Organisation component reached the required level of consensus (refer Table 6.1). The qualitative data from the experts emphasised the need for an adaptive approach, and it was suggested that the adaptive mix, a blend of deliberate and emergent, could be maturity dependent. The expert panellists generally agreed that enterprise units should be able to plan and operate independently but within the organisation's broader strategic direction and guidelines. Similarly, innovations should be organisationally goal driven rather than promoted by the desire of an individual unit. An insight that emerged was that an organisation can potentially use different management and control strategies depending on its stage of maturity. For instance, the 'tell' methodology (strong leadership) may be critical during the 'infant' stages of AE change.
However, once the enterprise has developed and moved into the next stage of maturity, a decentralised structure and innovation are more effective since these drive a new and engaging organisational culture. It is posited that the maturity of an AE develops and renews through adaptation so that the enterprise can survive.

Table 6.1: Importance Ranking of AE Key Component Factors: Strategy and Organisation

Strategy | Round 2 µ (SD) | Round 3 µ (SD)
1 Establishes strategies by planning and review that are clearly articulated to achieve the organisation's vision. | 4.04 (1.00) | 4.43 (0.69)
2 Has both proactive and reactive approaches to strategy development (evaluating if need to innovate, need to follow). | 4.14 (0.89) | 4.36 (0.62)
3 Has clear strategic goals, objectives and focus (business outcomes that meet stakeholder or shareholder expectations). | 4.32 (0.90) |
4 Has enterprise-wide development of strategy that incorporates more than just senior management's trusted strategies and stated goals. | 4.11 (0.69) |
5 Has adaptation processes that are strategically planned and designed. | 3.82 (1.02) | 4.04 (0.64)
6 Has ONLY a purely proactive approach to strategy development that anticipates environmental changes and potentially leads the market in innovation and practices. | 3.54 (1.07) | 2.46 (0.84)
7 Has ONLY a purely reactive approach to strategy development where incremental changes are used to meet current and short-term future requirements. | 2.68 (1.12) | 2.07 (0.94)

Organisation | Round 2 µ (SD) | Round 3 µ (SD)
1 Has effective leadership to realise opportunities. | 4.71 (0.46) |
2 Has an organisational culture that embraces change. | 4.57 (0.57) |
3 Has senior management who are sensitive to environmental changes. | 4.46 (0.74) |
4 Has organisational freedom to innovate. | 4.46 (0.69) |
5 Has macro level restructuring, creation of roles and rules at the meso (middle) level, and mechanisms in place to support adaptation at the micro level. | 4.43 (1.20) | 3.61 (0.74)
6 Has an open culture that seeks and trusts the opinions of organisational participants. | 4.36 (0.73) |
7 Has a shared organisational vision with tangible outcomes. | 4.32 (0.82) |
8 Has adaptation parameters that are delegated to all management levels with the authority to adapt. | 4.29 (0.71) |
9 Has 'new school' participative and collaborative management structures. | 4.04 (0.84) |
10 Has a decentralised structure that allows the organisation's functions to interact with the external environment. | 3.79 (0.74) |
11 Has regional adaptation that is delegated but head office monitors for control. | 3.79 (0.88) | 3.75 (0.84)
12 Has 'old school' command and control management structures. | 1.89 (0.96) | 1.54 (0.64)

Several interesting insights were drawn from the qualitative data about the 'Process' and 'Information' components (refer Table 6.2). The experts agreed that the enterprise process and information model was both enterprise and context dependent. It was suggested that an AE needs to develop a logical model for its specific industry/environment due to the varying degrees of industry dynamism and the complexity of the enterprise. There was agreement that each level of the enterprise requires access to different types of information, but that the information accessibility structures should not inhibit AE creativity. Lastly, an AE needs to be aware of the risks associated with 'bleeding edge' technology, particularly if it is at the expense of an adequately working IS. Also, any innovation needs first to be carefully considered as it should not jeopardise the good faith felt by employees and other stakeholders.

Table 6.2: Importance Ranking of AE Key Component Factors: Process and Information

Process | Round 2 µ (SD) | Round 3 µ (SD)
1 Has the capability to change its processes in order for the enterprise to adapt. | 4.54 (0.69) |
2 Has a shared understanding and knowledge among all enterprise participants about processes in relation to business requirements. | 4.07 (0.77) | 4.32 (0.61)
3 Has intra-firm processes that give access to resources and capabilities to be utilised as and when required. | 4.07 (0.72) | 4.32 (0.55)
4 Has processes that are accessible and transparent (process visibility). | 4.25 (0.75) |
5 Has BPM that strategically plans and designs cross-functional processes. | 3.89 (0.88) | 4.21 (0.83)
6 Has flexibility in the way it conducts its core business. | 4.18 (0.90) |
7 Has streamlined processes to remove costs and provide value to customers. | 4.18 (0.86) |
8 Has efficient, standardised processes at the operational level. | 4.04 (0.88) |
9 Has inter-firm (between firms) processes that give access to resources and capabilities to be utilised as and when required. | 3.93 (0.86) | 3.96 (0.79)

Information | Round 2 µ (SD) | Round 3 µ (SD)
1 Has integrated enterprise-wide IS and data that is accessible by all employees. | 4.18 (0.98) | 4.75 (0.44)
2 Has the requisite ICT infrastructure to be able to respond to changing demands. | 4.64 (0.56) |
3 Has externally orientated knowledge and information seeking and capture activities. | 4.18 (0.86) | 4.39 (0.63)
4 Has both internal and external reliable, real time data. | 4.32 (0.72) |
5 Has information and knowledge management as a capability. | 4.25 (0.75) |
6 Has the requirement that IT investments must have a supporting information management plan. | 3.96 (0.84) | 4.11 (0.69)
7 Has advanced business network systems such as a Supply Chain Management system. | 4.07 (0.86) |
8 Is consistently an early adopter of technology. | 3.18 (0.90) | 3.21 (0.74)

Recent literature has highlighted an integrated perspective of the key components of an enterprise. This integrated perspective, together with the proposed deliberate, emergent, and adaptive approaches, forms the basis of the Key Components Model that was validated by the findings of the Delphi study. In the first round of the Delphi the four components of strategy, organisation, process, and information were identified and defined through various representative factors. These factors broadly described the principal deliberate, emergent, and adaptive structural and behavioural aspects that are associated with each of the components in the context of an AE. Each of the factors was then subjected to an importance rating in Round Two and Round Three of the Delphi in order to test their validity. The results are shown in Table 6.1 and Table 6.2. There are many theoretical and practical approaches to defining the key components of an AE. However, given the exploratory nature of the Delphi, and with reference to the relevant literature, these operationalised component factors are not purported to be all-inclusive. Rather, they are representative of the deliberate, emergent, and adaptive structural and behavioural aspects.

6.3 The Structural Elements Model

The three organisational levels with their key components (SOPI) form the Structural Elements Model (refer Figure 6.4). The model is the conceptualisation of two primary structures of an AE. First, it depicts the strategic, tactical and operational levels of an enterprise.
Second, it explicitly illustrates the relevant SOPI components for each of the three levels. In addition, it is implicit in the model that a greater, macro SOPI exists for the enterprise as a whole. Hence, while the model depicts the three organisational levels, it also explicitly and implicitly demonstrates that there is an organisational system within a system, which is representative of a cybernetic system structure.

Figure 6.4: The Structural Elements Model

Most enterprises adopt a hierarchical management and control structure in which long term strategic decisions are made at the top level of the organisational structure, middle term tactical decisions are made at the middle level, and day-to-day operational decisions take place at the bottom of the structural hierarchy (Wiener, 1948; Von Bertalanffy, 1968; Anthony, 1988; Scott Morton, 1989; Sadagopan, 2014; Ho, 2015). The premise of the Structural Elements Model is that each organisational level is a microcosm that has interwoven key components of strategy, organisation, process, and information. This microcosm engages the interwoven SOPI components to facilitate relevant decision making, and the required actions, at each organisational level so that the objectives of that particular level can be achieved. Therefore, the interwoven SOPI components will influence, and be influenced by, the organisational level objectives, decisions, and actions, with the implied understanding that the SOPI composition will be unique to each of the organisational levels.

The Structural Elements Model was evaluated, refined and validated in the second round of the Delphi study, with 86% of the experts indicating that they agreed, or strongly agreed, that the proposed structural elements could help an enterprise to be adaptive (refer section 4.3.4.2). This resulted in a validated AE artefact that will contribute to the development of a coherent understanding about AE and, in particular, the crucial structural elements of an AE.

6.4 The Behavioural Flows Model

The Behavioural Flows Model conceptualises the proposed three primary behavioural flows (information, learning, and control) of an AE and is depicted in Figure 6.5. As previously mentioned, the model is part of an incremental series of AE models that resulted from the Delphi and so encapsulates the elements (Key Components Model) and AE structures shown in the Structural Elements Model (refer Figure 6.4). An AE has information flows, learning flows, and control flows, which have both a top down (deliberate) and bottom up (emergent) behaviour. These three primary behavioural flows are interwoven. The Behavioural Flows Model demonstrates the way an AE can adapt and achieve its vision.

Figure 6.5: The Behavioural Flows Model

This research and its artefacts, which include the Behavioural Flows Model, are based on the premise that an AE is an adaptive system that senses, interprets and responds to its internal and external environments (Wiener, 1948; Von Bertalanffy, 1968; Anthony, 1988; Gorry & Scott Morton, 1989; Scott Morton, 1991; Mintzberg, 1994; Scott & Davis, 2015; Haeckel, 1995, 2004, 2016). In the sense/interpret/respond context it is the decision making, facilitated by the enterprise's structures and behaviours, that enables the AE to be adaptive (Dooley, 1997).
The information flows are the principal mechanism that operationalises and supports the sensing decision behaviours. These information flows are an interwoven mix of deliberate and emergent information that permeates each level of the AE. The flows enable the enterprise to capture and then distribute the information required for sensing its environment. Once information has been captured and disseminated throughout the enterprise, the information is interpreted and the enterprise learns. The learning flows are then the mechanism that operationalises and supports the interpreting decision behaviours. The interwoven mix of deliberate and emergent, horizontal and vertical learning flows throughout the enterprise and enables it to continually learn, create knowledge, and support decision making. The decisions are implemented, managed and controlled through the mechanism of the control flows. Interwoven, enterprise-wide control flows that are supported by information, learning, and knowledge allow the enterprise to respond appropriately to its environment. As a consequence, the interwoven information, learning, and control flows help each level of the organisation, as well as the organisation as a whole, to sense, interpret and respond. This results in decision making and adaptive behaviours that enable the enterprise to develop agility.

The Behavioural Flows Model was evaluated and validated through the Delphi study, which also generated expert insights that contributed to the model's refinement (refer section 4.3.4.6). In Round Two, 93% of the expert panel agreed, or strongly agreed, that the Behavioural Flows Model could help an enterprise to be adaptive (refer section 4.3.4.5). The model was refined and validated further through hypothesis testing facilitated by the survey and statistical factor analysis (refer section 5.5). This rigorous, dual-study development process generated a validated artefact that advances current knowledge by suggesting that an AE engages information, learning and control flows to enable it to adapt.

In the following sections the research artefacts that were proposed to answer the second research question, "How does an enterprise adapt?", will be discussed. These artefacts are the Adaptive Zone Model, described in section 6.5; the Adaptive Enterprise Transformation Cycle, explained in section 6.6; and the enablers, disablers and characteristics of an AE, examined in section 6.7.

6.5 The Adaptive Zone Model

The Adaptive Zone Model suggests that an enterprise needs to be balanced on the key components of strategy, organisation, process, and information to be truly adaptive. Balanced in this context means a balance between the deliberate approach and the emergent approach, which is conceptualised as an adaptive zone. This adaptive zone represents the optimal area in which an AE resides. The zone is situation dependent and, therefore, the adaptive zone for an enterprise will vary depending on context. In addition, the balance for each of the four key components may vary even though the components are interwoven. The model is an amalgamation of the AE key components, the concept of deliberate/emergent/adaptive, and an adaptive zone. The Adaptive Zone Model, as shown in Figure 6.6, is a further incremental step in the development process of the AE models.
Figure 6.6: The Adaptive Zone Model

The adaptive zone research artefact was primarily informed by the findings from the Delphi study and the influential and well recognised work of Eisenhardt and Brown (1998), Scheer (2007), and Haeckel (1995, 2004, 2016). Eisenhardt and Brown (1998) contend that organisations should compete on the edge, where there is a balance between being over structured and under structured; being on the edge allows the organisation to keep pace with change and successfully alter its strategic course over time. Similarly, Scheer (2007) posits that there is an edge of chaos space (refer section 2.3.3), which an organisation should populate. This edge of chaos is where organisations are able to improvise and be their most creative, and respond effectively to rapid change in their environments (Quinn, 1985; Eisenhardt & Brown, 1998; Scheer, 2007). Lastly, Haeckel (1995, 2004, 2016) and Gothelf and Seiden (2017) argue for an adaptive business design, which incorporates a sense and respond approach to manage the interactions of organisational system components (deliberate), while allowing the content of those components to be managed at the sub-system level (emergent). In the Adaptive Zone Model these concepts are reflected as a fluid, organic state (zone) in which an enterprise demonstrates a 'Balanced Strategy', 'Flexible Organisation', 'Loosely Coupled Processes', and 'Dynamic Information'.

The Adaptive Zone Model was first evaluated and validated through the Delphi study, which also yielded expert insights that contributed to refinement of the model (refer section 4.3.4.10). The model was further refined and validated through hypothesis testing facilitated by the survey and statistical factor analysis (refer section 5.5). This resulted in a validated AE artefact that extends current knowledge by suggesting that an adaptive zone exists in which each of the SOPI components is optimised in terms of the adaptive approach.

6.6 The Adaptive Enterprise Transformation Cycle

In section 2.6 several transformation models were reviewed that relate to the successful evolution of an organisation over the long term. According to Reeves et al. (2015), an enterprise, in order to transform, needs to interweave the ability to adjust "…into the fabric of the enterprise." (p. 76). Hence, a key purpose of transformation models is to conceptualise the components, structures, and processes that are involved in transforming an organisation. Regardless of whether the perspective is from the macro organisational level (SAP, 2000; Kumaran et al., 2007; ARIS, 2009; Ahmed & Sundaram, 2012) or from a micro level perspective (IBM, 2006; SAP, 2009), most organisational transformation models adopt a cyclical approach. There is a scarcity of transformation models that incorporate structures and behaviours with deliberate, emergent and adaptive (deliberate-emergent) approaches, despite several academic and industry AE experts advocating a need for this type of conceptual framework (Tallon, 2008; Reeves et al., 2015; Weichhart et al., 2016). In response to this need, the Adaptive Enterprise Transformation Cycle has been developed. It is a dynamic model that proposes an iterative process approach to facilitate the transformation of an enterprise into an AE.
The Adaptive Enterprise Transformation Cycle, as shown in Figure 6.7, is a combination of the other AE models, including the enablers, disablers, and characteristics that have been developed through the Delphi research. The cycle also includes the dynamic relationships between the models, which facilitate the transformation process.

Figure 6.7: The Adaptive Enterprise Transformation Cycle

As with the other AE models, the Adaptive Enterprise Transformation Cycle was created using a rigorous Delphi development process, which involved evaluation and validation through group consensus by the panel of AE experts. This consensus indicated that 86% of the experts agreed or strongly agreed that the transformation cycle enabled an enterprise to be adaptive, and 82% agreed or strongly agreed with the relationships between the model components. The Delphi also generated expert insights that contributed to the model's refinement (refer section 4.3.4.13).

The AE transformation model can provide guidance to both academic research and practice. However, it should be particularly useful for practitioners because it offers insights for those who are engaged in AE transformation. As mentioned previously, the model illustrates key AE elements and structured cyclical phases, supported by the identification of enablers, disablers, and characteristics of the transformation process. The importance rankings associated with each of the enablers, disablers, and characteristics offer additional insights. Hence, the Adaptive Enterprise Transformation Cycle, together with the other AE concepts and models, makes a contribution towards addressing the need for such models (Haeckel, 2016; Weichhart et al., 2016). In addition, the Delphi derived AE models offer opportunities for future research, as the AE themes, components, enablers, disablers, and characteristics could be further refined and extended. The relationships and interdependencies among these elements can be investigated and validated by applying other qualitative and quantitative research instruments. As the Adaptive Enterprise Transformation Cycle is a dynamic model, it could be modelled as it is, or used as a systems thinking artefact in a simulated environment, so that the interrelationships that constitute the transformation cycle can be examined and validated. This could result in further insights about AE, and mitigate the limitations of the current model, to provide an enhanced understanding of AE.

6.7 Enablers, Disablers and Characteristics of AE

The Delphi study identified and facilitated the evaluation of AE enablers, disablers, and characteristics which, together with the other AE models, are integral components of the Adaptive Enterprise Transformation Cycle (refer Figure 6.7). The premise of these three artefacts is that a) enablers help an AE to transform its vision into action, while b) disablers prevent an enterprise from doing so, and c) as an enterprise becomes more adaptive it begins to exhibit the characteristics of an AE. These enablers, disablers, and characteristics were identified in the first round of the Delphi and then rated on a five-point Likert scale in regard to their importance in the following two rounds. The rating results are shown in Tables 6.3 to 6.7 (enablers), Table 6.8 (disablers) and Table 6.9 (characteristics), together with the individual enablers, disablers and characteristics.
The experts were also given the opportunity to comment on them as part of the evaluation procedure. The enablers, disablers and characteristics will be discussed in the following sub-sections.

Figure 6.8: The Enablers, Disablers, and Characteristics of AE

Enablers of an AE

As mentioned, enablers help an AE to transform its vision into action and so are inherent in the key components of an AE. The proposed enablers from Round One of the Delphi were collated into five categories according to an emerging overarching AE theme that was representative of the enablers in each category (refer section 4.3.1.3). The five categories/overarching themes are: Sense and Respond, Flexibility, Integrated Systems, Tools and Technologies, and People. These categories did not have an equal number of enablers in each: the 'Flexibility' category had 9 enablers, 'Integrated Systems' had 7, the 'Sense and Respond' and 'Tools and Technologies' categories had 4 each, and the 'People' category had 7 enablers. The 31 identified AE enablers (also referred to as factors) are shown in Tables 6.3 to 6.7. They are grouped according to the five categories and in order of importance within those categories, with priority given to the Round Three ratings. The tables also show the Delphi round in which each enabler achieved the level of consensus, as measured by the rating results. The results for enablers number 13 and number 20 did not achieve consensus. Given that 55% of enablers achieved consensus in the second round of the Delphi, followed by 94% in the third round, it is reasonable to conclude that the enablers were strongly supported by the experts and, therefore, were validated.

Table 6.3: Importance Ranking of Sense and Respond AE Enablers

Sense and Respond Enablers | Round 2 µ (SD) | Round 3 µ (SD)
1 Formal and informal, vertical and horizontal, internal and external communication networks. | 4.54 (0.58) |
2 Monitor internal and external environment to detect changes that are then captured, shared and subsumed by the enterprise. | 4.43 (0.69) |
3 Performance measures (KPIs) which enable monitoring and incentives that encourage responsiveness. | 4.07 (0.86) |
4 Analysis and reporting on the internal and external environment of the organisation. | 3.93 (0.66) |

Table 6.4: Importance Ranking of Flexibility AE Enablers

Flexibility Enablers | Round 2 µ (SD) | Round 3 µ (SD)
5 Decision making freedom according to role and responsibility. | 4.54 (0.64) |
6 Flexible policies for each part of the organisation that allow it to change. | 4.11 (0.96) | 4.29 (0.46)
7 Support systems to encourage experimentation. | 3.96 (1.04) | 4.21 (0.69)
8 Skilled workforce that can be redeployed. | 4.18 (0.82) |
9 Broad parameters of expected behaviours rather than rules. | 4.04 (0.69) |
10 Staff are given autonomy to make changes as required. | 3.68 (0.82) | 3.93 (0.66)
11 Encouragement for risk-taking behaviour in new ventures. | 3.57 (0.92) | 3.82 (0.77)
12 Liberal human resource policy to allow for staff rationalisation. | 3.29 (1.05) | 3.50 (0.75)
13 Decisions are enforced through financial and managerial sign offs. | 3.04 (1.00) | 3.42 (0.92)
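To make the rating-and-consensus assessment concrete, a minimal illustrative sketch follows. It assumes a long-format table of five-point Likert importance ratings (one row per expert, enabler, and round) and an assumed consensus rule of at least 70% of the panel rating an item 4 or 5; the column names, sample values, and threshold are hypothetical and do not reproduce the instrument, data, or consensus criterion actually used in this research (reported in Chapter 4 and Appendices B4 and C3).

```python
import pandas as pd

# Illustrative long-format Delphi ratings: one row per expert, per enabler, per round.
# The enabler numbers, rating values, and 70% threshold are assumptions for this sketch.
ratings = pd.DataFrame({
    "enabler": [13, 13, 13, 13, 20, 20, 20, 20],
    "round":   [2,  2,  3,  3,  2,  2,  3,  3],
    "rating":  [3,  4,  3,  4,  4,  3,  4,  5],   # five-point Likert importance scores
})

summary = (
    ratings.groupby(["enabler", "round"])["rating"]
           .agg(mean="mean",
                sd="std",
                pct_high=lambda s: (s >= 4).mean())   # share of the panel rating 4 or 5
)

# Assumed consensus rule for illustration: at least 70% of ratings are 4 or 5.
summary["consensus"] = summary["pct_high"] >= 0.70
print(summary.round(2))
```

A summary of this kind yields, for each enabler and round, the mean importance rating, its standard deviation, and a flag indicating whether the assumed consensus rule was met.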
One of the enablers whose ratings did not reach the required level for consensus was number 13, the 'Flexibility' enabler "Decisions are enforced through financial and managerial sign offs" (refer Table 6.4). This indicates that a purely deliberate management and control mechanism might be considered a constraint on AE flexibility and, therefore, on the ability of an AE to adapt. However, the final rating result (refer Appendix C3) was close to the consensus cut-off, which indicates that the expert panel did not dismiss the proposed enabler outright and suggests that some deliberate-emergent level of management and control is still associated with AE flexibility.

Table 6.5: Importance Ranking of Integrated Systems AE Enablers

Integrated Systems Enablers | Round 2 µ (SD) | Round 3 µ (SD)
14 Link and align enterprise architecture (processes and IT infrastructure) and work programmes with the organisation's strategy. | 4.14 (0.80) | 4.32 (0.77)
15 Have a systems perspective to manage change while taking into account the organisation's capabilities. | 4.25 (0.75) |
16 Review of performance and implementation of action plans that align with desired business outcomes. | 3.93 (0.86) | 4.25 (0.65)
17 A network view of value creation since an enterprise does not operate in isolation. | 4.18 (0.67) |
18 Careful selection of trading partners. | 4.04 (0.88) | 4.18
19 People have an organisation-wide understanding in regard to the work they perform. | 4.00 (0.77) |
20 Bring about incremental change through realignment of strategy, resources, processes and technologies. | 3.82 (0.77) | 4.00 (0.77)

Table 6.6: Importance Ranking of Tools and Technologies AE Enablers

Tools and Technologies Enablers | Round 2 µ (SD) | Round 3 µ (SD)
21 Appropriate tools and methodologies to implement change. | 4.32 (0.77) |
22 Staff have access to appropriate, real-time data. | 4.29 (0.71) |
23 Proactively explore new options and technologies to construct systems for a future environment. | 4.11 (0.83) | 4.29 (0.53)
24 Leverage business intelligence to form future scenarios that are not obvious based on current business trajectory. | 4.14 (1.01) | 4.18 (0.94)

Number 20, the second enabler that did not reach the required level of consensus, was the 'Integrated Systems' enabler "Bring about incremental change through realignment of strategy, resources, processes and technologies" (refer Table 6.5). However, the final rating result (refer Appendix 1) was close to the consensus cut-off, so although the expert panel did not reach outright agreement it could be interpreted that an AE realises more than just incremental change, and that other types and degrees of change may also be achieved through the realignment of the SOPI components.

The results of the statistical measures indicate that the experts endorsed 94% of the enablers, and the qualitative data collected about the enablers revealed valuable insights. These insights emerged for each of the five categories/overarching themes and enriched the understanding about the various enablers and their contribution to the development of an AE. A synopsis of the emerging insights is presented in the following sub-section.

Table 6.7: Importance Ranking of People AE Enablers

People Enablers | Round 2 µ (SD) | Round 3 µ (SD)
25 A culture that engenders employee engagement, motivation, and pride in the organisation. | 4.64 (0.68) |
26 Leaders have persuasive skills to ensure employees can be 'brought on board' with the change. | 4.00 (1.15) | 4.54 (0.51)
27 Employees are proactively involved in organisational change. | 4.39 (0.57) |
28 Leaders constantly encourage change rather than having a 'business-as-usual' mentality. | 4.21 (0.74) |
29 People have the ability, and power and authority, to make and implement decisions. | 4.21 (0.63) |
30 Employees who embrace change. | 4.16 (0.75) |
31 Political adeptness to know when to use qualified support such as consultants to achieve outcomes beneficial to the organisation. | 3.64 (0.83) | 3.89 (0.63)

In terms of an AE as a 'Sense and Respond' organisation, the experts suggest that information capturing, analysis, and reporting, guided by focused performance measures, are critical. For performance measures to be effective, they should be focused on the most important internal and external environmental success factors. They need to have a positive orientation to incentivise both emergent, autonomous, and creative responses, as well as collaborative, strategically aligned responses. In addition, an AE is constantly learning at both the individual and enterprise level, since learning is intrinsic to the information capture/analysis/reporting and informed response cycle (refer section 2.4).

The human element is common to several of the proposed enablers. For example, employees are an important aspect of the 'Integrated Systems' enablers because this research advocates an AE as an organisational system. These human-related enablers elicited several insights from the experts about people in organisations. They explained that AE employees should have an organisation-wide understanding about the work they perform while being permitted to deviate from this understanding when required. This insight was aligned to some of the 'Flexibility' enablers. Flexibility defines an AE and is embedded in all aspects of the enterprise to enable fast and appropriate responses. Flexible human resources (HR) is, therefore, of particular importance and the experts suggest that it is enabled by liberal HR principles, supported by HR policies and practices that are situation dependent, and moderated by evaluation and accountability. For example, it is considered important to allow employees at all levels to have decision freedom, albeit within guiding parameters and enforced through loosely defined roles and accountability. This type of liberal HR policy is in line with the 'edge of chaos' management and control structures as advocated by Scheer (2007).

The need for strong leadership skills to hone enterprise resources and navigate in the right direction was proposed as one of the principal 'People' enablers. In the context of an AE, the experts suggested that strong leadership engenders an open change culture and effective decision making so that constant change is translated into continuing success. To support these requirements, leaders engage with external resources and seek a centralise/decentralise, contractor/employee resource model.

An expert's insight from the 'Tools and Technologies' category/theme highlighted that it is more about how enablers are used, rather than the tools/methods per se. The analogy given was: "I think it's more important to consider 'how' the tools/methods are used. Give me a Stradivarius violin and the music I make will be atrocious; yet, in the right hands, the music could be some of the sweetest ever heard.
Similarly, a master craftsperson could, even when armed with poor tools and methodologies, probably create the adaptiveness that they seek." This reflection, together with the view that it is more important to have the requisite technologies and tools to be able to respond to changing demands, supports the Delphi finding that an AE does not necessarily have to always be an early adopter of technology (refer Table 6.2).

Disablers of an AE

In Round One of the Delphi, 13 disablers were suggested and then rated, in subsequent rounds, according to their importance. Disablers are considered to have the potential to hinder an AE from transforming its vision into action. As with the enablers, the disablers were also considered to be inherent in the key components of an AE (SOPI). Table 6.8 shows the disablers that were identified and the Delphi round in which consensus was achieved, as measured by the rating result for each disabler. In the table, the disablers are also ranked in descending order according to their importance rating mean score, with priority given to the Round Three ratings.

All but two of the disablers reached the required level of consensus in the second round of the Delphi. Disabler number 12, "Ruthless business practices" (refer Table 6.8), did not achieve consensus in the third and final round. The experts suggest that in the short term these types of practices are unlikely to hinder the enterprise, but over time they could erode staff morale and may limit the organisation's ability to adapt. Despite these implications, it cannot be inferred that ruthless business practices are an outright disabler for an enterprise. Disabler number 13, "Lack of influence over external environmental factors" (refer Table 6.8), also did not achieve consensus. Hence, from this result, it is reasonable to conclude that an AE does not have to influence its external environment in order to be adaptive. However, and most importantly, this does not necessarily imply that an AE is a market 'follower' rather than a market 'leader'; rather, it could be inferred that an AE can be either a market follower or a market leader, or both.

The final results of the quantitative measures for the disablers (refer Appendix C3) show that the experts ratified 85% of the proposed disablers. As with the enablers, the qualitative data about the disablers that was collected through the Delphi provided valuable insights. These insights focused on two interrelated areas of disablement: cost cutting, and attention to short term goals at the expense of longer term objectives. The experts argue that a cost focus, accompanied by a strong internal commitment to 'cut costs', draws attention away from opportunities and can lead to the development of the other disablers. They also suggest that disablers flourish in an environment that is governed by short term policies, which can lead to ruthless practices, instead of the focus being on developing longer term strategies.

Table 6.8: Importance Ranking of AE Disablers

Disablers | Round 2 µ (SD) | Round 3 µ (SD)
1 Lack of understanding about the environment. | 4.50 (0.64) |
2 Insufficient accountability and ownership. | 4.25 (0.97) |
3 Limited by only focusing on the current business environment. | 3.89 (0.74) | 4.14 (0.65)
4 Over regulated and restrictive internal and external environments. | 4.11 (0.88) |
5 Lack of general understanding of the business processes, routines and capabilities. | 4.07 (0.90) |
6 Not connected to effective business networks such as having a fractured supply chain. | 4.04 (0.92) |
7 Non-strategic rationalisation of staff and equipment. | 4.04 (0.84) |
8 Insufficient and misdirected capital expenditure. | 4.00 (0.98) |
9 Reactive HR policies such as staff layoffs or limiting incentives in times of change. | 4.00 (0.94) |
10 Lack of required resources to implement change. | 3.93 (0.90) |
11 Incomprehensive risk management. | 3.75 (0.84) |
12 Ruthless business practices. | 3.54 (1.14) | 3.60 (1.22)
13 Lack of influence over external environmental factors. | 2.93 (1.02) | 2.89 (0.88)

Characteristics of an AE

Similar to the enablers and disablers, the first round of the Delphi also yielded a comprehensive set of AE characteristics. These characteristics were evaluated and validated in Round Two and Round Three of the Delphi by rating them according to their importance and commenting on any aspect of a characteristic. The proposition presented to the experts regarding the characteristics was that as an AE becomes more adaptive it exhibits certain characteristics. Table 6.9 shows the 24 characteristics that were proposed and the Delphi round in which consensus was achieved, as measured by each characteristic's rating result. In the table, the characteristics are ranked in descending order according to their importance rating mean score, with priority given to the third round ratings. The table shows that in the second Delphi round, 17 characteristics, or 71%, achieved the required level of consensus. In the third and final round, 23 of the 24 characteristics, or 96%, were endorsed by the experts through consensus.

The one characteristic that did not achieve consensus was "Possesses capabilities and core competencies that are unique" (refer Table 6.9). This is an interesting result given that much of Prahalad and Hamel's (1990) seminal work advocates the development of core competencies in order to gain competitive advantage. They purport that sustained superior performance can be attributed to a corporation perceiving itself "…in terms of core competencies…" (Prahalad & Hamel, 1990, p. 79). It can be argued that a core competence view may have been relevant for the more stable environments of the latter part of last century, when reconfiguration of specific resources enabled the required response in order to take advantage of strategic opportunities. However, in more recent times an enterprise needs to be "…more agile in responding to change." (Tallon, 2008, p. 21), which requires it to reinvent itself (Reeves et al., 2015), so merely reconfiguring current resources may not fulfil these requirements.

Another insight of note was the lower consensus rating scores for the two AE characteristics of "Sustainable" and "Ethical business practices". Although both characteristics achieved consensus in the third round of the Delphi, an expert made the comment that they may not be AE characteristics: "…a great thing for a company to be sustainable, ethical, law-abiding etc, I don't think those things are related to adaptability." This is an interesting perspective that could perhaps be considered, particularly by practitioners.

Table 6.9: Importance Ranking of AE Characteristics

Characteristics | Round 2 µ (SD) | Round 3 µ (SD)
1 The organisation constantly learns and transfers knowledge. | 4.64 (0.56) |
2 Good governance. | 4.14 (0.85) | 4.54 (0.58)
3 Agile organisation that responds quickly and effectively to change. | 4.50 (0.51) |
4 Ability to change as required. | 4.46 (0.74) |
5 A shared living vision. | 4.11 (0.99) | 4.46 (0.64)
6 Visionary leader willing to take risks and lead organisation in new direction. | 4.39 (0.69) |
7 Strong leadership team. | 4.36 (0.68) |
8 Innovative. | 4.36 (0.56) |
9 Robust management that involves both a risk and evaluative approach. | 4.11 (0.79) | 4.36 (0.49)
10 Customer focused. | 4.32 (0.94) |
11 Strategic: understands the "big picture". | 4.32 (0.72) |
12 Engenders trust between internal and external stakeholders. | 4.29 (0.94) |
13 The enterprise is a holistic, integrated system. | 4.29 (0.76) |
14 Core organisational values and beliefs that influence behaviours. | 4.26 (0.66) |
15 Empowerment that supports change. | 4.25 (0.65) |
16 Maintains robust, collaborative stakeholder relationships. | 4.18 (0.67) |
17 Constant awareness of internal and external environments. | 4.11 (0.74) |
18 Aware and open about the organisation's strengths and weaknesses. | 4.11 (0.69) |
19 Promotes and maintains advantageous business networks. | 4.11 (0.63) |
20 Ethical business practices. | 3.93 (1.15) | 4.00 (0.94)
21 Compliant and operates in the spirit of the law. | 3.79 (1.10) |
22 Sustainable business practices. | 3.75 (1.04) | 3.89 (0.74)
23 A lean organisational philosophy. | 3.54 (0.74) | 4.57 (0.57)
24 Possesses capabilities and core competencies that are unique. | 3.46 (1.07) | 3.57 (0.88)

The following sections discuss the findings of the explanatory phase of the research, which was conducted using a quantitative survey. The discussion begins with the formulation of the hypotheses to test the key components, structural elements, behavioural flows, and the adaptive zone. The results of the hypothesis tests, using factor analysis (EFA, CFA and SEM), will then be discussed.

6.8 Further Refinement and Validation of Concepts and Models

Many of the research artefacts developed in the exploratory phase of this research were concepts and models comprised of constructs, relationships and interdependencies. To further refine, validate and explain these artefacts, nine hypotheses associated with two of the models, the Adaptive Zone Model and the Behavioural Flows Model, were formulated and tested. The primary purpose of these tests was to illuminate, validate and extend the insights that had emerged from the Delphi study to advance understanding of an AE and how an AE adapts. To perform the hypothesis tests a survey instrument was developed and administered (refer sections 5.2 and 5.3), the results of which will be discussed in the following sections. This section introduces the hypothesis testing and, in sections 6.8.1 and 6.8.2, the formulation of the hypotheses and the results of the factor analysis used to test them will be discussed. Finally, a synopsis of the findings from the explanatory phase, including additional thoughts and perspectives, will be presented in section 6.9.

In the exploratory phase of the research the AE models were created incrementally. Two representative models were then selected, the Adaptive Zone Model and the Behavioural Flows Model, for further refinement and validation through hypothesis testing. These models conceptualised several foundational concepts that had emerged from the Delphi and illustrated the SOPI components, the AE structures, the behavioural flows, and the adaptive approach, as defined by an optimal position (zone) on each of the SOPI deliberate/emergent continuums.
Hence, the models that were tested allowed for the simultaneous refinement and validation of these key concepts, as shown in Figure 6.9. To enable the hypothesis testing, the model elements and their interdependencies were defined in terms of constructs and the significant relationships between those constructs. Nine hypotheses were formulated and tested using statistical factor analysis techniques within a rigorous analysis process. An EFA was initially performed to identify the underlying factors in the data set, followed by a CFA to confirm and refine the measurement model. Finally, SEM was employed to test the relationships between the constructs, which allowed the collected data to confirm or contradict the hypothesised relationships so that an explanation could be provided. In the following sub-sections, the hypothesis testing and results for the Adaptive Zone Model will be discussed, followed by a discussion of the Behavioural Flows Model. The Adaptive Zone hypotheses were tested first because this model explicitly demonstrates the key components and adaptive approach, which are foundational concepts of the AE artefacts.

Figure 6.9: The Concepts and Models Subjected to Hypothesis Testing

6.8.1 Further Refinement and Validation of the Adaptive Zone Model

As mentioned, a total of nine hypotheses were formulated, with hypotheses H1 to H5 testing the Adaptive Zone Model propositions. The Adaptive Zone hypotheses H1 to H4 theorise that an enterprise will be adaptive when each of the SOPI components is optimised in terms of deliberate and emergent, resulting in a balanced strategy, flexible organisation, loosely coupled processes, and dynamic information. Hypothesis 5 theorised that once the optimised components are interwoven into a structure, this structure enables an enterprise to adapt. Therefore, the propositions were expressed as two hypothesis models (refer Figure 6.10). The models consisted of 6 constructs, the theorised causal relationships between them, and the variables used to measure them, with hypotheses H1 to H4 forming the AZM Model and H5 as the basis of the BFM Model.

Figure 6.10: The Adaptive Zone Hypothesis Models' Constructs and Variables

6.8.1.1 The Hypothesis Test of the Adaptive Zone Model

The hypothesis testing began with an examination of the Adaptive Zone constructs and measurement variables using the data collected from the 504 survey responses. These responses were assessed using the factor analysis techniques of EFA, followed by CFA, and finally SEM.

6.8.1.2 The EFA

The result of the EFA assessment of the Adaptive Zone hypothesis models indicated a 4 factor solution based on the data and the underlying relationships between the 24 measured variables (refer sections 5.5.2.2 and 5.5.2.3). This result differed from the a priori hypothesis models, which comprised 6 factors (constructs) (refer Figure 6.10). The 4 factors identified by the EFA were named according to the variables that loaded onto each of them and are shown in Figure 6.11 as SOP, ISS (Interwoven Structure and Systems), AE2 and AE1. The factor SOP (Strategy, Organisation and Process) was measured by 7 variables originating from the hypothesised constructs Balanced Strategy (5), Flexible Organisation (1) and Loosely Coupled Process (1). The factor ISS was measured by 6 variables: 3 from Interwoven Structure, 2 from Dynamic Information, and 1 from Flexible Organisation.
Lastly, the factors AE2 and AE1 were measured by the extrinsic adaptive behaviour and intrinsic adaptive behaviour measurement variables respectively (refer Figure 6.11), which originated from the hypothesised AE construct.

Figure 6.11: The Redefined Constructs and Variables of the Adaptive Zone Hypothesis Models

6.8.1.3 The CFA

Next, a CFA was performed to evaluate the measurement models by testing how well the measured variables actually represented the 4 factors (from now on referred to as latent variables for both the CFA and SEM). From an exploratory perspective the CFA provided insight into which measurement variables were better than others at reflecting the dimensions of the theorised construct (Furr, 2011). There were 4 latent variables: SOP, ISS, AE2 and AE1. Apart from SOP, the measurement variables for each had high loading values, while 6 of the 7 (86%) loadings were high for the SOP latent variable (refer sections 5.6.1.1 to 5.6.1.4). This overall result indicated that the constructs, as represented by the latent variables, were consistent with the researcher's understanding of their nature, as informed by the Delphi study. In addition, the measurement model fit indices demonstrated goodness of fit, and the model's composite reliability and construct validity were both satisfactory (refer section 5.6.1.6).

6.8.1.4 The SEM

Lastly, two SEMs specifying relationships among the latent variables (path structure), and measuring the strength of the relationships using parameter estimates, were examined (refer sections 5.7.1.1 and 5.7.1.2). Evaluation of the results showed that the fit indices demonstrated goodness of fit. The standardised estimates between the variables were significant and ranged from moderately strong (0.47) to very strong (0.85), indicating that the relationships had significant causal effects (Table 5.16). These results suggest that the specified relationships were not only plausible but also theoretically justifiable. In addition, an SEM was respecified to assess the mediation effect of ISS, which was the mediator variable. This mediated model specified that the effect of SOP was mediated by ISS, which transmitted some of the causal effects of SOP onto AE2 and AE1. Like the unmediated model, the SEM results indicated a good model fit and, in addition, demonstrated that it was a partially mediated model. This means that SOP had both a direct and an indirect causal effect on AE2 and AE1 when ISS mediated the path between SOP and AE2 and AE1 (refer Table 5.16).

6.8.1.5 Results for the Adaptive Zone Model Hypothesis Test

Interpreting the overall SEM results, it can be concluded that the Adaptive Zone hypotheses were supported, as shown in Table 6.10. Initially 5 hypotheses were formulated to test the propositions of the Adaptive Zone Model (refer section 5.9.1 and Table 5.21). However, the results of the EFA and CFA indicated that the dataset contained only 4 factors (constructs), so the 5 hypotheses were amalgamated into 2 overarching hypotheses (refer section 5.9.1 and Table 5.22). Given that these overarching hypotheses were supported by the results of the factor analysis (refer Table 6.10), it can be concluded that the hypothesis testing fulfilled its purpose and, as a consequence, the Adaptive Zone Model was further refined and validated. In the following sub-sections the hypothesis test findings for the Adaptive Zone Model will be discussed.
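Before turning to those findings, the mediated structural model just described can be made concrete with a short sketch. The sketch below is illustrative only: the indicator names are hypothetical, semopy is simply one open-source package capable of specifying a partially mediated model of this kind, and the sketch is not the tooling or specification used in this research.

```python
import pandas as pd
from semopy import Model, calc_stats

# Hypothetical indicator names; the survey's actual items are defined in Chapter 5.
MODEL_SPEC = """
# measurement model: latent constructs and their observed indicators
SOP =~ sop1 + sop2 + sop3
ISS =~ iss1 + iss2 + iss3
AE1 =~ ae1_1 + ae1_2 + ae1_3
# structural model: SOP affects AE1 directly and indirectly via the mediator ISS
ISS ~ SOP
AE1 ~ SOP + ISS
"""

data = pd.read_csv("survey_responses.csv")  # assumed file of item-level responses
model = Model(MODEL_SPEC)
model.fit(data)
print(model.inspect())    # parameter estimates for the specified loadings and paths
print(calc_stats(model))  # fit indices such as CFI and RMSEA
```

In a specification of this form, a significant SOP to AE1 path alongside significant SOP to ISS and ISS to AE1 paths corresponds to the partial mediation interpretation described above.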
In particular, the SEM results will be evaluated and the insights gained from the evaluation will be described. Several interesting insights emerged from the results of the factor analysis and hypothesis testing.

Table 6.10: Results of the Adaptive Zone and Behavioural Flows Hypothesis Tests

Overarching Hypotheses: Adaptive Zone Hypothesis Model
Overarching Hypothesis | Hypotheses | Hypothesis Test Result
H1A: A balanced strategy, flexible organisation, and loosely coupled processes enable an enterprise to be adaptive. | H1, H2, and H3 | Supported
H5A: An interwoven structure supported by dynamic information enables an enterprise to be adaptive. | H4 and H5 | Supported

However, in regard to the hypothesised constructs, three insights were of particular importance.

6.8.1.6 Three Integrated Key Components

The first important insight was in regard to the key components of an AE, which were hypothesised as the four distinct constructs of strategy, organisation, process, and IS, with each being measured by a set of relevant variables. The variables represented the researcher's understanding of the SOPI components that was developed from the literature review and the findings of the Delphi study. However, the results of the factor analysis and hypothesis testing suggest that understanding of the key components could be viewed through a different lens and be further refined. Rather than SOPI being viewed as four distinct components that enable an enterprise to be adaptive, the analysis indicated an aggregated construct that is strongly influenced by strategy. This redefined construct was named SOP, and its dimensions were reflected by 7 of the original 12 SOPI variables. Five (71%) of these 7 variables were from the hypothesised Balanced Strategy construct, while 1 (14.3%) was from the hypothesised Flexible Organisation construct and 1 (14.3%) was from the hypothesised Loosely Coupled Process construct. This result emphasises that a strategy-driven orientation for an AE is critically important. It indicates that the key components should be considered as an integrated whole instead of each individual component having a significant influence on an AE. This finding is consistent with existing published studies that have found that an AE and its design are, first and foremost, guided by its strategic direction and operating context (Stanford, 2007; Wittle & Myrick, 2016). While this is an important insight, it does not mean that the other components are not important, as both structure and process (Reeves et al., 2015) were reflected as dimensions in the new SOP construct, albeit to a lesser degree than strategy.

6.8.1.7 Information Systems Component

The second important insight to emerge was about the fourth hypothesised key component, Dynamic Information. Instead of this component being a part of the new SOP construct (Strategy, Organisation, and Process), it was subsumed by the Interwoven Structure construct, which was renamed ISS (Interwoven Structure and Systems) to reflect its nature as defined by the measurement variables. The Delphi results suggested that the hypothesised Interwoven Structure construct was defined by 4 measurement variables that reflect an interwoven structure made up of the key components (SOPI), with strategy driving the design of business processes and organisational structures, all of which are supported by IS (Stanford, 2007).
However, after factor analysis the redefined Interwoven Structure (ISS) provided a different perspective and indicated that the ISS construct consisted of 6 rather than 4 variables. Three of the variables (50%) originally belonged to the Interwoven Structure construct, with 2 of the variables (33.3%) from the Dynamic Information construct, and 1 variable (16.7%) originating from the Flexible Organisation construct. This result suggests that IS can contribute to an AE in a way that is not explicitly addressed in the literature to date. It is plausible to suggest that IS is of additional importance when it interweaves the key components of strategy, organisation, and process into a cohesive structure. This is an interesting and important aspect that extends the body of knowledge about AE, as previous researchers seem to have had a more limited view. For example, Tallon (2008) views an AE from the perspective of IT capability and attempts to describe AE agility in terms of strategy, organisation, process, and IT, but does not acknowledge that IS can enhance these elements by weaving them into an integrated, multi-dimensional organisational system. Weichhart et al. (2016) take a purely BPM, operational level, and technology perspective when positing the notion of IS having a lead role in bringing dispersed objects and activities together to facilitate the development of a sensing, smart enterprise. The seminal literature on an AE as an organisational, cybernetic system (Wiener, 1948; Von Bertalanffy, 1968; Anthony, 1988; Gorry & Scott Morton, 1989; Haeckel, 1995, 2016) is drawn on by Scott Morton (1991) and Scott and Davis (2015), who take a systems theory perspective (Von Bertalanffy, 1968) and acknowledge the dimension of IS as an element of a larger organisational system. However, this perspective seems to be limited as it does not take a multi-role view of IS. It is the redefined multi-role of IS as an organisational systems enabler, with its ability to support an interwoven structure, that is new and so elevates the key SOPI component of Information to a unique level of importance.

6.8.1.8 Intrinsic and Extrinsic Adaptive Behaviours

The third important insight to emerge from the factor analysis and hypothesis testing was that the hypothesised Adaptive Enterprise construct is not just one construct but could be regarded as two separate constructs. Hence, these constructs were named AE2 and AE1. The hypothesised Adaptive Enterprise construct was originally measured by 9 variables. However, the two new AE constructs, AE2 and AE1, were measured by a combined total of 11 variables, with AE2 being measured by 5 variables and AE1 by 6 variables. Examination of the AE2 construct revealed that 3 of the 5 variables (60%) measured what could be described as dimensions of extrinsic adaptive behaviour, because they originated from the hypothesised Adaptive Enterprise construct. The remaining 2 AE2 variables (40%) were from the hypothesised Loosely Coupled Process construct and the Dynamic Information construct. The AE1 construct was measured by 6 variables that could be described as intrinsic adaptive behaviours. Five of these intrinsic behaviour measurement variables (83%) originated from the hypothesised Adaptive Enterprise construct, while the remaining variable was from the hypothesised Interwoven Structure construct. The finding that there are two Adaptive Enterprise constructs, AE2 and AE1, suggests that there are two fundamental aspects to how an AE adapts.
An AE adapts to both its internal and external environments and each environment is equally important. Therefore, an AE simultaneously utilises intrinsic adaptive behaviours for the internal environment and extrinsic adaptive behaviours for the external environment. There is some support in the AE literature for this notion but the support tends to be implicit rather than explicitly stated. Whether an internal or external adaptive orientation is the focus seems to depend on the research domain. For example, in the business/operations domain AE research studies seem to have an external orientation where the focus is on how an AE adapts to its external environment. Much of Haeckel's (2016) seminal work and that of other authors (Porter, 2002; Brown & Eisenhardt, 1997; Eisenhardt & Brown, 1998) treats the source of change as being external and so the emphasis is on adapting to the external environment. As Slywotzky (Foreword in Haeckel, 2016, p.xiii) explains, "…sense-and-respond forces companies to look outside themselves to their customers …". Furthermore, for successful participation in adaptive business networks/supply chains, possessing extrinsic adaptive behaviours is considered a strategic imperative (Heinrich & Betts, 2003; Dong, Peko & Sundaram, 2012; Dong, Peko & Sundaram, 2014). Conversely, the AE literature about managing an organisation tends to have an internal/intrinsic adaptive behaviour perspective (Mintzberg et al., 2005; Stanford, 2007; Tallon, 2008; Weichhart et al., 2016). There is a paucity of literature with an explicit internal and external environment focus that identifies the intrinsic and extrinsic behaviours that enable an enterprise to adapt simultaneously to its internal and external environments.

Further Refinement and Validation of the Behavioural Flows Model

A total of four hypotheses were formulated to test the Behavioural Flows Model (refer sections 5.1.2 and 5.1.2). Hypotheses H6 – H9 tested the propositions depicted in the two Behavioural Flows hypothesis models, which consisted of 5 constructs and their causal relationships, as shown in Figure 6.12. Hypotheses H6 – H8 tested the relationship between each of the three optimised behavioural flows constructs and the AE construct. The proposition being tested was that optimised (in terms of deliberate/emergent) information, learning, and control flows enable an enterprise to adapt. In addition, H9 examined the relationship between the Interwoven Behaviour and Adaptive Enterprise constructs. It theorised that when the flows between these constructs are interwoven, this interwoven behaviour enables an enterprise to be adaptive. The Behavioural Flows hypothesis models are depicted in Figure 6.12 and show the 5 behavioural flow constructs and their measurement variables.

6.8.2.1 The Hypothesis Test of the Behavioural Flows Model

The hypothesis testing of the Behavioural Flows Model used the same factor analysis techniques and dataset that were used to test the Adaptive Zone Model. Therefore, the hypothesised constructs and measurement variables of the Behavioural Flows Model were evaluated using the techniques of EFA, CFA and SEM.

Figure 6.12: The Behavioural Flows Hypothesis Models' Constructs and Variables

6.8.2.2 The EFA

The result of the EFA assessment for the Behavioural Flows hypothesised models indicated a 4 factor solution even though the a priori criterion was 5 factors (refer Figure 6.12).
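To make the factor-retention step concrete, the following minimal Python sketch illustrates one common way an EFA can suggest a different number of factors than expected: applying the Kaiser criterion to the eigenvalues of the item correlation matrix. The data, sample size, and variable names below are hypothetical placeholders rather than the thesis dataset, and the sketch is not the analysis that was actually performed; it only shows the general mechanics under those assumptions.

```python
import numpy as np

# Illustrative stand-in for 25 Likert-type survey items from 300 respondents.
# (Hypothetical random data; it will not reproduce the thesis' 4-factor result.)
rng = np.random.default_rng(0)
responses = rng.integers(1, 8, size=(300, 25)).astype(float)

# Correlation matrix of the measured variables.
corr = np.corrcoef(responses, rowvar=False)

# Eigenvalues of the correlation matrix, sorted in descending order.
eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]

# Kaiser criterion: retain only factors whose eigenvalue exceeds 1.
n_factors = int(np.sum(eigenvalues > 1.0))
print(f"Suggested number of factors: {n_factors}")
print("Leading eigenvalues:", np.round(eigenvalues[:6], 2))
```

In practice such a rule of thumb would be complemented by scree plots, parallel analysis, rotated loadings, and interpretability checks before a factor solution is accepted.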
This result was based on the data and the underlying relationships between the 25 measured variables, which included the variables of the AE construct (refer sections 5.5.2.4 and 5.5.2.5 and Table 5.7). The 4 factors identified by the EFA were named to reflect the variables that loaded onto each and are shown in Figure 6.13 as SILR (sense, interpret, learn, respond), DADL (data analysis, data latency), AE2B and AE3. The factor SILR was measured by 4 variables from the hypothesised constructs Blended Information Flows (1 variable), Blended Learning Flows (2 variables), and Blended Control Flows (1 variable). The DADL factor was measured by 6 variables from the hypothesised constructs Blended Information Flows (2 variables), Blended Learning Flows (2 variables) and Blended Control Flows (2 variables). AE2B was measured by 3 extrinsic adaptive behaviour variables from the hypothesised construct Adaptive Enterprise. Lastly, AE3 was measured by 12 variables from the hypothesised constructs: Blended Information Flows (1 variable), Blended Learning Flows (2 variables), Blended Control Flows (1 variable), Interwoven Behaviour (3 variables) and the Adaptive Enterprise intrinsic adaptive behaviour variables (5 variables).

Figure 6.13: The Behavioural Flows Hypothesis Models' Redefined Constructs and Variables

6.8.2.3 The CFA

Following the EFA, a CFA was performed to evaluate the measurement models by testing how well the measured variables represented the redefined constructs (refer Table 5.12). In addition, the measurement model fit indices indicated goodness of fit, and the checks for composite reliability and construct validity were both positive (refer section 5.6.1.9). Hence, these results suggest that the understanding about the nature of the constructs, gleaned from the literature and Delphi study, is consistent with the constructs as defined by the measured variables.

6.8.2.4 The SEM

Finally, the results of an SEM that specified the relationships among the latent variables, and measured the strength of those relationships using the parameter estimates, were assessed (refer section 5.7.2). This assessment showed that the fit indices were good. The standardised estimates between the variables were significant and ranged from good (0.35) to moderately strong (0.51), which indicated that the relationships had significant effects and, from a theoretical perspective, were acceptable.

6.8.2.5 Results for the Behavioural Flows Model Hypothesis Test

In order to test the Behavioural Flows Model, 4 hypotheses were formulated (H6, H7, H8, H9) based on 5 constructs and the relationships between those constructs. The EFA indicated that just 4 factors (constructs) best characterised the underlying relationships in the data. So, in order to test the 4 hypotheses, H6, H7 and H8 were combined to become the one overarching hypothesis H6A, as shown in Table 6.11. The table shows that this overarching hypothesis was supported but, given the results of the factor analysis, the remaining Behavioural Flows hypothesis H9 was unable to be tested. However, the CFA and SEM indirectly demonstrated that interwoven behaviour does have a positive influence on an AE (refer section 5.9.2).
Overarching Hypothesis: Behavioural Flows Hypotheses Model
H6A: Blended Information, Learning, and Control flows enable an enterprise to be adaptive. – Supported
H9: Interwoven behaviour enables an enterprise to be adaptive. – NA
Table 6.11: Results of the Behavioural Flows Hypothesis Tests

The results of the factor analysis hypothesis tests for the Behavioural Flows Model will be discussed in the following sub-sections. In particular, the SEM will be examined and the insights gained from this evaluation will be described. Given the results of the factor analysis and hypothesis testing of the Behavioural Flows Model, several new insights were revealed. Three were of particular significance and directly related to the evaluation and re-specification of the four hypothesised constructs: Blended Information Flows, Blended Learning Flows, Blended Control Flows, and Interwoven Behaviour (refer Figure 6.12).

6.8.2.6 Integrated Behavioural Flows

The first insight gained was that the blended behavioural flows, initially conceptualised as three distinct constructs, could be considered as part of one aggregated construct. This aggregated construct was named SILR and was measured by variables from each of the hypothesised behavioural flows constructs. Therefore, the results of the factor analysis suggested that the understanding of the three behavioural flows could be reframed and further refined by conceptualising them as one interwoven construct. Interestingly, the dimensions of the SILR construct were defined by only 4 of the 14 variables (36%) from the hypothesised Blended Information, Learning, and Control flows. Of these 4 variables, 1 originated from each of the Blended Information Flows and Blended Control Flows constructs while 2 variables were from the Blended Learning Flows construct (refer Figure 6.13). This result highlights the integrated nature of the AE blended behavioural flows, as well as the cybernetic systems element, as the SILR construct's variables measured sense-interpret-respond dimensions. However, the variables measured only sub-system integration at the departmental level, which corresponds with the findings of more recent research studies. Although having integrated behavioural flows was suggested in the seminal literature (Argyris & Schon, 1978; Nielsen, 2003; Heinrich & Betts, 2003; Haeckel, 2016), the more recent studies have found that when there is integrated learning, participative decision making, knowledge management, and IS, the organisation has greater levels of innovativeness and capacity to adapt. In these more recent studies, integration is explained as a balance between the freedom to differentiate structural elements at the sub-system level while still maintaining a high level of integration throughout the organisation (Hurley & Hult, 1998; Tosey et al., 2011; Jones, 2013). This perspective supports the research findings that validate the structural elements of an AE as proposed in this study (refer section 6.3).

6.8.2.7 Analysis and Decision Latency

The second key finding was the identification of a new construct, named DADL (decision analysis, decision latency) to reflect its nature as defined by its measured variables. The construct was measured by 6 variables, of which 2 (33%) originated from each of the three hypothesised behavioural flows. These variables measured the dimensions of decision making at both the departmental (sub-system) and enterprise level. This result indicates that decision making behaviours are of primary importance to an AE and could be considered as another dimension that defines the behavioural flows (Daft & Weick, 1984).
The result also implies that AE decision making is embedded in the concept of decision latency supported by timely information. Hackathorn (2004) posits that "real-time" must be understood in terms of "real value" and explains that highly responsive organisations not only respond with a sense of urgency, they also respond in an intelligent way that is enabled by real-time decision support systems.

6.8.2.8 Multi-Dimensional Intrinsic Adaptive Behaviours

Another important insight that emerged after the factor analysis and hypothesis testing of the Behavioural Flows Model is the identification of the multi-dimensional Adaptive Enterprise construct AE3. This construct was an amalgamation of the hypothesised Adaptive Enterprise and Interwoven Behaviour constructs. The Delphi results suggested that the hypothesised Interwoven Behaviour construct was defined by 6 variables that measured the integrated dimensions of sensing, understanding, learning, and responding at each organisational level, and in the organisation as a whole (Anthony, 1988). However, a different perspective was gained after the factor analysis as it indicated that interwoven behaviours were not a unique construct but were a dimension of a greater intrinsic adaptive behaviour construct, namely AE3. The dimensions of the AE3 construct were measured by 12 variables, 4 of which (33%) originated from the hypothesised behavioural flows constructs and 3 (25%) from the hypothesised Interwoven Behaviour construct (refer Figure 6.12 and Figure 6.13). The remaining 5 variables, or 42%, were the same measured variables as those of the Adaptive Zone intrinsic behaviour construct AE1 (refer section 6.8.1.2). This result suggests that the intrinsic behaviours of an AE are both multi-dimensional and interwoven, featuring elements of learning, information, and control behaviours, and it is these aspects, combined, that enable an enterprise to adapt. This discovery reflects the seminal theories on which this research is based, namely Anthony's (1965) organisational management framework, Mintzberg's deliberate and emergent strategy (Mintzberg & Waters, 1985), Haeckel's (2016) sense-and-respond enterprise, and Wiener's (1948) and Von Bertalanffy's (1968) cybernetic systems theory (refer section 6.1).

Additional Thoughts and Perspectives

The discussion in this sub-section reflects on an overarching insight that emerged from the research: the knowledge that an AE is an adaptive system that can transform through decision making supported by the key components of SOPI and the behavioural flows. Decision making (DM) is fundamental to an AE because it is timely, informed DM that enables an enterprise to adapt. There is DM at every point of the Adaptive Enterprise Transformation Cycle. In addition, it is DM supported by behavioural flows, consisting of information, learning and control flows, that allows the organisation to behave as a sense-and-respond enterprise (Haeckel, 2016). Specifically, the information flows capture data and information which, once captured, are interpreted so that their meaning is understood. Facilitated by the learning flows, learning then takes place. This learning leads to decisions being made and acted upon. The decisions are then encapsulated in the control that flows throughout the enterprise.
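This sense-interpret-learn-respond cycle can be pictured as a simple control loop. The sketch below is a conceptual illustration only, not an artefact of the thesis: all class, function, and signal names are hypothetical placeholders standing in for the information, learning, and control flows just described.

```python
from dataclasses import dataclass, field

@dataclass
class AdaptiveLoop:
    """Toy illustration of the information -> learning -> control cycle."""
    knowledge: dict = field(default_factory=dict)  # accumulated organisational learning

    def sense(self, environment: dict) -> dict:
        # Information flow: capture raw signals from the internal/external environment.
        return {name: value for name, value in environment.items() if value is not None}

    def interpret(self, signals: dict) -> dict:
        # Information flow: attach meaning to the captured signals.
        return {name: ("rising" if value > 0 else "falling") for name, value in signals.items()}

    def learn(self, interpretation: dict) -> None:
        # Learning flow: update what the enterprise knows before deciding.
        self.knowledge.update(interpretation)

    def respond(self) -> dict:
        # Control flow: decisions propagated back throughout the enterprise.
        return {name: ("expand" if trend == "rising" else "adjust")
                for name, trend in self.knowledge.items()}

    def cycle(self, environment: dict) -> dict:
        signals = self.sense(environment)
        self.learn(self.interpret(signals))
        return self.respond()

# Example run with hypothetical demand and supply signals.
loop = AdaptiveLoop()
print(loop.cycle({"customer_demand": 3, "supplier_lead_time": -1}))
```

Each method corresponds to one of the behavioural flows; in a real enterprise these steps are of course organisational processes and decisions rather than code.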
The presence of blended behavioural flows is important for an AE, but it is more important that the flows operate in a way that enables the enterprise to have agility when adapting. Enterprise agility depends on "real-time… real-value" DM (Hackathorn, 2004, p.4) that comes about by reducing three latencies, that is, the time taken in data latency, analysis latency, and decision latency. Hackathorn (2004) argues that reducing the time it takes (the latency) to perform the activities of capturing data, analysing the data, and then making a decision has a positive causal effect on enterprise agility and overall effectiveness. Through the findings of the research, the proposed constructs and their dimensions were reframed and redefined, which led to the discovery of the SILR and DADL constructs and several other important insights that illuminated different perspectives from those informed by the Delphi study. The insights that emerged are supported by the seminal theories that informed this research. Examination of these insights reveals that the SILR and DADL constructs support the premise that an AE is a sense and respond enterprise where DM occurs at every level of the organisational cybernetic system (Wiener, 1948; Anthony, 1965; Von Bertalanffy, 1968; Haeckel, 1995, 2004, 2016). This cybernetic system is informed by the behavioural flows. These blended information, learning, and control flows, and the presence of the organisational systems structure, allow the enterprise to operationalise the sense and respond enterprise model (Haeckel, 2016) that is supported by timely and effective DM through the reduction of the three latencies (Hackathorn, 2004). The refined SILR and DADL constructs not only illuminated the crucial role of DM, but the measured dimensions of the constructs also encapsulated Simon's (1960) definitive Intelligence-Design-Choice (IDC) DM model (Langley et al., 1995). IDC is a three-phase sequence, and each of those phases is represented by one of the three behavioural flows. Specifically, the intelligence phase of the IDC model exemplifies the information flows, the design phase demonstrates the learning flows, and the choice phase is denoted by the control flows. This, and the other theories discussed, suggest that the results from the explanatory phase, where several important insights were highlighted, validate the concepts and models proposed in this research.

Summary

This chapter discussed the research processes and research results, with particular attention being paid to the development of the research artefacts. The early sections of the chapter focused on the key findings from the Delphi study. The five AE models, including the enablers, disablers, and characteristics that were created, validated, and refined using the Delphi method, were described in sections 6.1 to 6.7. The latter parts of the chapter focused on the findings from the survey. Sections 6.8 to 6.10 discussed the hypothesis tests and the test results that were used to further refine and validate the AE concepts and models. In section 6.11, the findings from the explanatory phase of the research, including additional thoughts and perspectives about AE, were summarised. The following chapter presents the conclusion to the overall study about AE and the thesis as a whole.

7 Conclusion

"Without order nothing can exist – without chaos nothing can evolve." Oscar Wilde

This chapter provides an overview of the research study on AE including the resultant key findings and artefacts.
The structure of the chapter is outlined in Figure 7.1 and is as follows. In section 7.1 a brief overview and discussion of the research background is provided, with an explanation of the current situation that includes the research lacuna, research questions, and ensuing research objectives. In section 7.2 the key research artefacts, created to fulfil the research objectives, are described, and then the following section outlines the fulfilled research requirements. The contributions of the study, and its implications for academia and industry, are presented in section 7.4. Finally, in sections 7.5 and 7.6 the limitations of the research and possible future research directions that could be of interest to both academics and practitioners are presented.

Figure 7.1: Structure of the Conclusion Chapter

Overview of the Research

Due to the gap in the available literature, this research was motivated by the urgent need to advance understanding of AE from a practical and theoretical perspective. The aim of the study was to address gaps in the knowledge about AE by identifying the key components, structures, behaviours, enablers, disablers, and characteristics of an AE so that concepts and models could be developed to inform and guide an enterprise so that it is able to adapt. A synopsis of the research background, including the research questions and research objectives, is included in the next sub-section.

Enterprises wanting to survive in today's volatile markets need to be able to adapt quickly to their rapidly changing environments. More recently, there has been an exponentially increasing rate of change, with a corresponding increase in the failure of enterprises, which has highlighted the need for an enterprise to be able to adapt. Therefore, it was identified that there was an urgent need for coherent exploration, explanation and understanding of AE and of how enterprises can transform in order to not only survive but to prosper. The unpredictability of today's business environment is a catalyst for change in an enterprise as it tries to respond to the constant emergence of new problems and opportunities. An enterprise needs to alter its existing practices quickly and meaningfully, which requires changes to business models as well as to the BP that support those models. A way to view this change is that it impacts strategic direction and so requires changes to processes and organisational structures and, as a consequence, affects the IS that support that strategy and those processes and structures. Hence, an enterprise needs to be flexible and have the ability to adapt, almost in real time, by altering routines and practices to be able to respond to rapidly changing conditions. Most enterprises manage their strategy, organisation, processes and information/IS in either a deliberate or emergent way rather than adopting a cohesive, adaptive approach. The failure to adopt an adaptive approach can result in serious problems if an organisation is unable to respond appropriately to changing environments. A review of the academic and industry literature relating to strategy, organisational structure, BP and IS revealed that some aspects of deliberate, emergent and adaptive approaches are well researched in some of the research domains while other aspects are not well developed in any domain (refer Chapter 2). Overall, there appears to be a paucity of coherent theory to guide and support an enterprise in its AE evolution.
This research lacuna, identified after a review of the literature, is depicted in Figure 7.2. The scope of the research lacuna revealed a range of issues, from a lack of strategic, procedural, and technical support to conceptual and operational ambiguity, and, as a consequence, motivated the two overarching research questions: What is an Adaptive Enterprise? and How does an enterprise adapt?

Figure 7.2: The Research Lacuna

Accordingly, the research questions helped to define two primary research objectives. These objectives were to conduct exploratory research in an attempt to discover insights about the phenomenon of an AE, followed by explanatory research to validate those insights. To realise the overarching objectives, the research objectives were delineated into six detailed objectives. They are outlined in section 3.1 and can be summarised as: to identify and develop concepts of an AE; to identify the key components of an AE; to identify, design and develop structural, behavioural, and transformational concepts and models of an AE; and to validate the developed AE concepts and models.

A multi-methodological approach, adapted for this study, was used to guide the research (refer section 3.3.2). The selected approach has been the cornerstone of previous research in disciplines such as management, IS, and engineering. Essentially, it consists of three interconnected phases: observation, building theory, and validating the theory through appropriate methodologies, then observing the validated theories and refining them. Application of this multi-methodological approach resulted in the creation of several research artefacts that, to varying degrees, addressed each of the research objectives. Multiple iterations of the literature review and the Delphi study enabled AE factors and their inter-relationships to be identified, and these factors and their relationships were synthesised to create proposed concepts and models. The concepts and models were then evaluated and validated by a panel of experts. Next, hypothesis testing of several of the Delphi outputs permitted further refinement and validation through the use of empirical data. This meant that the research artefacts were validated through expert evaluation, peer review, hypothesis testing, and dissemination, which ultimately addressed the research problems and fulfilled the research objectives. The research artefacts not only provide strategic, procedural, and operational support for the transformation of an AE, they also provide a framework for the design and implementation of adaptive IS. In the following sections the five key research artefacts are summarised.

Key Research Artefacts

Five conceptual models that help to describe the components, structures, behaviours, and processes of an AE were created, proposed, refined and validated through the Delphi study and online survey. The five models are: the Key Components Model, the Structural Elements Model, the Behavioural Flows Model, the Zone Model, and the Adaptive Enterprise Transformation Cycle, which include the Enablers, Disablers, and Characteristics of an AE. The first overarching research question, "What is an Adaptive Enterprise?", was mostly answered by the first three models (Key Components Model, Structural Elements Model, Behavioural Flows Model), while the second overarching research question, "How does an enterprise adapt?", was mainly answered by the Zone Model and the Adaptive Enterprise Transformation Cycle.
The details of these research artefacts, and the process of their development, are presented in Chapters 4, 5 and 6. An overview of the research artefacts' significant features is provided in the following sub-sections.

Key Components Model

The first model to be created was the Key Components Model, which was initially constructed from the literature. It was subsequently evaluated, refined, and validated through the Delphi study and the survey that followed. This model was foundational to the research and was explicitly represented in the other AE models. In this model the four key components of an AE are identified as Strategy, Organisation, Process, and Information (SOPI), with the dimensions of each component being defined by the AE factors that emerged from the first round of the Delphi study. Each of the components was described as the approach to planning, developing, and managing that particular component. An example of this is the planning, developing, and then managing of organisational aspects of the enterprise such as leadership, culture, people, and organisational structure. Similarly, Information was described as the approach to planning, developing, and managing the data, information, knowledge, and systems of the enterprise. As mentioned, the Key Components Model was the first in the series of AE models that were built incrementally, and it became the cornerstone of the other four models that will now be described.

Structural Elements Model

The Structural Elements Model incorporates theory and concepts from the literature as well as insights from the Delphi. The purpose of the model is to depict the important structures of an AE, so it draws on a number of seminal management frameworks and includes the key components of an AE. It illustrates the strategic, tactical, and operational levels of an organisation and the notion that each organisational level is a microcosm that has interwoven key components of Strategy, Organisation, Process, and Information (SOPI). The overall proposition of the model is that organisational levels with their key components (SOPI) form the Structural Elements of an AE. This proposition was evaluated and validated by the experts on the Delphi panel through reaching consensus in their agreement with the proposed model. This unique combination of organisational level and SOPI components as a microcosm is an important contribution to the existing body of knowledge and extends the literature on AE.

Behavioural Flows Model

Building on the Structural Elements Model, the Behavioural Flows Model illustrates three principal flows of an AE, which are information flows, learning flows, and control flows. In addition, the model shows that these flows have both a top-down (deliberate) and bottom-up (emergent) behaviour which is interwoven at both the macro and micro level of the enterprise. The Behavioural Flows Model demonstrates the way an AE could adapt and achieve its vision. It conceptualises an AE as an adaptive system that senses, interprets, and responds to its environments through decision making that is facilitated by the enterprise's structures and behaviours, which enable it to be adaptive. The Behavioural Flows Model was initially evaluated and validated by the Delphi expert panel through consensus in terms of agreement with the concepts that it represents.
The experts' evaluation also identified the importance of enterprise context and illuminated the notion of an AE as an adaptive, cybernetic system that continually transforms and renews. The model was further validated and refined through hypothesis testing facilitated by the AE survey and the use of statistical factor analysis (refer section 5.5), which confirmed the presence of these interdependencies, albeit in a refined form. The results of the hypothesis testing verified that the three interwoven behavioural flows of an AE enable an enterprise to be adaptive, and this verification makes a valuable contribution to the existing literature about AE. The concepts illustrated in the Behavioural Flows Model, and in particular the notion of a deliberate/emergent/adaptive approach, are synthesised further in the following Adaptive Zone Model.

Adaptive Zone Model

The Adaptive Zone Model suggests that in order to be truly adaptive an AE needs to be balanced, with regard to the deliberate-emergent approach, on the key components of Strategy, Organisation, Process, and Information (SOPI). The model conceptualises this balanced adaptive approach as an adaptive zone. The model proposition is that an adaptive zone exists, albeit in a context relevant to the enterprise, toward which the enterprise is propelled as it transforms, and in which it then resides as an AE. The Adaptive Zone Model was grounded in the seminal literature that advocates organisations 'compete on the edge', where the key performance driver is the ability to continually change over time (Eisenhardt et al., 1998; Scheer, 2007). The Adaptive Zone Model was first evaluated and validated through the Delphi study. The study also yielded data in terms of expert insights that contributed to the model's further refinement. As with the other AE models, the expert panel emphasised the critical dimension of context, which implies that the adaptive zone is context dependent and contingent on the enterprise's operating environments. The model was further refined and validated using hypothesis testing facilitated by the AE survey and the statistical factor analysis that followed (refer section 5.5). The results of the Delphi, and the hypothesis testing, suggest that an adaptive zone exists and that within this zone each of the SOPI components has a unique position on its associated deliberate/emergent continuum. To date, this has not been discussed in the literature, so it provides insight about an AE and, therefore, makes an important contribution to the body of knowledge and overall understanding about AE.

Adaptive Enterprise Transformation Cycle

The Adaptive Enterprise Transformation Cycle amalgamates the four aforementioned AE models with the Enablers, Disablers, and Characteristics of an AE to create a cohesive transformation cycle. This cycle forms a dynamic model and proposes a cyclical approach that facilitates an enterprise in transforming into an AE. It also illustrates the interdependencies among the various model elements and the inference that, for most, their positive correlation leads to greater levels of enterprise adaptiveness. It proposes a structured, iterative approach by which enablers help an enterprise to transform vision into action through the interweaving of the deliberate and emergent, while disablers hinder an enterprise from achieving this. As an enterprise becomes more adaptive it begins to exhibit characteristics that allow it to progress to, and remain in, the adaptive zone as an AE.
The presence of the processes, structures, and relationships that enable this transformation was validated by expert evaluation. The experts also agreed with the structures and relationships illustrated in the model and that the proposed transformation cycle could enable an enterprise to be adaptive. Furthermore, the expert evaluations elucidated the importance of specific prerequisites, structures and methods for an enterprise to evolve and transform into an AE.

Each of the five aforementioned models, as key research artefacts, is instrumental in addressing the knowledge gaps outlined in section 7.1. A more detailed account of how they achieve this is explained in the next section.

Fulfilled Research Requirements

To address the practical issues and research problems identified through the literature review, the objective of this research was to improve understanding about AE. Achievement of this objective required the development and validation of holistic, systems-oriented insights, concepts, and models. The research objective and requirements were fulfilled by:

1. Addressing the lack of a coherent understanding of AE components, structures, processes, and practices.
The identification and description of the four key components, together with their importance ratings, was the first step in developing a coherent understanding of AE. Each of the AE components, defined in terms of factors, was identified in the literature and subsequently validated through the Delphi study. The use of factors facilitated the clarification, measurement, and empirical testing of the various dimensions of each component. Using a process of identifying, evaluating, and validating each of the AE factors and their relationships allowed important insights to emerge. These insights, together with the factors and their relationships, enabled the AE models to be created. The models conceptualise and explain the structures, processes, and practices that were previously lacking in the literature. This has meant that several research problems and practical issues, primarily related to the predominant narrow, silo-based, unidimensional perspectives that seemed to exist, have now been addressed.

2. Addressing the lack of a coherent understanding of adaptive approaches to the management and control of AE behaviours.
The concepts, models and hypotheses derived from the research were based on the foundational concepts of key components, structures, behaviours, and an adaptive approach. These concepts allow an enterprise to translate vision into adaptive actions, and this translation is conceptualised in the AE models. Specifically, the models illustrate how to manage the SOPI key components in an interwoven and adaptive way through utilising the three behavioural flows of information, learning, and control. The Behavioural Flows Model makes explicit the interdependencies among the components, structures, behaviours, and the adaptive approach. More importantly, the Behavioural Flows Model demonstrates the operationalisation of Haeckel's (1995) 'sense and respond' enterprise model. The model conceptualises and addresses issues associated with the implementation, management, and control of behaviours, which includes their function as sense, interpret and respond mechanisms (Gothelf & Seiden, 2017). Thus, it provides insights into AE.
3. Addressing the lack of a coherent understanding of a systems perspective, including the enabling factors, structures, and processes that support the transformation of an AE.
The insights that emerged from the research can act as a guide for academics and practitioners in the development of executable practices on how to monitor, analyse, and control the transformation of an enterprise to an AE. The transition to an AE is best performed through the structured, cyclical implementation and development of adaptive components, structures and behaviours. The AE Transformation Cycle demonstrates multi-disciplinary best practice for the seamless translation of adaptive strategies into adaptive organisational structures and processes with supporting adaptive IS. It provides an understanding of the approaches, factors, mechanisms, and interactions involved in AE transformation. In addition, the transformation cycle has the potential to act as a platform for the development of inter-disciplinary discussion and research on the key elements, processes and practices from a systems perspective.

The research artefacts were disseminated and evaluated by key stakeholders throughout the research process. Groups of international academics, senior academics, domain experts, industry practitioners, and business people evaluated the research artefacts at the various stages of their development, which provided useful feedback that was used to inform further development of the artefacts and, ultimately, their validation. The research artefacts of concepts, models, hypotheses and methodologies, which addressed the overarching research problems and fulfilled the research objectives in several ways, are presented in Table 7.1 and Table 7.2.

Overarching research problem and research objectives (Sections 1.3, 1.4, 2.7, 3.2): Address a lack of coherent understanding of AE components, structures, processes and practices by: identifying and developing concepts and key components of an AE; identifying, designing and developing structural concepts and models of an AE.
Research method (Sections 3.3, 3.4): Exploratory phase – Literature review: Observation, Theory Building, Validation; Delphi study: Observation, Theory Building, Validation.
Research dissemination (Section 7.4.6): PACIS (Peko & Sundaram, 2010); IADIS Int. Conf. Collaborative Technologies (Peko et al. 2010); IADIS Int. Conf. on Intelligent Systems and Agents (Peko et al. 2010); ViNOrg (Peko et al. 2011).
Research artefacts: Figure 6.3: The Key Components Model; Figure 6.4: The Structural Elements Model.

Overarching research problem and research objectives: Address a lack of coherent understanding of adaptive approaches to the management and control of AE behaviours by: identifying, designing and developing behavioural concepts and models of an AE.
Research method: Exploratory phase – Literature review: Observation, Theory Building, Validation; Delphi study: Observation, Theory Building, Validation.
Research dissemination: ICECES (Peko et al. 2011); ICDS (Peko et al. 2012).
Research artefacts: Appendix A, B, and C: Delphi Study Instruments; Figure 6.5: The Behavioural Flows Model; Table 6.4 – Table 6.7: Importance Ranking of AE Enablers; Table 6.8: Importance Ranking of AE Disablers; Table 6.9: Importance Ranking of AE Characteristics.

Table 7.1: Structural, Behavioural, and Methodological Research Artefacts

Overarching research problem and research objectives (Sections 1.3, 1.4, 2.7, 3.2): Address a lack of coherent understanding of a systems perspective, including the enabling factors, structures and processes which support the transformation of an AE, by: identifying, designing and developing transformational concepts and models of an AE.
Research method (Sections 3.3, 3.4): Exploratory phase – Literature review: Observation, Theory Building, Validation; Delphi study: Observation, Theory Building, Validation.
Research dissemination (Section 7.4.6): ICCASA (Peko et al. 2012); GraSPP (Peko & Sundaram 2013); ICCASA (Peko et al. 2014).
Research artefacts: Figure 6.6: The Adaptive Zone Model; Figure 6.7: The Adaptive Enterprise Transformation Cycle.

Overarching research objective: Validating the developed concepts and models of an AE.
Research method: Exploratory phase – Literature review: Theory Building; Delphi study: Theory Building. Explanatory phase – Literature review: Validation, Observation, Theory Building; Survey: Validation, Observation, Theory Building.
Research artefacts: Table 5.1: AE Hypotheses; Appendix D: Survey Instrument; Figure 5.5: Hypothesis Models of the Adaptive Zone & Construct Measures; Figure 5.6: Hypothesis Models of the Behavioural Flows & Construct Measures; Figure 5.12: The a priori CFA AZM Model; Figure 5.13: The a priori CFA BFM Model; Figure 5.15: SEM AZM Model; Figure 5.16: SEM AZM Model Partially Mediated; Figure 5.17: SEM BFM Model.

Table 7.2: Transformational Concepts, Models, and Methodological Research Artefacts

Research Contributions

The contributions of this research are numerous, with the primary contribution being the advancement of AE theory and practice through the creation of artefacts that conceptualise and describe the important components, structures, and behaviours of an AE. The research outcomes support the practical application and extension of AE theory and provide a platform for future research to be conducted across several disciplines. In particular, this research interweaves strategic management, general management, OM, and IS and is applicable to each of these disciplines. In the following sub-sections, the main contributions of the research are explained, including how the research findings and artefacts can be of value to researchers and practitioners in academia and industry to guide their efforts to transform enterprises into AE.

Academic Research Contributions

The research makes two main contributions to academic theory by providing a better understanding of AE as well as adding value to, and so extending, the current body of knowledge. The first contribution is to several research domains through the advancement of AE theory. The second contribution is the research findings and AE artefacts that can be used by researchers in both academia and industry, as well as providing support for further research in what has been a rather under-researched area. This research is multi-disciplinary as it spans several research domains. It is primarily grounded in Strategic Management, Organisational Management, OM, and IS and, as such, makes a contribution to each of these disciplines by extending the theory on which this research is based.
It also advances general understanding about AE from each of the domain perspectives. In particular, the Strategic Management, Organisational Management, OM, and IS perspectives are explicated in the foundational concept of SOPI, which forms the key components of an AE. These key components of SOPI are reflected in each of the AE models. In regard to specific contributions to the various research domains, the Structural Elements Model contributes to management and systems theory by delineating an enterprise cybernetic system through the suggestion that the combination of dimensions and interdependencies of the SOPI components can be unique to each organisational level. The Behavioural Flows Model contributes to IS theory, as well as to organisational and strategic management theory, through the concepts of the blended behavioural flows of information, learning, and control, and the interweaving of these flows. Another academic contribution is to management and OM through the concepts illustrated by the Zone Model, where several seminal concepts and models are seamlessly integrated and extended to create a research artefact that addresses the requirements of an AE. In particular, the Zone Model extends the seminal works of Mintzberg and Waters (1985) on 'Strategies, Deliberate and Emergent', Eisenhardt and Brown's (1998) 'Competing on the Edge' and Scheer's (2007) concept of the 'Edge of Chaos'.

As well as the aforementioned contributions, this research is at the nexus of academic and practitioner research and, therefore, makes an important contribution to both academic and industry researchers who are involved in research into enterprise development and AE. This work offers an extended, inter-disciplinary understanding of AE, which includes research artefacts that can guide and support the transformation of an enterprise into an AE. These outcomes should be of interest and value to the different groups involved in research in this under-researched area. Furthermore, the research artefacts can underpin both theoretical and practical investigations and are therefore able to be used for different methods of inquiry such as experiments, ethnographic studies, and action research, to mention just a few. The following sub-sections focus on the contributions of this study to industry and AE practice.

Enterprise Designers and Management Consultants

A primary contribution of this research is that it can be instrumental in aiding an organisation to develop and ultimately transform into an AE. The architecture of an AE is a multi-disciplinary endeavour involving both the internal and external stakeholders of the organisation. It is not uncommon for enterprise architects, whether they are internal or external, to have a myopic, domain-centric view of enterprise design. This limited view has, to some extent, been perpetuated by the lack of a multi-disciplinary perspective on enterprise design and subsequent transformation into an AE. However, this research has drawn on several research domains and so is multi-disciplinary in nature. The research supports the design and development of an AE by offering a macro, multi-disciplinary perspective which individuals engaged in AE development can adopt. The research and subsequent findings suggest that AE designers should endeavour to be generalists, rather than single domain experts, and offer a view that supports a holistic design approach to AE.
Furthermore, when trying to effect AE development, the research findings and artefacts can be used as exemplars and instruments of change to overcome the challenges and internal barriers that have been cultivated by siloed organisations.

Enterprise Decision Makers

A key finding of the research is that AE development and transformation has a strategy-driven orientation and is, as such, primarily directed by senior decision makers in the organisation, such as CEOs, senior management and other influential stakeholders. To inform and guide these enterprise decision makers the research provides conceptual, procedural, strategic, and operational support through the provision of AE artefacts. In particular, the AE concepts and models support enterprise decision makers in several critical ways. First, they conceptualise an AE from several perspectives and so not only inform but allow decision makers the freedom to translate these concepts into knowledge about AE that is relevant to them and the context of the enterprise. Second, at a more concrete level, the concepts and models explicitly identify components, structures, and behaviours that are requisites for an AE and also precursors for developing an enterprise into an AE. In addition, the more detailed AE factors, enablers, disablers, and characteristics, with their evaluations of importance, support the identification and prioritising of strategic initiatives, and their implementation, to enable development of the requisite AE components, structures, and behaviours.

Enterprise Entities

While this research supports the transformation of an enterprise in several ways, more importantly it demonstrates how, through the application of the Adaptive Enterprise Transformation Cycle, an enterprise can transform to become an AE. The transformation cycle illustrates the elements, relationships, and processes through which an enterprise can leverage the adaptive approach, using adaptive components, structures, and behaviours to build AE characteristics and, by doing so, develop into an AE. As well as providing a process design example to facilitate this transformation, the research also provides prescriptive support through the identifying, ranking, and describing of AE factors, enablers, disablers, and characteristics. By illustrating and describing the transformation elements and processes, the cycle offers support to an organisation as a whole, and to each of the organisational levels, functions, employee groups, and individuals within the organisation. The cycle also demonstrates the aforementioned conceptual, strategic, procedural, and operational support provided by this research.

Information and Communication Technology Vendors

A key requirement and component of an AE is a requisite IS. Efficient and effective communication, information gathering, dissemination, and knowledge management are critical to an AE, as is the IS infrastructure necessary to support them. Over the years there has been a plethora of literature about the failure of IS to support an organisation due to the non-alignment of IS and its stakeholders with the other stakeholders of the organisation. Several reasons have been suggested for this non-alignment, one being the dearth of descriptive and prescriptive models that could be used to help bridge the gap between IS stakeholders and other stakeholders (Ullah & Lai, 2013).
This research did not specifically investigate the design and implementation of IS, but the Adaptive Enterprise Transformation Cycle offers a framework for the development and implementation of adaptive systems that support an AE. This model is integrated and dynamic and explicitly illustrates the AE transformation elements, their interrelationships, and the transformation processes. Therefore, the model can act as a platform for IS development and can be useful as a high-level AE systems design framework from which more detailed system models and system architectures can be derived. This means that it is particularly relevant to IS domain experts, developers, and IS vendors. Another important contribution of the model is that it should encourage a multi-domain view of AE systems in regard to design, implementation, and development, due to the model's lack of domain centricity. This multi-domain view can help overcome difficulties when bridging the gap between IS, IS vendors, and the business, as previously highlighted in the literature.

Dissemination and Evaluation

The concepts and models were also evaluated and validated through publications and presentations. Dissemination of the research and the research outcomes is summarised in Table 7.3, which lists each publication together with the associated presentation.

Peko, G., Dong, C., & Sundaram, D. (2014). Adaptive Sustainable Enterprises. Mobile Networks and Applications, 19(5), 608-617. Presentation: ICCASA 2013 (International Conference on Context-Aware Systems and Applications).

Peko, G., Dong, C., & Sundaram, D. (2013). Contextually Aware Adaptive Systems for Enterprise Transformation. In Phan, C., Nguyen, M., Nguyen, T., & Suzuki, J. (Eds.), Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering (Vol. 109, pp. 51-61). Berlin: Springer-Verlag. Presentation: ICCASA 2012 (International Conference on Context-Aware Systems and Applications).

Dong, C., Peko, G., & Sundaram, D. (2013). An Implementation of Agent-enabled Distributive Decision Support Systems for Adaptive Business Networks. In Jain, R., Metri, B., & Gupta, J. (Eds.), Operational Excellence: A Key for Performance Excellence (First ed., pp. 259). New Delhi. Presentation: ICDS 2012 (International Conference on Decision Sciences for Performance Excellence).

Peko, G., Danenberg, J., & Sundaram, D. (2013). Adaptive Sustainable Enterprises: Interweaving the Deliberate and Emergent. The International Journal of Sustainability Policy and Practice, 8(4), 15-27. Presentation: ICECES 2011 (International Conference on Environmental, Cultural, Economic and Social Sustainability).

Peko, G., & Sundaram, D. (2013). Sustainable Adaptive Enterprises: Interweaving the Deliberate and Emergent. Presentation audio recording and PP slides. Presentation: GraSPP 2013 (University of Tokyo Graduate School of Public Policy).

Peko, G., Dong, J., Rohde, M., & Sundaram, D. (2011). Cloud-based Contextually Aware Adaptive Systems for Enterprise Transformation. In Putnik, G., & Cruz-Cunha, M. (Eds.), Virtual and Networked Organizations, Emergent Technologies and Tools (pp. 1-12). Berlin: Springer-Verlag. Presentation: ViNOrg 2011 (International Conference on Virtual and Networked Organizations).

Peko, G., Rohde, M., Dong, J., & Sundaram, D. (2010). Context-Aware Mediated Learning System for Organisational Transformation. In Bessis, N., Kommers, P., & Isaías, P. (Eds.), Proc. of the IADIS Int. Conf. Collaborative Technologies 2010, Part of the MCCSIS 2010 (pp. 85-92). Freiburg, Germany. Presentation: IADIS 2010 (International Association for Development of the Information Society).
Peko, G., Dong, J., & Sundaram, D. (2010). Agent-enabled Adaptive Decision Support Systems. In Palma dos Reis, A., & Abraham, A. (Eds.), Proceedings of the IADIS International Conference on Intelligent Systems and Agents 2010, Part of the MCCSIS 2010 (pp. 3-8). Freiburg, Germany. Presentation: IADIS 2010 (International Association for Development of the Information Society).

Peko, G., & Sundaram, D. (2010). Adaptive Enterprises: Interweaving People, Process and Technology. In Proceedings of the 14th Pacific Asia Conference on Information Systems (pp. 985-996).

Table 7.3: Presentations and Publications of the AE Research

Research Limitations

Academic rigour was paramount for this research study, so its unavoidable limitations should be acknowledged. The limitations include issues with regard to the scope of the research, the research artefacts, and the survey instrument, all of which are briefly discussed below.

Research Scope

It became apparent from the literature review (refer Chapter 2) that the research area of holistic AE has been under-researched and that many opportunities existed for a holistic investigation of the AE phenomenon. However, parameters had to be set, which limited the scope of the research and also the number and type of inquiries that could be conducted. Therefore, decisions were made as to which opportunities would be the most worthwhile and achievable. The result was that particular choices underpinned the research objectives and meant that some aspects of AE could not be fully explored. In particular, the requirement for a coherent understanding of AE processes and practices (refer section 2.7.2) was addressed from a conceptual and descriptive perspective but could not also be addressed from a detailed prescriptive perspective. For example, although important dimensions and factors of AE were identified, explicated, refined, and verified, the development of a detailed best practice roadmap was not feasible due to the limited scope of the research. This is unfortunate as a roadmap could be useful to practitioners as a prescriptive guide for the development of AE best practice.

Practical Application of Research Artefacts

The research artefacts developed and validated in this study are of a conceptual nature and advance both the theoretical and practical knowledge of AE. However, the artefacts need to be applied in a real world context because, although empirical data was used for creating and validating the artefacts, actual application and user testing by AE practitioners could result in their further refinement and the building of theory. A practical application is particularly relevant for the Adaptive Enterprise Transformation Cycle as it is a dynamic model and could be extended by implementing it in a real world context and examining it through a field study lens. For example, the elements of the Adaptive Enterprise Transformation Cycle and their relationships were evaluated by experts and deemed theoretically valid. But due to the scope of the research, it was not possible to dynamically demonstrate the causalities among the elements and the theorised transformation effect.

AE Survey

A limitation is acknowledged with regard to the AE survey as, although it was developed using a robust design process, the survey was USA-centric. An important requirement for the sample was a target population that consisted of employees who work in a range of small to large enterprises in a variety of industries (refer section 5.3).
As the survey was administered in the USA, almost all the respondents (97.42%) worked for USA enterprises and so the sample of respondents was drawn from a limited target population. If the target population had been extended to other countries then the results could have been different. As societal aspects, political influences, and cultural norms vary from country to country, it would be interesting to evaluate their impact on the ability of enterprises to adapt. This might improve the generalisability of the findings, and the usefulness of the research artefacts, so they can be applied in different contexts and operating environments.

Lack of Measurement Scales

The explanatory phase of the research study involved the use of measurement scales that had not been empirically tested, as none had been developed to measure the various dimensions of the hypothesised AE constructs. While there are scales for some aspects of these constructs, scales that measure adaptive and interwoven dimensions do not exist. Therefore, the scales that were used in this research are not yet fully developed and require further testing and verification so that they have greater statistical rigour when used for explanatory research purposes. The lack of appropriate measurement scales meant that the factor analysis used to analyse the survey results had limitations. However, this did not prevent testing of the hypotheses to validate and explain the research artefacts, since the factor analysis techniques were applied with this limitation in mind. The shortcomings of using undeveloped measurement scales were minimised by analysing and interpreting the survey results from both an exploratory and an explanatory perspective. This enabled further refinement and validation of the AE concepts and models that were being tested.

Future Research

In regard to the AE phenomenon, there are numerous questions that remain unanswered and issues to be addressed. Multi-disciplinary AE research is currently underdeveloped, particularly from an IS perspective. This research, therefore, provides a solid foundation for future investigation about AE, with several research opportunities. These research opportunities are associated with further development and validation of the research artefacts as well as of the research instruments used for the study. They are outlined in the following sub-sections.

Proposed Research Artefacts

A rich area for future research exists with regard to extending the scope of the research findings and artefacts of this study. The series of five AE models is currently of a conceptual nature. These models provide a platform on which to build prescriptive tools and adaptive systems that can inform, support and guide the development of an AE. In particular, the Adaptive Enterprise Transformation Cycle is a dynamic model that has the potential to seed future research studies. The cycle could be simulated, enhanced, and validated through the design and development of a systems thinking model. A dynamic systems thinking model could then help enterprise researchers and designers to develop insights into AE system behaviour, which in turn could be used to support effective organisational plans and decisions to guide the development of an AE.
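As a purely illustrative starting point for such a simulation, the sketch below shows a very small stock-and-flow model integrated with Euler steps. It is not part of the thesis: the stock, flow, and parameter names (adaptiveness, enabler_strength, disabler_strength) are hypothetical placeholders, and the relationships are simplified assumptions rather than validated model structure for the Adaptive Enterprise Transformation Cycle.

```python
# Minimal stock-and-flow sketch (Euler integration) of an assumed
# "adaptiveness" stock driven by enablers and drained by disablers.
def simulate(steps: int = 40, dt: float = 0.25,
             enabler_strength: float = 0.30,
             disabler_strength: float = 0.10) -> list:
    adaptiveness = 0.1          # stock: assumed level of enterprise adaptiveness (0..1)
    history = [adaptiveness]
    for _ in range(steps):
        # Inflow grows with enablers but saturates as the enterprise nears the adaptive zone.
        inflow = enabler_strength * adaptiveness * (1.0 - adaptiveness)
        # Outflow represents disablers eroding adaptiveness.
        outflow = disabler_strength * adaptiveness
        adaptiveness += (inflow - outflow) * dt
        history.append(adaptiveness)
    return history

trajectory = simulate()
print(f"Adaptiveness after simulation: {trajectory[-1]:.2f}")
```

A fuller system dynamics treatment would add separate stocks for each SOPI component, feedback from the behavioural flows, and empirically grounded parameters; the sketch only indicates the general shape such a model could take.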
Another area of possible future research, based on these research findings and AE models, is to design and develop an IS that integrates and aligns strategy, organisation, process, and information, in order to propose and implement a set of concepts and management tools to assist in the development of an AE. While a comprehensive design and architecture of an adaptive IS was not within the scope of this study, the research results offer the opportunity for the development and structured implementation of a relevant adaptive enterprise IS.

AE Empirical Research and Measurement Scales

Several hypotheses were formulated and tested to execute the explanatory phase of the research. The refined AE models that emanated from the hypothesis testing led to the emergence of an additional set of potential hypotheses associated with the models. As a consequence, further hypothesis testing is required to validate and advance these refined models. In addition, the explanatory phase of the research used a statistical survey instrument to test three of the five proposed models and, in doing so, also tested some elements of the remaining two models. Future research could expand the scope of the survey instrument to further validate the aspects previously tested and to test most of the aspects of the remaining two models that have not yet been tested. As the Adaptive Enterprise Transformation Cycle is a dynamic model, it could be tested and validated using more appropriate research methods, such as those suggested in section 7.5.2, rather than just a static research instrument such as a survey.

The current research may also have benefited from the use of other research methods and other populations. Future researchers could use a variety of research methods, such as interviews, focus groups, experiments, and simulation, and involve academics and industry practitioners from a range of culturally diverse countries. This should give greater insight into the AE phenomenon and a more coherent understanding of AE from a variety of perspectives.

Finally, the lack of appropriate scales to empirically test the adaptive constructs opens up an area for future research. The development of scales for the measurement of the AE constructs, namely adaptive strategy, adaptive organisation, adaptive process, and adaptive information, is important. It is suggested that measurement scales that will enable the empirical testing of the adaptive constructs, from both a multi-disciplinary and a holistic perspective, need to be developed and validated (a minimal illustration of such scale assessment is sketched after the concluding remarks below).

Concluding Remarks

Who knew that Oscar Wilde's words all those years ago would prove prophetic in regard to an AE: that ordered chaos is paramount to being a true AE. The constant and, at times, increasing rate of change in technology, global economies, and societal makeup puts organisations under pressure to adapt in order to thrive, or even survive. The issue is how an organisation should adapt as, until now, there has not been a blueprint for organisations to follow, with the result that many of the changes made are piecemeal. The current literature does not provide an overview to guide the organisation in what should be changed and how it could be changed. The results of this study offer descriptive concepts, models, and hypotheses to aid in the identification of an individual organisation's requirements so that decisions can be made to facilitate the organisation's transformation to an AE. While not definitive, this work should nonetheless be of interest to both AE academics and AE practitioners.
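As a brief illustration of the scale assessment proposed above, the following is a minimal Python sketch of how candidate AE measurement items could be examined with exploratory factor analysis and an internal-consistency check. The item names, the assumed two-factor structure, and the randomly generated example data are purely illustrative and do not correspond to the actual survey items or results of this study.

import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Internal-consistency reliability for a set of items intended to measure one construct."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical example data: 100 respondents, 8 Likert items scored 1-7
rng = np.random.default_rng(0)
data = pd.DataFrame(rng.integers(1, 8, size=(100, 8)),
                    columns=[f"item_{i}" for i in range(1, 9)])

# Exploratory factor analysis with an assumed two-factor structure
fa = FactorAnalysis(n_components=2, random_state=0).fit(data)
loadings = pd.DataFrame(fa.components_.T, index=data.columns,
                        columns=["factor_1", "factor_2"])
print(loadings.round(2))

# Reliability of a candidate scale (here, arbitrarily, the first four items)
print("alpha:", round(cronbach_alpha(data[["item_1", "item_2", "item_3", "item_4"]]), 2))

With real survey data, items loading cleanly on their intended factor and an alpha of roughly 0.7 or above would be the usual first indications that a candidate scale is behaving coherently, before more rigorous validation is undertaken.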
APPENDICES

Appendix A Delphi Study Round One

Appendix A1 Ethics Approval

Appendix A2 Participant Information Sheet

Appendix A3 Delphi Round One Pilot Test 1 Instructions

Together with my Ph.D. supervisor, Dr. David Sundaram, I am investigating adaptive enterprises (AE) using the Delphi method of inquiry followed by survey research. The multi-round Delphi study has been designed mainly for exploratory purposes. It will be administered online with a longitudinal questionnaire to be amended at least twice, and I have developed the preliminary first-round questionnaire.

Delphi Study Procedures

The Delphi study is an iterative process using several rounds of questionnaires to engage with a selected group of experts to generate insights about AE. The participants of the Delphi are all experts in the area of AE, albeit with different backgrounds, and will be grouped into panels representing their primary background. Round One is the initial questionnaire and explores the practices, processes, characteristics and elements of an AE. The findings of Round One will be thematically analysed and become the input to the second round, which involves ranking and rating, according to agreement, by the experts. Round Two and the subsequent Delphi rounds will allow the experts to re-evaluate and refine the questionnaire items that did not reach a sufficient level of consensus in the preceding round.

Round One Feedback

We would greatly appreciate it if we could arrange a time to meet so you can complete Round One of the Delphi and provide face-to-face feedback on any aspect of the questionnaire, and particularly on the following:
• Purpose: do you think the questionnaire will achieve the desired outcomes?
• Form: are the questions well articulated?
• Understanding: are the instructions and explanations clear?
• Ease of use: are there any problems using the questionnaire?
• Are there any important details missing?
• Is there anything that should be removed?
• Is the time-to-complete estimate realistic?
• Any other comments?
For your information, we have included the Participant Information Sheet, which will be provided to the participants who are considered to be experts in the area of AE. Also, here is the link to Round One of the Delphi study. We look forward to hearing from you within the next week, if possible, to arrange a suitable time to meet.
Thank you
Gabrielle Peko and Dr. David Sundaram

Delphi Round One Pilot Test 2 Instructions

Together with my Ph.D. supervisor, Dr. David Sundaram, I am investigating adaptive enterprises (AE) using the Delphi method of inquiry followed by survey research. The multi-round Delphi study has been designed mainly for exploratory purposes. It will be administered online with a longitudinal questionnaire to be amended at least twice, and I have developed the preliminary first-round questionnaire.

Delphi Study Procedures

The Delphi study is an iterative process using several rounds of questionnaires to engage with a selected group of experts to generate insights about AE. The participants of the Delphi are all experts in the area of AE, albeit with different backgrounds, and will be grouped into panels representing their primary background.
Round One is the initial questionnaire and explores the practices, processes, characteristics and elements of an AE. The findings of Round One will be thematically analysed and become the input to the second round, which involves ranking and rating, according to agreement, by the experts. Round Two and the subsequent Delphi rounds will allow the experts to re-evaluate and refine the questionnaire items that did not reach a sufficient level of consensus in the preceding round.

Round One Feedback

This is the second pilot test for Round One. After considering all the feedback from the first test, the questionnaire was revised accordingly. We would greatly appreciate it if we could arrange a time to meet so you can complete Round One of the Delphi and provide face-to-face feedback on any aspect of the revised questionnaire, and in particular on the following:
• Purpose: do you think the questionnaire will achieve the desired outcomes?
• Form: are the questions well articulated?
• Understanding: are the instructions and explanations clear?
• Ease of use: are there any problems using the questionnaire?
• Are there any important details missing?
• Is there anything that should be removed?
• Any suggestions for improvement to the email invitation to participants (see below).
• Any other comments?
For your information, we have included the Participant Information Sheet, which will be provided to the participants who are considered to be experts in the area of AE. Also, here is the link to Round One of the Delphi study. We look forward to hearing from you within the next two weeks to arrange a suitable time to meet.
Thank you
Gabrielle Peko and David Sundaram

Appendix A4 Delphi Round One Invitation to Participate

Dear Participant
Thank you for kindly agreeing to participate in my research on Adaptive Enterprises (AE). I appreciate your support. This research seeks to define and model AE through a multi-method approach encompassing opinions from academics and industry experts. You have been identified as an expert in the field under investigation and we are very interested in your opinions and insights. You can access the survey by clicking on the following link: https://www.surveymonkey.com/s/Adaptive-Enterprises
The questionnaire will take approximately 30 minutes to complete. Participation is voluntary and you may decline to take part without giving a reason. All your responses are completely confidential. If you have any questions about this survey, please email me directly at g.peko@auckland.ac.nz
Many thanks in advance for your help.
Kind regards,
Gabrielle
PS please find attached the Participant Information.
Gabrielle Peko
ISOM Department
The University of Auckland
Owen G Glen Building
12 Grafton Road
Private Bag 92019
Auckland

Appendix A5 Round One Questionnaire

Appendix A6 Raw Responses

Question 2: In YOUR OPINION, how does an enterprise adapt to its environment? Response Count: 30
Number Response Text
E1 A good one doesn’t adapt. It plans and executes that plan before the environment changes.
E2 By panicking and laying off employees, not paying bills, adopting unethical procedures and so forth.
E3 Change strategies and goals Change business processes Change technologies Change products and services E4 An enterprise can adapt to its environment through either a proactive or a reactive approach. A more proactive approach is one that usually an enterprise takes the lead role in innovation and practices for example and seek to anticipate its environment changes in the medium and long term through planning and constructing systems that fir with the future environment. A more reactive approach is one that is more or less enforced upon the enterprise. In this case, the system is often slow to respond and often some parts of the system does not respond well. Any adjustments, such as HR, operations and marketing often come at the cost. E5 An enterprise firstly feels pressure from a changing environment. Some things that used to fit in the old environment do not fit any more. There are different demands on the organisation and personnel find that they are undertaking tasks that they did not have before or doing more of something and less of other tasks. The organisation may find that they do not have the requisite skills or technology to respond to the changing demand, which will result in in increased pressure. Growth slows and little progress is made while the existing structure tries to cope with changing workload. Pressure builds up until the organisation has to adapt, by restructuring, and rationalising staff and equipment, recruiting skillsets in the areas under demand and reducing skillsets in those areas with reduced demand or taking on new technology. Existing staff need to retrain and adapt to new or altered roles. The process of restructuring may initially result in retrenching and halt in progress, but as staff become used to their new roles, Progress and growth becomes rapid. E6 For a company to have an ability to adapt to its environment effectively, it will need to take a number of key steps; 1. Competitive information (pricing, products, services, solutions, business models etc… this is depended on the type of enterprise as to which of these examples are applicable) 2. Business outcome requirements (Vision, values, business plan, budgets, stakeholder or shareholder expectations) 3. Customer base survey (ability to benchmark current performance with expectation) 4. People alignment – this is best described as implementing the vision, values, business plan through measureable key objectives which alfin the business from senior management to shop floor employee’s. For this to work effectively, the enterprise needs to have an agreed shared vision with tangible outcomes 5. Continuous reviewing life cycle. This is when a business continually reflects on its overall performance and adjust according with the environment. Failing to do this will ultimately cause the business to be left behind the times and fail. E7 An enterprise aligns with the current economic climate. This is a change driven from the top management to be competitive in a certain industry. Organisations adapt quickly perform well under given circumstances. In order to be successful organisations should have the knowledge and idea of up-coming environmental changes. Those organisations should plan for it in advance. Management commitment is necessary. Also, knowledge of Macro and Micro economic changes are very important. These external and internal changes should be communicated to staff in advance. Commitment and buy in of all staff members is important. 
Staff motivation and skills should be at a high level to combat these changes and adapt to these changes very quickly. More importantly senior management should be very sensitive changes in the environment. They should be knowledgeable to analyse situations and provide solutions. Appendix A Delphi Study Round One 280 E8 An enterprise should adapt to its environment by (a) recognising at an early stage when change is needed (b) by considering in depth what changes need to be made and (C) what the different implications of these will be for customers and for staff (d) then analysing how these changes need to be made - who needs to be involved, who is critical to the success of the change etc. Leadership will be necessary to guide the company through the various stages to the new position. E9 By responding to cues from the market. For example, level of sales in the marketplace can stimulate the need for improvement in product and / or service, innovation, different marketing strategies. The entrance or activities of competitors in the marketplace can provide cues, as can regulation and /or societal demands. Adapting can be in the use of resources: new or better resources, or in better use of resources; better adherence to regulations, surpassing regulatory or societal expectations, and greater flexibility to meet individual customer (whether external or internal) needs. E10 Through the strategic management process with a systems perspective (systems that are a part of the enterprise and systems that the enterprise is part of). E11 By understanding their environment, that is regular feedback loops that provide information as to how and where they stand according to their competition. By having flexible and adaptive policies for each component of the business. By having flexible and scalable technology capability by capturing and fostering staff talent by having the leadership to realise opportunity. E12 An enterprise consists of multiple sub-entities (groups, individuals). Individuals are capable of perceiving changes in external environment. These issues or factors can then be raised in groups subsumed by the enterprise, creating a sensory method for the wider organisation. (Example - see Beer’s VSM?) When there is awareness, actions can follow. The enterprise is able to adapt through changes to resources or capabilities they coordinate, both internally, and those that they can influence (externally), through contractual arrangements and other relationships as we may find in a supply chain. Usually resources and capabilities will be reconfigured to provide an improved ‘match’ to requirements in the environment. E13 An enterprise has to be able to understand its environment in terms of customer demands, business challenges, competition, etc. Additionally an adaptive enterprise needs to be aware of its own situation, i.e. be able to assess one's own situation from a system's perspective taking into account e.g. capabilities, resources, targets, existing plans, etc. External and internal awareness can be improved through performance measurement and appropriate feedback between decision makers in the enterprise. E14 (1) Keeping informed on what other players in the industry are doing and evaluating if they need to follow or alternatively if they need to innovate in order to adapt. (2) Looking for innovative ways of doing business, trying to keep one step ahead if there are changes perceived to be developing. 
(3) Proactively explore new options or technology of doing business (4) Keep their staff proactively 'involved' in the adaptation, ensuring essential buy-in, and using their pool of knowledge. E15 It starts with deep customer insight, generated by sales and marketing working together with top management. The key issue related to business model change - the firm needs to understand a continuously monitor any changes needed in its business model. E16 If an enterprise is adaptive, it is able to quickly respond to changes in the environment because it has business processes that enable it to do that. It is able to adjust things like production volumes and service delivery based on immediate data about actual customer demand, and do so on short time scales. But most enterprises aren't adaptive, and so in order to adapt to their environments, they normally will need to actually change their business processes. E17 Traditionally the day-to-day decisions and long term strategies were left solely to senior management. To adapt successfully today businesses should look beyond just the organisation’s senior management, trusted strategies and stated goals. Given the higher education levels of even the lowest staff and the powerful tools available to each and every member of an organisation, a successfully adaptive enterprise should seek the opinions and trust their staff to a greater extent. The military style or 'old school management' that was so successful in the distant past was Appendix A Delphi Study Round One 281 replaced with a more friendly 'new school' style of management. Today, an adaptive enterprise should take a further step to become more open and trusting of their employees. E18 It adapts by being flexible in the way it conducts its core business. E19 It is constantly adapting while it does not always seem to be. It adapts under three conditions: 1. When the external environment becomes more hostile and creates a barrier mentality 2. When barriers no longer serve to protect it will transplant itself to a new environment 3. When transplantation is not feasible it will undergo internal structural changes in its processes and functions. E20 1. By having clear goals, objectives, focus 2. Monitoring internal and external environments 3 Establishing analytics to determine the organisation's success in meeting its objectives in the context of its environment 4 Having a committed staff who have accepted the organisation's objectives and through leadership adapt as required to the perceived need to change. E21 It changes it products, process and people when it is forced to by competitor or by government regulation or by technological changes or by innovative people within it. E22 By having a decentralised structure that allows the various functions to interact with the external environment is helpful. E23 Depends on the company's market position. For example, leaders are very active, involve, influence and many occasions control environmental factors, especially policy development, financial regulations, etc. Therefore, their adaptation processes are strategically planned and designed. On the other hand, followers' approaches are reactive and struggles to adapt steps changes. However, they are quite good at adapting incremental changes through realigning resources, processes and technologies. Multi-national /global companies often stiff due to their corporate policy in their country of origin and also widely varied externalities in different countries. 
These companies, on many cases, delegates to regional operations for adaptation but monitors from the corporate head officers. Change management strategies, risk management strategies, research and innovation, negotiation, ICT infrastructure and strength, and finally the strong management team are the key to this. E24 Enterprises in the market place usually react to financial stimulus e.g. not meeting sales targets, the need to restructure within the constraints of OPEX and headcount etc. In addition, restructuring is sometimes created by the need to get rid of people (a form of adaptation). In addition, an enterprise create new roles e.g. my role at Telstra was specially created. In many cases, an enterprise also adapts to its environment by getting their employees to do roles as they arise. Thus, there is macro adaptation of restructuring, meso level of the creation of roles and micro role adaptations mechanisms in place. On an extreme case, when there an imminent change of CEO, the company adapted to its environment by ensuring the correct leadership is in place. In this case, the board of directors chose a CEO that would continue in the style of his predecessor! Contrast this to Telstra, where the middle management was changed. In a typical large organisation, the main focus is where the money is allocated in funding CAPEX and other projects as well as roles, as this is where the activities will be occurring. In addition, large organisations like Telstra, BHP etc. would use mergers and acquisitions to acquire knowledge, competitors and new product lines. E.g. Rockwell Automation's acquisition of the programmable logic control proved right when the product line accounted for 60% of its revenue. On another scale was the merger of BHP and Billiton recently that created the world's largest resources company. E25 Flexibility in the structure of the organisation. If it is too bureaucratic it cannot change fast enough. If there is too many prescriptive policies and rules there is not the flexibility to even slightly adapt to changing needs. Internal communication about the environments in general has to filter down and up the organisation to make people aware of the need to change. E26 It changes its organisational design in the following fields to align with changing environments. * Business model * Supply chain networks * Process design * New competencies and capabilities * New cultures * New product/service design * New channels to market * New innovations supporting customer experience Appendix A Delphi Study Round One 282 E27 An enterprise must adapt to its environment, but at the macro level - obey statues and regulations - on what it produces (services or product) it is free to innovate and transform the market, e.g. Apple, GE. Some like Dell provide product and services in innovative ways creating new form of trading - changing the environment. E28 Focus on customers, profitability, and trends and new influences that will effect profitability. Adapt to meet the changing environment. E29 An enterprise adapts to its environment by trying to match and balance the supply and demand of the goods or services it offers in the market place. The enterprise will try to use its resources (Labour, IT, machinery, etc) optimally avoiding, if possible, wastage (time, money, materials) to achieve this. E30 Changes strategy as the market changes. Initially, changes have to be driven by top management in order to get the organisation as a whole to change. 
To make the right decisions management must have clear visibility about the organisation’s cost structures and their customer needs. The organisation uses in informed decision making based on accurate internal and external information. Also, the entire organisations needs to understand and focus on its main/high value customers. Question 3: What are the key CHARACTERISTICS of an adaptive enterprise? Response Count: 30 Number Response Text E1 It must be flexible and agile to be able to take advantage of changing environments and opportunities or to respond defensively to emerging threats. It therefore should build a culture that is entrepreneurial and accepting of change, proactively experimenting with new ideas and probing the environment research and trials. E2 Ability to downsize/upsize staff and locations from which they work. Flexible hours and conditions. Outsourcing/insourcing, short term and casual arrangements. Virtual organisations downstream and upstream. E3 Flexibility, Adaptability, Agility. E4 Through constant monitoring of the environment - environmental scanning. It needs to have good people in place in order to be able to react to potential adaptation required. The leaders in such enterprise should be constantly encouraging change for the better instead of 'business-as-usual' mentality. People within the organisation are happy to interact with people from other departments or external to the organisation. E5 Flexibility Forward thinking outward looking value staff initiative encouraging retraining encouraging knowledge transfer positive outlook to change empowering staff to make decisions being honest with staff about change quick learning willingness to take on change Trust between managers and staff. E6 The key characteristics of adaptive enterprise are; 1. Living the company vision 2. Becoming a ‘trend setter’ leading the way in current markets 3. Values and beliefs (this is from the shop floor to management) this will drive the ‘can do attitude’4. Reviewing performance and implementing action plans to ensure alignment of business outcome 5. Accountability and ownership. E7 Willingness for change Motivated Staff Knowledge of the industry/s that they operate in Technically aligned. E8 Leadership Good information Reliable data Good communication networks Flexibility. E9 Characteristics are the organisational culture and commitment of the human resources who possess relevant knowledge and skills for the organisation to be adaptive. E10 Purpose driven/strategically managed environment that empowers and meaningfully drives innovation. Appendix A Delphi Study Round One 283 E11 Flexible Open aware creative independent. E12 Innovative Willing to change Flexible ‘Open’ to perceive new ideas/concepts Awareness of external changes/shifts Able to perceive multiple future scenarios that are not obvious based on current business trajectory. Able to rapidly change skills or resources that they control. E13 Company culture that embraces change, adequate company resources, (unique) company knowledge/capabilities, internal and external visibility, strong SC relationships, innovative SC partners. E14 A Positive Attitude towards change throughout the company; ensuring their employees are not afraid of change but embracing change as a welcome challenge and as part of what is required to keep the business going. (2) Trust in Management - openness about weaknesses in the operation or of what needs to improve. No fear of highlighting inadequacies. 
(3) Passion and Pride for the enterprise, for what it is doing and achieving - rather than narrow personal goals of 'getting ahead', backstabbing etc (4) Encouraged staff that will think outside the square, and have managers in place that are confident and supportive to enable the new ideas to be explored (5) Staff happily take on board any changes, as they see the overall benefit of the changed way of operating. E15 The firm needs to have an 'outside-in' mindset. The firm needs to understand how the market works and how this can be influenced. The firm needs to position itself in the market network. E16 High productivity. Low waste - e.g. little overhead, minimal inventory. Empowered employees. Integrated - all parts of a business unit are chasing the same vision. High quality decision making. E17 To be able to react quickly and effectively to threats and opportunities To create new value from new circumstances while following the principles of the organisation itself To keep as many members of the organisation involved as possible in adapting. E18 Flexibility Innovative Enterprising. E19 Growth Market position Brand image. E20 Staff that are flexible, adaptive, focused, committed, responsible, self-starters, show initiative working in an organisation that recognises and rewards its staff who have these characteristics. E21 Flexibility, intelligence, a culture that values adaptively. E22 An adaptive enterprise is more likely to respond to changing customer demand, come up with new innovations, and survive through changing economic climate. An adaptive enterprise is likely to change where there is a need for change. E23 Strong and timely communication Well defined corporate and business strategies Strong leadership team Research and innovation Delegation and authority to adapt at various management levels Proactive and motivated employees Learning organisation. E24 From a practical consulting perspective, it is the ability to forecast the future and where they want to go then implement projects/program and mechanisms to get there. Westpac for example had a $220 million program of work to transform the experience of banking in the industry. As another example, BHP has done extensive work on the predicting Iron Ore demand and creating programs of work to increase its capacity to 220 million tons per annum by 2014. In addition, BHP created the 1SAP program to ensure it can get information for decision making quicker which leads to another lesson, of creating mechanisms that would bring information quicker for decision making. In this case, it was integrating two SAP landscapes. There was a lot of interest in Fletcher Challenge (FC) at senior levels in mid 1990s on learning organisations; a business unit also embarked on certain initiatives to develop a learning culture but as in many cases, developing a learning culture requires sustained leadership something in the long term that was a challenge as managers leave after being promoted. Related to the question above, on mergers and acquisitions, was the ability to forecast which business units were required to build core competencies and value chain control. E.g. FC integrated the construction industry's supply chain so did Fonterra on milk production. Note that the value chain control afforded Fletcher Building a unique position in the market place Appendix A Delphi Study Round One 284 to generate superior cash flows. 
Contrast this to Crane (a recent acquisition of Fletcher Building), where the value generated for shareholders was substantially less over the years. E25 Information flows freely in all directions. People feeling empowered to react to changes when they see it is needed. Leadership that gives framework and guidance to signal the direction in which change needs to be directed. Incentive structure needs to rewards the sharing of information. Collaboration aspect needs to be built into the incentive structure. Procedures need to support innovation and change rather than hinder it and cement the status quo. Incentives focus on outcome not on the how it is achieved. E26 It must be flexible and agile to be able to take advantage of changing environments and opportunities or to respond defensively to emerging threats. It therefore should build a culture that is entrepreneurial and accepting of change, proactively experimenting with new ideas and probing the environment research and trials. E27 Outstanding leadership Innovation Respect for clients/customer vision of its place in society knowing why they exist (making a profit is a result not a reason). E28 Customer focused, some members looking at the big picture - the potential future, flexible staff that accept change. E29 Low inventories. No unproductive of unnecessary labour. Good cash flow. Ability to change quickly. Products or services offered change over time to meet the market demand. Have a loyal customer base. E30 A customer focus based on customer visibility through mechanisms such as market research to understand the customer. A lean organisation philosophy. Decentralised structure with local decision making at the non-strategic organisational levels with result driven influence at the strategic level (localised results such as profits has an influence on head office/corporate strategic decisions). Sound capital base with adequate financial resources/cash flow. Cut the overheads and increase margins by understand costs to compete head on such as local manufactured product costs. Question 4: What are the core ELEMENTS that enable an enterprise to be adaptive? Response Count: 30 Number Response Text E1 Educated staff and b) Visionary management. E2 Ruthlessness, Lack of ethics. E3 Leadership, Innovation, Standardisation, Invention. E4 Innovative and open-minded people. System that is not rigid. Some risk taking behaviour and encouragement for that in new ventures. E5 Flexible HR policies. Managers with a strategic view. Managers and staff understand the market and industry and can see change coming. Staff willing to show initiative. Management structure that allows staff to show initiative. Strong Leadership devolved decision making. Reward system that allows adaptability (monetary or non-monetary). Good training and education policies. A structure that allows flow of knowledge. Means to keep staff informed about strategic direction and key goals. Early adopters of technology. E6 Effective marketing – keeping up with trends and changes within the market place 2. Quality – lean principles and processes to develop a quality outcome 3. Remove cost and stream line processes to provide value to the customer 4. Developing people 5. Growing and guarding intellectual property. E7 Fully fledged ERP system and IT staff knowledgeable of the Business Process and the Business requirements. Change Management policy. R&D division to analyse and report on external and internal environment of the organisation. 
Appendix A Delphi Study Round One 285 E8 Not sure. E9 Sufficient resources (human, financial, time, and the materials needed to be able produce the product and / or service); Flexible systems / procedures that allow the organisation to deviate from producing the same product/service repeatedly. Broad parameters of expected behaviours from personnel rather than rules that restrict behaviour. Support systems that encourage experimentation (this includes opportunities for development / training). Reward systems that acknowledge successful performance. E10 Strategic planning and review, empowerment, proper measurement systems against critical success factors, and a systems perspective. E11 Talented staff, great leadership, flexible processes, scalable open technology, continuous feedback, distributed change mechanisms. E12 Strong external networks of relationships and contracts. Flexible workforce (able to be laid off, hired rapidly)-Flexible labour practices. Alternatively, highly-skilled workforce that can be redeployed. Strong understanding of their business processes, routines, capabilities. Leader (or leadership team) with vision, willing to take a punt or lead a change in direction. Leader with persuasive skills to ensure employees can be ‘brought on board’ with the change. Strong formal and informal communication networks within the company. (Both on a vertical low employee to executive, and also horizontally, among low-level employees). Strong inter-firm networks enabling access to various resources or capabilities that can be utilised as/when required. Carefully selected trading partners that exhibit the ability to be adaptive - critical due to the highly fractured nature of current supply chains that we see in many industries currently. E13 Management support, performance measurement, SC-setup, employee incentives, internal/external communication, SC relationship management. E14 Good Communication within the company, clarity on goals to be achieved and any change to goals that is required. (2) Good Performance Measures which enable true monitoring; not just measure what is easy to measure, but what is truly important; (3) Reliable Data both (a) internal: ensuring data is not 'fudged' - departments/staff shifting the blame of lack of performance onto another area; (b) external: true data for indicators of critical factors such as prices, market changes (4) A 'good' strategic plan, that is reviewed regularly and adjusted if required (5) Correct tools and methodologies in place to implement any changes through all the company, right down to the operational level, to make it easy for all staff to work in a new paradigm or manner. E15 A sales and account management process that is connected to the strategic management of the firm. R&D is done together with customers - not in isolation A networked vie w of value creation - the firm does not need to produce everything that it provides to customers. E16 Clearly articulated strategy. The company has to know what it's goals are. Good KPIs & incentives. The performance metrics & incentives for all staff have to be such that they encourage responsiveness and eliminate any waste and overhead. Excellent communication. Everyone in the company needs to have a shared understanding of both the current situation and the current strategy. Excellent integrated Information Systems - the ability for any employee to immediately access good quality, real-time data relevant to their particular job Culture that embraces change and welcomes new ideas. 
E17 To equip senior management and top decision makers with the best equipment and tools to prevent indecision whenever possible. To equip as many staff as possible with the right tools and information such that all staff can trust management decisions. To provide the means for continued education to as many members of an organisation as possible and appropriate. E18 Sound infrastructure /structures. Policies and procedures. Governance and leadership. E19 Capital, human and financial, customer demand, good will. E20 Rewards and recognition based upon clearly understood objectives and principles of the organization. A job description and induction process is unambiguous. Transparency with regard to opportunities to grow and excel in the organisation. Open lines of communication from top to Appendix A Delphi Study Round One 286 bottom. Independence to make decisions within the staff member's roles and responsibilities. Organisational willingness to accept errors of judgement when clearly made to progress the goals of the enterprise and within the accepted bounds of commercial opportunities. As well as moral and ethical standards universally applied to all internal and external relationships. E21 Responsive systems. People who can cross organisational/functional boundaries. People who have the ability to make decisions and the power to push them through. E22 Decentralised structure, good leadership, responsible staff/people, knowledge and information seeking activities (externally oriented), open innovation activities. E23 Governance structure. Top executives. Well connected ICT and knowledge management infrastructure. Continuous improvement and developments of products and services. Size - small versus large Resource - people, money, and technology Industry sector - highly regulated versus non-regulated. Ownership - Government, public and private. E24 As mentioned (characteristics), mechanisms for increased decision making, a funding allocation structure that allows CAPEX to be allocated for initiatives to adapt to the environment e.g. the 1SAP program at BHP. In addition, for all practical purposes, ability to have a strong balance sheet that enables the company to make acquisitions and mergers as necessary. E.g. Fletcher Building's recent acquisition of Crane and Westpac's acquisition of St. George's Bank. Note that Westpac allocated $1.85 billion over three years to upgrade its IT systems. Aside from the ability to raise capital, from my observation, it is the linkage between programs of work, enterprise architecture and strategy. In large organisations, this linkages need to be enforced through financial sign offs and other managerial sign offs. In BHP Iron Ore, for example, new investments need to have an information management plan or it would not be signed off. This mechanisms forces the involvement of enterprise architecture sign off to future proof the enterprise to be adaptive in some respects (albeit this is my interpretation of what they were trying to do). In other contexts such as the New South Wales government, the political selling point has been creating better services for the citizen through government agencies and the political message of reducing public expenditure. These form the business case externally and internally that justify certain courses of action. E.g. certain decision and reports from consultants will be relied on e.g. a consulting firm's report previously endorsed decision by the leadership. E25 Flexible processes. 
Policy need to be open to give some flexibility. Resources allocated to change management and managing projects that are aligned with the identified new direct and goals. Resources to analyse the raw data that they have to justify the changes. Feedback loops and performance measurement systems. Have up-to-date and capable information system and people expertise. Allocate resources to change programmes. Business processes that are understood and easily adjusted. E26 Entrepreneurial rather than bureaucratic. Innovation as a core competency. Information management as a capability. Development of people in terms of skills and ability. Advanced supply chain management systems. Knowledge management as a core competency. Highly developed communication and collaboration systems. Fast transaction speed (real time). Fast operational process. E27 How they do things. Establish strategies to achieve their vision. Employees that believe in the vision of the enterprise. Openness/ transparency. Respect for the customer/client. E28 I think this was covered above in Characteristics. E29 Introduction of new technologies such as Enterprise IT system. E30 Up-to-date IT infrastructure to support information and process requirements. Access to sufficient local/head office financial resources to fund enterprise wide development. Agility to enable quick response. Appendix A Delphi Study Round One 287 Question 5: How do these core ELEMENTS work together to enable an enterprise to be adaptive? Response Count: 30 Number Response Text E1 Vision, strategy is communicated. Staff given autonomy to execute vision and strategy. E2 By considering only the principals of the organisation, not the customers, suppliers, staff, owners (if a sharemarket/unit listing and the like) and other stakeholders. E3 Process orientation. Service orientation. Governance Management. E4 Innovative and open-minded people to craft the ideas for moving the enterprise forward and be able to utilize a flexible enough system to engage in new activities. E5 Changes in demand or changes industry structure or new technology is seen by outward focused staff or managers with a strategic view. Staff are encouraged to show initiative so they inform managers of possible changes coming. Managers have the ability to put information together from different sources and foresee changes. Strong leadership and open information flow allow managers to keep staff informed of possible changes ahead. Staff are encouraged to use initiative and come up with ways to adapt and change. Good training and education policies allow staff to be retrained easily. Flexible HR policies allow staff to change roles easily. Change is seen as a positive so staff are encouraged to accept change and are rewarded for doing so. Good trust between managers and staff reduce negative outcomes of change. E6 This is driven from the company vision through the beliefs values and aligned to the business plan. To measure the performance and key milestones within the business plan, this is measure by short term incentives or key objectives. Through reviewing and gaining alignment, this will identify if there is a ‘measureable gap’ in performance. If a gap is realised, then a back on track performance plan can be implemented. E7 When there is a good R&D team, they can report on external and internal environment and upcoming changes. Such as a decrease in sales orders for next year. Also the worsening economic climate. 
So the senior management can make necessary measures such as different marketing strategy, introducing new product lines or start looking at other markets etc. Or they can exit from certain product lines and concentrate on a niche range of products. So such changes should be communicated effectively to all levels. During such a change staff commitment and motivation should be high. Also, there should be a willingness within the organisation to change and adapt to given conditions. This can be achieved through proper change management generally initiated through HR (Acting as the change agent). Also to support these dramatic environmental changes an ERP system should be customized as well. When you introduce a new product line or during reporting and analysis, the ERP system should be able to generate various kinds of queries etc. It can save a lot of time and money. Also will enable the organisation to adapt to changes in the environment. E8 Don’t know. E9 People require the right environment / culture, the necessary resources, the right ability/skills, the opportunity / flexible systems, and lack of penalties or provision of rewards to encourage / enable the organisation to be adaptive (or to be innovative which is more than just adapting). E10 The systems perspective needs to be promulgated to all, and provide a framework for strategy development, implementation and monitoring. The employees need to buy into the system purpose and objectives and be empowered and resourced to play their role and to fix the system (not the blame). This empowerment must allow for feedback loops that are honest and integrate with a measurement system that ensures proper monitoring and control. E11 Feedback informs, flexibility allows early change adoption, leadership encourages innovation and change, open scalable technology supports rapid change. E12 External information is sensed and shared within the enterprise using dense communication channels. Leader(ship team) is able to make sense of these data and beliefs, develop a new vision. Appendix A Delphi Study Round One 288 Leader is then able to communicate this vision persuasively to individuals or teams within the enterprise. Teams are able to then analyse existing processes/routines/capabilities, analyse which may be unnecessary or unwanted, and then change these to new capabilities, while redeploying workforce and technology to do so. Changes in direction for the enterprise can be ‘leaked’ through inter-enterprise linkages, either through explained changes to contracts, new contracts, or information/communication between individuals in different enterprises, allowing a network-wide response to emerge. E13 If an enterprise is able to assess external requirements and their own ability to fulfil these requirements, they will be able to derive plans, targets to drive their strategic and operational decisions and furthermore manage appropriate responses to competitors and their offerings. 
E14 Good strategic planning enables defining and clarifying the company's goals - Having staff encouraged to think outside the square will help shape that strategy (2) Good communication enables ensuring everyone in the company knows what they are expected to do to achieve those goals - Positive staff attitude towards change will ensure that staff are listening and implementing the plan (3) Good performance measures enable monitoring how everyone is progressing towards those goals - Passion and pride in the enterprise will ensure staff perform well (4) Reliable data ensures that the performance measures are correctly represented. - Trust in Management will ensure staff are not afraid to report data correctly and will not hide mistakes (5) Correct change methodologies ensure change gets done and implemented - Happy staff that feel well valued are an important factor in this. E15 A 'lego-block' modular business model set-up that enables the firm to do quick business model changes Cross-functional processes that hinders functions to become isolated. E16 Strategy comes first and set the direction, then the KPIs & incentives have to align with that to ensure that everyone's behaviour is helping towards that strategy. Communication is important so that everyone clearly understands their strategy, their KPIs and how everything fits together. The information system provides the necessary information for everyone to see how they are progressing with their KPIs as well as probably facilitating the communication. Having a culture that embraces change means that everyone is receptive to improvements in the process, including changes in KPIs when the information dictates that is necessary. E17 The fewer adaptive decisions that need to be made the better and that it is first the role of senior management to make adaptive decisions. So it is best when systems exist that can accurately assist and communicate these decisions with other members of the staff. This necessitates that everyone involved be as informed and educated to the reason for decisions as possible. E18 The core structures allow the enterprise to adapt to changes in the environment by assessing it's key strengths. With good leadership and governance the enterprise is given guidance on how best to adapt. E19 Everything starts with a demand, which can be fulfilled with capital and encouraged through good will. E20 It probably starts with the initiation of a new employee who is first inducted via a comprehensive program describing the ethos of the organisation and then his or her role within the organisation. As the person gains experience and knowledge of the organisation the behaviour of their peers will be consistent with the elements listed above. In other words, the person is not only told about their work environment but observes the elements in action. The consistency from the CEO down speaks volumes. E21 Responsive systems allow information to flow freely from the external environment and within the organisation. Key people are able to use this, to share it and to make the changes in the products and processes to adapt to the changing conditions. Resources will often make a difference as to the speed with which this is done. 
E22 Structure - Decentralised organisation enabling different actors/people/customers/partners/ suppliers to interact Activities - interaction, innovation ,market facing activities, change activities, addressing need for business model change Resources - competence, leadership qualities, decision Appendix A Delphi Study Round One 289 making, responsibility, tools etc. The structure is made up of actors, which in turn enable/carry out activities. The activities are facilitated by resources and competence. E23 Top management continuously scan the environment and market forces such as new and emerging technologies, competitive products and services, stakeholders' influences, etc. and prepare strategic plan that includes response/adaptation plan, communicate the plan to all levels of organisations, undertake projects and allocate resources. E24 As mentioned, strategy, architecture and program of work management, funded by a capital allocation process, work together to ensure a mechanism of adaptiveness for the enterprise. Access to capital to do this on a wider scale is essential as illustrated by Westpac. On acquisitions, a similar mechanism is used. However, the architecture is not as IT oriented in many cases. E25 The enterprise should be an integrated, living, system. Information is shared and helps align the different parts of the organisation so they are all focused on the same goals (pulling in the same direction). Data information and analysis of information enable the organisation to learn and establish a new directions and new ways of doing things. E26 They enable the enterprise to leverage its tangible and intangible assets by developing organisational capability. This enhanced capability then leads to the organisation being able to develop a core competency in positive reaction time E27 The four elements above work together to create a cohesive environment where all participants feel part of the enterprise or believe in the vision of the enterprise. E28 Teamwork is essential and trust that management changing things and the continuous improvement culture is what one has to do to. E29 New technologies are introduced to remove unnecessary wastage be it materials or labour. The Enterprise IT system is used to manage planning workflows ordering etc to also minimise wastage. The system effectively manages the technologies within the enterprise to produce only what there is demand for. E30 Understands customers’ needs through access to market information. Constant efforts to minimise operating costs to remain competitive. IT infrastructure to support changing business requirements and deliver operational cost reduction along with process flexibility. Access to capital to fund the necessary organisational changes. Flexible HR. Question 6: Please feel free to mention any other aspect of adaptive enterprises. Response Count: 16 Number Response Text E5 Managers who constantly reassess their organisations position in the market or industry, can see change early and be ready to adapt. E7 An adaptive enterprise will continually simplify its business processes. This will make it easy for the organisation to change quickly and adapt to a certain environment. Also, it will attract customers more and more as their processes have become easier for customers. E10 The most important assets don't show up on the balance sheet; they are called people. E11 A culture that fosters innovation and openness. 
E12 Much seems to depend on the ability of external partners to change, or the ability of the firm to locate new effective partners that are able to complement the firm as it changes. E13 I would argue that the strength of the SC is crucial for the level of adaptability of an enterprise. Today most of the value of a product or service is derived through the SC. Hence, the selection of strong and innovative partners as well as SC relationship management are crucial to ensure an adaptive enterprise. There is certainly overlap with the theory on agile SCs. Many recent examples Appendix A Delphi Study Round One 290 show that companies still neglect the importance of their SCs for their own success, e.g. Boeing (Dreamliner). E14 The ability to adapt must be present at all levels of the enterprise, no good if the CEO has a great vision on change, while lower managers are scared of any change, and are therefore resisting the change rather than embracing it. When the fear of losing their job becomes greater than the desire to embrace any changes the overall goal is not easy to achieve. Generally, the impression is that getting people to adapting to change is like stirring a pot of glue. Enablers need to be in place to keep the system fluid, so change becomes natural, rather than a threat. E15 It needs to have managers on the top that believe in change and are ready to make big changes to existing business model. E17 Given that children now carry in their hands the communications power of a smart phone, the information and knowledge of the Internet as well as access to social networks that are global, enterprises of tomorrow need to be adaptive as possible. Management studies evolved from old school to new school to something all together different today. Just think what will be around the corner. E18 Enterprises rarely willingly adapt without external influences. E19 An adaptive enterprise must also undertake 'scenario-planning' to prepare itself for any changes that occur in the internal and external environment. E20 Some organisations realize that all great ideas don't come from the top or from the research and development team. One of the ways to encourage innovation is to allow some portion of the work week to try new ideas, develop new procedures, demonstrate different work flows, etc. A climate that is open to change and even half-baked ideas is healthy. Increasingly organisations do not have control over their entire production cycle, for example raw materials in and finished product out. Sub-contractors, wholesalers, sales outlets, and customers all have a role in keeping the enterprise innovative, competitive and healthy. Where it makes sense, either our staff are encouraged to work outside the enterprise or staff from outside the enterprise are encouraged to provide input with our organisation . In an environment that is increasingly, multi-cultural and multi-national the more opportunities to understand the commercial environment would be vital to understanding the need to adapt and developing the strategies & techniques to do so. E24 Theoretically, there are many things I can mention. However, I think you might want to read about strategic learning cycles as well as works by Bo Hedberg (1981), Argyris and Schon (1996) and Senge (1990) the latter of which I believe you are familiar with. 
From a practical practitioner perspective, it is about finding the value proposition that would give a unique position and competitive advantage to the firm, generating superior profits and long-term market positions. In addition, the concept of adaptation needs to be defined in more detail in this case, because there are levels of adaptation, something outlined in my answer to question 2 but not really defined in your questionnaire. E26 Enterprises will go through some form of transition phases as they adapt to new or changing environments. There are usually three phases: (1) letting go of the 'old', (2) entering a neutral zone of uncertainty, experimentation and trial, and (3) finding the new way forward. The speed at which an enterprise can manage its way through this transition can become the determinant of being truly adaptive. E27 I have problems with the concept of adaptive enterprise since two of the most successful enterprises, Sony and Apple, have simply made a new environment for themselves having some of the characteristics and elements mentioned above. If adaptive means to be able to survive in a constantly changing environment, the above characteristics and elements still apply. E28 I believe enterprise adaptation is a continuous process. E30 Planning is critically important at all levels of the enterprise. Although there is a need to be agile and responsive, implementing the response/change successfully is always challenging. Many different organisational and external elements need to be aligned and brought together. Appendix A7 Nuggets Nuggets from Responses to Question 2 1. Proactive planning and execution (E1) 2. Rationalising staff (E2) 3. Adopting unethical procedures (E2) 4. Change strategies and goals (E3) 5. Change business processes (E3) 6. Change technologies (E3) 7. Change products and services (E3) 8. Proactive approach (E4) 9. Lead role in innovation practices (E4) 10. Anticipate through long range planning (E4) 11. Future proof systems (E4) 12. Reactive approach (E4) 13. Rationalising HR (E4) 14. Rationalising Operations (E4) 15. Rationalising Marketing (E4) 16. Restructure enterprise (E5) 17. Rationalising staff skillsets (E5) 18. Redefine staff roles (E5) 19. Retrain staff (E5) 20. Recruiting required skillsets (E5) 21. Rationalising equipment (E5) 22. Adopt new technology (E5) 23. Access to competitive information (E6) 24. Business outcome requirements (E6) 25. Customer satisfaction information and KPIs (E6) 26. Business alignment from top management to shop floor (E6) 27. Shared Vision with tangible measured outcomes (E6) 28. Shared business plan with tangible measured outcomes (E6) 29. Shared budgets with tangible measured outcomes (E6) 30. Shared stakeholders' expectations with tangible measured outcomes (E6) 31. Measurable key objectives (E6) 32. Continuous performance review lifecycle (E6) 33. Adjust to environmental change based on performance reviews (E6) 34. Aligns with the current economic climate (E7) 35. Knowledge and idea of up-coming environmental changes (E7) 36. Change is driven by top management (E7) 37. Knowledge about longer-term future environmental change (E7) 38. Knowledge of macro and micro economic changes (E7) 39. External and internal changes communicated to staff in advance (E7) 40. Advanced planning for predicted changes (E7) 41. Management commitment (E7) 42. Commitment and buy in of all staff members (E7) 43. High staff motivation and skills (E7) 44.
Senior management sensitive to environment changes (E7) 45. Senior management able to analyse situations and provide solutions (E7) 46. Early recognition change is needed (E8) Appendix A Delphi Study Round One 292 47. Understand depth of changes (E8) 48. Implications of the change on customers and staff (E8) 49. Analyse the risk of change (E8) 50. Leadership essential to guide change (E8) 51. Responding to cues from the market (E9) 52. Adoption and use of superior resources (E9) 53. Flexibility to meet individual customer needs (E9) 54. An enterprise systems perspective (E10) 55. Understanding the environments (E11) 56. Regular feedback loops provide information about market and competition (E11) 57. Flexible policies for each business component (E11) 58. Flexible and scalable technology capability (E11) 59. Capturing and fostering staff talent (E11) 60. Leadership to realise opportunities (E11) 61. Perceive changes to external environment (E12) 62. Individuals perception of environmental changes are subsumed by enterprise groups creating a sensory method (E12) 63. Awareness foundation for action (E12) 64. Reconfigure resources and capabilities to match requirements (E12) 65. Contractual arrangements and relationships to acquire resources and capabilities (E12) 66. Understand environments e.g. customer demands, challenges, competition, etc. (E13) 67. Assess enterprise situation from a systems perspective including capabilities, resources, targets, existing plans, etc. (E13) 68. Environmental awareness through performance measurement and appropriate feedback among decision makers (E13) 69. Market information to evaluate need to follow or innovate (E14) 70. Innovative business practices to keep ahead of perceived market developments (E14) 71. Proactively explore new options or technology (E14) 72. Staff proactively involved in adaptation to ensure buy-in (E14) 73. Knowledge as a resource (E14) 74. Information for deep customer insight (E15) 75. Continuously monitor and understand need for business model change (E15) 76. Quick response to change (E16) 77. Flexible and adaptive BP to enable quick response (E16) 78. Look beyond senior management, trusted strategies and stated goals (E17) 79. Knowledgeable, educated work force (E17) 80. Powerful business tools (E17) 81. Seek opinions from all staff (E17) 82. Military style 'old school management' replaced by friendly 'new school' management style (E17) 83. More open and trusting of employees (E17) 84. Flexible in conducting its core business (E18) 85. Undergo internal structural changes in processes and functions in response to hostile external environment (E19) 86. Clear goals, objectives, focus (E20) 87. Monitoring internal and external environments (E20) 88. Analytics to determine external and internal performance (E20) 89. Staff committed to organisational objectives (E20) 90. Leadership required to realise the perceived need to change (E20) 91. Reactive, forced, change to products, processes, technological and people (E21) 92. Proactive change through innovative people to products, processes, and technological (E21) Appendix A Delphi Study Round One 293 93. Decentralised structure allows functions to interact with external environment (E22) 94. Active leaders influence and exert control on environmental factors such as policy development, financial regulations, etc. (E23) 95. Adaptation processes strategically planned and designed (E23) 96. Incremental changes through realigning resources, processes and technologies (E23) 97. 
Delegates to regional operations for adaptation but monitors from the corporate head office (E23) 98. Change management strategies (E23) 99. Risk management strategies (E23) 100. Research and innovation (E23) 101. ICT infrastructure and strength (E23) 102. Strong management team (E23) 103. React to financial stimulus (E24) 104. Rationalise staff (E24) 105. Restructuring (E24) 106. Create new roles (E24) 107. Current employees perform role requirements as they arise (E24) 108. Install correct leadership (E24) 109. Change middle management (E24) 110. Resource/fund allocation to realise change (E24) 111. Engage in mergers and acquisitions to acquire knowledge, resources, new product lines and eliminate competition (E24) 112. Flexibility organisational structure (E25) 113. Low levels of bureaucracy (E25) 114. Not too many prescriptive policies and rules (25) 115. Internal vertical communication about environments to raise awareness of required changes (E25) 116. Changes organisational design to align with changing environments (E26) 117. Changes business model to align with changing environments (E26) 118. Changes Supply Chain networks to align with changing environments (E26) 119. Changes process design to align with changing environments (E26) 120. New competencies and capabilities to align with changing environments (E26) 121. New cultures to align with changing environments (E26) 122. New product/service design to align with changing environments (E26) 123. New channels to market to align with changing environments (E26) 124. New innovations supporting customer experience to align with changing environments (E26) 125. Innovates and transforms the market (E27) 126. Creates innovative business models for market transformation (E27) 127. Focus on customer (E28) 128. Focus on new trends and market influences that may threaten profitability (E28) 129. Match supply and demand of the goods and services (E29) 130. Optimise resources by eliminating all forms of waste (E29) 131. Change driven by top management (E30) 132. Visibility of cost structures and customer needs for effect decision making (E30) 133. Decision making based on accurate internal and external information (E30) 134. Organisational focus on understanding and satisfying main customers (E30) Nuggets from Responses to Question 3 1. Agile (E1) 2. Little hierarchy (E1) 3. Innovative (E1) Appendix A Delphi Study Round One 294 4. Minimal bureaucratic procedures (E1) 5. Educated staff (E1) 6. Staff have autonomy to make changes as required (E1) 7. Ability to downsize and upsize staff, resources and locations (E2) 8. Flexible work hours and conditions (E2) 9. Outsourcing and insourcing, short term and casual arrangements (E2) 10. Virtual organisation downstream and upstream (E2) 11. Flexibility (E3) 12. Adaptability (E3) 13. Agility (E3) 14. Constant monitoring of the environment (E4) 15. Good multi-skilled workforce (E4) 16. Leaders encourage change for the better (E4) 17. Leaders discourage 'business-as-usual' mentality (E4) 18. Employees happy to interact with people from other departments (E4) 19. Employees happy to interact with people external to the organisation (E4) 20. Flexibility (E5) 21. Forward thinking outward looking (E5) 22. Value staff initiative (E5) 23. Encourage retraining (E5) 24. Encourage knowledge transfer (E5) 25. Positive outlook to change (E5) 26. Empowered staff to make decisions (E5) 27. Honest with staff about change (E5) 28. Quick learning (E5) 29. Willingness to take on change (E5) 30. 
Trust between managers and staff (E5) 31. Living the company vision (E6) 32. Becoming a ‘trend setter’ leading the way in current markets (E6) 33. Shared values and beliefs that drive a ‘can do attitude’ (E6) 34. Reviewing performance (E6) 35. Implementing action plans to ensure alignment of business outcome (E6) 36. Accountability and ownership (E6) 37. Willingness for change (E7) 38. Motivated Staff (E7) 39. Knowledge of operating industry (E7) 40. Technically aligned (E7) 41. Leadership (E8) 42. Good information and reliable data (E8) 43. Good communication networks (E8) 44. Flexibility (E8) 45. Organisational culture (E9) 46. Human resources with relevant knowledge and skills (E9) 47. Purpose driven, strategically managed environment that empowers and drives innovation (E10) 48. Flexible (E11) 49. Open (E11) Appendix A Delphi Study Round One 295 50. Aware (E11) 51. Creative (E11) 52. Independent (E11) 53. Innovative (E12) 54. Willing to change (E12) 55. Flexible and open to new concepts and ideas (E12) 56. Awareness of external changes/shifts (E12) 57. Ability to perceive multiple future scenarios not obvious and based on current business trajectory (E12) 58. Ability to rapidly change skills or resources (E12) 59. Company culture that embraces change (E13) 60. Adequate resources (E13) 61. Unique knowledge and capabilities (E13) 62. Internal and external visibility (E13) 63. Strong Supply Chain relationships (E13) 64. Innovative Supply Chain partners (E13) 65. Positive attitude toward change as: • enterprise wide • employees not afraid of embracing change as a welcomed challenge • required to keep business going (E14) 66. Trust in Management (E14) 67. Openness about weaknesses or of what needs to improve (E14) 68. Passion and Pride for what the enterprise it is doing and achieving (E14) 69. No narrow personal goals of 'getting head', backstabbing etc. (E14) 70. Encouraged staff that will think outside the square (E14) 71. Confident and supportive managers to enable the new ideas to be explored (E14) 72. Staff happily take on any changes as they see the overall benefit of the change (E14) 73. An 'outside-in' mindset (E15) 74. Understand how the market works and how it can be influenced (E15) 75. Position itself in the market network (E15) 76. High productivity and low waste (E16) 77. Empowered employees (E16) 78. Integrated all parts are chasing the same vision (E16) 79. High quality decision making (E16) 80. React quickly and effectively to threats and opportunities (E16) 81. Create new value while following the principles of the organisation (E17) 82. All members involved in adapting (E17) 83. Flexibility (E18) 84. Innovative (E18) 85. Enterprising (E18) 86. Growth (E19) 87. Market position (E19) 88. Brand image (E19) 89. Staff that are: • flexible • adaptive Appendix A Delphi Study Round One 296 • focused • committed • responsible • self-starters • show initiative (E20) 90. Organisation recognises and rewards staff with above characteristics (E20) 91. Flexibility (E21) 92. Intelligence (E21) 93. A culture that values adaptively (E21) 94. Respond to changing customer demand (E22) 95. Innovate (E22) 96. Strong and timely communication (E23) 97. Defined corporate and business strategies (E23) 98. Strong leadership team (E23) 99. Research and innovation (E23) 100. Delegation and authority to adapt at various management levels (E23) 101. Proactive and motivated employees (E23) 102. Learning organisation (E23) 103. Forecast the future (E24) 104. 
Implement projects, programs and mechanisms to realise forecasts (E24) 105. Mechanisms that bring information quicker for decision making (E24) 106. Learning organisations (E24) 107. Learning culture requires sustained leadership (E24) 108. Mergers and acquisitions (E24) 109. Build core competencies (E24) 110. Value chain control (E24) 111. Information flows freely in all directions (E25) 112. People empowered to react to changes when needed (E25) 113. Leadership provides framework and guidance to signal direction of required change (E25) 114. Incentive structure rewards: • information sharing • collaboration • focused on outcomes 115. Procedures to support, rather than hinder, innovation and change (E25) 116. Flexible and agile to take advantage of changing environments or respond defensively to emerging threats (E26) 117. Entrepreneurial culture (E26) 118. Accepts change (E26) 119. Proactively experiments with new ideas, probing the environment through research and trials (E26) 120. Outstanding leadership (E27) 121. Innovation (E27) 122. Respect for clients/customer (E27) 123. Vision of place in society (E27) 124. Knowing why they exist (making a profit is a result not a reason) (E27) Appendix A Delphi Study Round One 297 125. Customer focused (E28) 126. Some members looking at the big picture, the potential future (E28) 127. Flexible staff that accept change (E28) 128. Low inventories (E29) 129. High productivity (E29) 130. Good cash flow (E29) 131. Products/services change to meet market demand (E29) 132. Loyal customer base (E29) 133. Understand the customer’s perception through market research (E30) 134. Lean manufacturing focus through reducing variety and cutting operating costs such as overheads (E30) 135. Decentralised structure allows for some autonomy from head office (E30) 136. Localised results influence head office strategic decisions (E30) 137. Sound capital base with adequate financial resources including cash flow (E30) Nuggets from Responses to Question 4 1. Educated staff (E1) 2. Visionary management (E1) 3. Ruthlessness (E2) 4. Lack of ethics (E2) 5. Leadership (E3) 6. Innovation (E3) 7. Standardisation (E3) 8. Invention (E3) 9. Innovative and open-minded people (E4) 10. Organisational system that is not rigid (E4) 11. Some risk taking behaviour (E4) 12. Encouragement for new ventures (E4) 13. Flexible HR policies (E5) 14. Managers with a strategic view (E5) 15. Managers and staff understand the market and industry and can anticipate change (E5) 16. Staff willing to show initiative (E5) 17. Management structure that allows staff initiative (E5) 18. Strong Leadership (E5) 19. Devolved decision making (E5) 20. Reward system that recognises adaptability (E5) 21. Good training and education policies (E5) 22. Structure that allows knowledge flow (E5) 23. Mechanisms to keep staff informed about strategic direction and key goals (E5) 24. Early adopters of technology (E5) 25. Effective marketing that enables keeping up with trends and changes (E6) 26. Quality and lean principles and processes to develop cost effective, quality outcomes (E6) 27. Remove costs and streamline processes to provide value to the customer (E6) 28. Developing people (E6) 29. Growing and guarding intellectual property (E6) Appendix A Delphi Study Round One 298 30. Fully fledged ERP system (E7) 31. IT staff knowledgeable about the business process and business requirements (E7) 32. Change Management policy (E7) 33. R&D division to analyse and report on external and internal environments (E7) 34. 
Sufficient resources: human, financial, materials etc. (E9) 35. Flexible systems and procedures that allow deviation from the usual product/service (E9) 36. Broad parameters of expected behaviours from personnel rather than rules that restrict behaviour (E9) 37. Support systems, development and training that encourage experimentation (E9) 38. Reward systems that acknowledge successful performance (E9) 39. Strategic planning and review (E10) 40. Empowerment (E10) 41. Correct measurement systems for critical success factors (E10) 42. Systems perspective (E10) 43. Talented staff (E11) 44. Great leadership (E11) 45. Flexible processes (E11) 46. Scalable open technology (E11) 47. Continuous feedback (E11) 48. Distributed change mechanisms (E11) 49. Strong external networks of relationships and contracts (E12) 50. Flexible labour practices that allows rapid lay-offs and hires (E12) 51. Highly-skilled workforce that can be redeployed (E12) 52. Strong understanding of enterprise’s business processes, routines, and capabilities (E12). 53. Leader and leadership team with vision, willing to take risks and lead a change in direction (E12) 54. Leader with persuasive skills to ensure employees agree with the change (E12) 55. Strong formal and informal, vertical and horizontal, communication networks (E12) 56. Strong inter-firm networks enabling access to resources and capabilities that can be utilised as and when required (E12). 57. Carefully selected trading partners that exhibit the ability to be adaptive (E12) 58. Management support (E13) 59. Performance measurement (E13) 60. Supply Chain setup and Supply Chain relationship management (E13) 61. Employee incentives (E13) 62. Internal and external communication (E13) 63. Good internal communication (E14) 64. Establish clarity of goals and the changes that are required (E14) 65. Performance measures which enable accurate monitoring of critical success factors (E14) 66. Reliable internal data: true indicator of departmental and staff performance (E14) 67. Reliable external data: true indicator of critical factors (E14) 68. Good strategic plan, reviewed regularly and adjusted if required (E14) 69. Correct tools and methodologies to implement changes throughout the enterprise and enable all staff to work in a new paradigm or manner (E14) 70. Sales and account management process connected to strategic management (E15) 71. R&D not performed in isolation done together with customers (E15) Appendix A Delphi Study Round One 299 72. Business networked view of value creation (E15) 73. Clearly articulated strategy (E16) 74. Good KPIs and incentives that encourage responsiveness and eliminate waste and overhead (E16) 75. Excellent communication (E16) 76. Shared understanding of both current situation and strategy (E16) 77. World class integrated IS (E16) 78. Employee access to real time, quality, role relevant, data (E16) 79. Culture that embraces change and welcomes new ideas (E16) 80. Best tools and equipment to support quick, effective, decision making by senior management and key decision makers (E17) 81. Equip all staff with the right tools and information (E17) 82. Staff trust management decisions (E17) 83. Continued education of employees (E17) 84. Sound: • structures • infrastructure • policies • procedures • governance • leadership (E18) 85. Capital both human and financial (E19) 86. Customer goodwill (E19) 87. Rewards and recognition based upon clearly understood organisational objectives and principles (E20) 88. 
Clear job description and induction process (E20) 89. Transparency of opportunities to grow and excel in the organisation (E20) 90. Open, top to bottom, lines of communication (E20) 91. Independence to make decisions within the staff members roles and responsibilities (E20) 92. Accept errors of judgement when made to progress the enterprise goals and within the accepted commercial opportunities and moral and ethical standards applied to all internal and external relationships (E20) 93. Responsive systems (E21) 94. People can cross organisational/functional boundaries (E21) 95. People have the ability to make decisions and the power to push them through (E21) 96. Decentralised structure (E22) 97. Good leadership (E22) 98. Responsible staff (E22) 99. Externally oriented information and knowledge seeking activities (E22) 100. Open innovation activities (E22) 101. Governance structure (E23) 102. Top executives well connected (E23) 103. Up to date ICT and knowledge management infrastructure (E23) 104. Continuous improvement (E23) 105. Continuous development of products and services (E23) 106. Size matters - small versus large (E23) 107. Amount of resources: people, money and technology etc. (E23) Appendix A Delphi Study Round One 300 108. Industry sector - highly regulated versus non-regulated (E23) 109. Ownership - Government, public and private (E23) 110. Mechanisms for increased and effect decision making (E24) 111. Funding structure allocations to initiatives to adapt (E24) 112. Strong balance sheet (E24) 113. Ability to raise capital (E24) 114. Up to date IT systems (E24) 115. Linked strategy, enterprise architecture and programs of work (E24) 116. Strategy, enterprise architecture and work programs linkages enforced through financial and managerial sign-offs (E24) 117. Business cases (external and internal) to justify certain decisions and courses of action (E24) 118. New investments need information management plan for sign-off (E24) 119. Consultant reports to endorse and support leadership decisions (E24) 120. Flexible processes (E25) 121. Open policies for flexibility (E25) 122. Resources allocated to change management (E25) 123. Resources allocated to projects aligned with new direction and goals (E25) 124. Analyses supports and justifies changes (E25) 125. Feedback loops and performance measurement systems (E25) 126. Capable IS and technical people expertise (E25) 127. Business processes that are understood and easily adjusted (E25) 128. Entrepreneurial rather than bureaucratic (E26) 129. Innovation as a core competency (E26) 130. Information management capability (E26) 131. Development of people skills and ability (E26) 132. Advanced Supply Chain management systems (E26) 133. Knowledge management as a core competency (E26) 134. Highly developed communication and collaboration systems (E26) 135. Fast transaction speed (real time) (E26) 136. Fast operational processes (E26) 137. How they do things: • establish strategies to achieve vision • employees believe in the vision (E27) 138. Openness and transparency (E27) 139. Respect for the customer/client (E27) 140. Introduction of new technologies (E29) 141. Enterprise system (E29) 142. IT infrastructure that supports information and process requirements (E30) 143. Access to sufficient local and head office financial resources to fund enterprise wide development (E30) 144. Agility to enable quick response (E30) Nuggets from Responses to Question 5 1. Vision and strategy is communicated throughout enterprise (E1) 2. 
Staff given autonomy to execute vision and strategy (E1) 3. By considering only the principals of the organisation and not other stakeholders (E2) Appendix A Delphi Study Round One 301 4. Process orientation (E3) 5. Service orientation (E3) 6. Management (E3) 7. Governance (E3) 8. Innovative, open-minded people craft the ideas for moving enterprise forward and able to utilize a flexible organisational system to engage in new activities (E4) 9. Changes in demand, industry structure, and technology are seen by outward focused staff and managers with a strategic view (E5) 10. Encourage staff initiative so they inform managers of possible changes coming (E5) 11. Managers ability to integrate information from different sources, analyse it, and foresee changes (E5) 12. Strong leadership (E5) 13. Open information flow allow managers to keep staff informed of changes ahead (E5) 14. Staff encouraged to use initiative to generate ways to adapt and change (E5) 15. Training and education policies allowing staff to be retrained easily (E5) 16. Flexible HR policies allow staff to change roles easily (E5) 17. Change seen as positive (E5) 18. Staff are encouraged through incentives to accept change (E5) 19. Trust between managers and staff reduce negative outcomes of change (E5) 20. Vision, beliefs and values aligned to the business plan (E6) 21. Measure business plan performance and key milestones by short-term incentives or key objectives (E6) 22. Constantly review alignment of business plans to identify performance gap (E6) 23. Eliminate performance gaps by implementing realignment plans (E6) 24. Research and analysis, reports on external and internal anticipated environment changes (E7) 25. Senior management respond appropriately and take necessary measures to change (E7) 26. Change responses effectively communicated to all levels (E7) 27. High staff commitment, motivation, willingness to adapt (E7) 28. Change management generally initiated through HR acting as the change agent (E7) 29. Customisable ERP system (E7) 30. ERP data warehouse and business intelligence (E7) 31. People require the right: • environment/culture • necessary resources • required ability/skills • opportunity • flexible systems • lack of penalties • provision of rewards to encourage innovation, which is more than just adapting (E9) 29. Systems perspective promulgated to all (E10) 30. Framework for strategy development, implementation and monitoring (E10) 31. Employee buy-in to the organisational system’s purpose and objective (E10) 32. Employees empowered and resourced appropriately to perform role, and fix the system (E10) 33. Empowerment allows for honest, integrated, feedback loops (E10) 34. Measurement system that ensures proper monitoring and control (E10) 35. No blame culture (E10) 36. Feedback that informs (E11) 37. Flexibility allows early change adoption (E11) Appendix A Delphi Study Round One 302 38. Leadership encourages innovation and change (E11) 39. Open scalable technology supports rapid change (E11) 40. External information sensed and shared within the enterprise using dense communication channels (E12) 41. Leadership interpret information and beliefs to develop a new vision and strategy (E12) 42. Leadership communicate vision and strategy persuasively to individuals and teams (E12) 43. Teams analyse and rationalise existing processes, routines and capabilities (E12) 44. Acquire, or redevelop, required capabilities by redeploying workforce and technology (E12) 45. 
Changes in direction can be ‘leaked’ through inter-enterprise allowing a network-wide response to emerge through: • linkages • explained changes to contracts • new contracts • information and communication between individuals and different Supply Chain/partners (E13) 46. Assess external and internal requirements and the enterprise’s ability to fulfil these requirements (E13) 47. Derive plans and targets to: • drive strategic and operational decisions • manage appropriate responses to competitors (E13) 48. Strategic planning that defines and clarifies enterprise goals (E14) 49. Staff encouraged to think outside the square to help shape the strategy (E14) 50. Effective communication to ensure employees understand expected performance to achieve strategic goals (E14) 51. Positive staff attitude towards change ensures employees are communicating and implementing the plan (E14) 52. Performance measures to monitor how everyone is progressing towards goals (E14) 53. Passion and pride in the enterprise ensures staff perform well (E14) 54. Reliable data ensures performance measures are correctly represented (E14) 55. Trust in management ensures staff are not afraid to report data correctly and they won’t hide mistakes (E14) 56. Correct change methodologies ensure change gets done and implemented (E14) Happy staff that feel valued important to realising change (E14) 57. 'lego-block' modular business model set-up enables a quick change business model (E15) 58. Cross-functional processes hinder functions becoming isolated (E15) 59. Strategy initially sets the direction (E16) 60. Strategically aligned KPIs and incentives to ensure behaviours that realise the strategic goals (E16) 61. Effective communication so everyone understands the strategy, their KPIs and how everything fits together (E16) 62. IS provide necessary performance measurement information (E16) 63. IS facilitate effective communication (E16) 64. Culture that embraces change ensures employees receptive to change, improvements and changes in KPIs when necessary (E16) 65. Fewer adaptive decisions the better (E17) 66. It is the role of senior management to make adaptive decisions (E17) 67. Systems exist that can accurately assist and communicate adaptive decisions to all staff members (E17) 68. Everyone informed and educated as to reasons for decisions (E17) 69. Core structures allow for assessment of enterprise key strengths (E18) Appendix A Delphi Study Round One 303 70. Good leadership and governance to guide how best to adapt (E18) 71. Starts with a demand, fulfilled by the application of capital and encouraged through good will (E19) 72. Starts with initiation of a new employee (E20) 73. Employees inducted via a comprehensive program describing the ethos of the organisation and an individual’s role in the organisational system (E20) 74. Employees gain experience and knowledge of the organisation and the behaviours of peers, which motivates behaviour consistent with roles and the ethos of the organisation (E20) 75. Employees not only told about their work environment but observe the elements in action (E20) 76. Consistency from the CEO down is the organisational model and ethos. (E20) 77. Responsive systems allow the free flow of information between the external environment and within the enterprise (E21) 78. Key people are able to use and share information to make the necessary changes in the processes and products (E21) 79. Resource levels can effect speed of information exchange (E21) 80. 
Decentralised structure enables different actors, people, customers, suppliers and partners to interact (E22) 81. Activities: • Interaction • Innovation • market facing • change activities • all address business model change (E22) 82. Resources, tools etc. (E22) 83. Competence (E22) 84. Leadership qualities (E22) 85. Decision making (E22) 86. Responsibility (E22) 87. Structure made up of actors, which in turn enable and carry out activities. (E22) 88. The activities are facilitated by resources and competence (E22) 89. Top management continuously scan the environment and market forces such as: • new and emerging technologies • competitive products/services • stakeholders' influences etc. (E23) 90. Top management prepare strategic plan that includes: • Response and adaptation plan • communicate plan to all levels • allocate resources • undertake projects (E23) 91. Strategy, architecture, and program of work management funded by a capital allocation (E24) 92. Processes work together to ensure a mechanism of adaptiveness (E24) 93. Access to capital on a wider scale is essential (E24) 94. Architecture is not IT oriented in many cases (E24) 95. Enterprise is an integrated living system (E25) 96. Information is shared and aligns the different organisation parts that all focused on the same goals (E25) 97. Data to information to analysis enable organisation to learn, establish new directions and new ways of doing things (E25) 98. Leverage tangible and intangible assets by developing organisational capability (E26) Appendix A Delphi Study Round One 304 99. Enhanced capability leads to development of a core competence in positive reaction time to changing environments, which may lead to competitive advantage (E26) 100. Elements of vision, strategy, openness, transparency, respect for the customer work together to create a cohesive environment, participants feel part of the enterprise and believe in the vision (E27) 101. Teamwork is essential (E28) 102. Trust management’s decisions to change things (E28) 103. A continuous improvement culture is necessary (E28) 104. Introduced new technologies to remove all forms of waste (E29) 105. IT system is used to plan, manage work flows and minimise waste (E29) 106. IT system effectively manages technologies and processes to produce only what is demanded (E29) 107. Understand customers’ needs through access to market information (E30) 108. Constant efforts to minimise operating costs to remain competitive (E30) 109. Flexible HR (E30) 110. IT infrastructure to support changing business requirements, deliver operational cost reduction along with process flexibility (E30) 111. Access to capital to fund the necessary organisational changes. Nuggets from Responses to Question 6 1. Managers constantly reassess organisations position in the market/industry (E5) 2. Detect change early and proactively adapt (E5) 3. Continually simplify business processes to enable quick change and adaption (E7) 4. Attract more customers as processes become easier for customers (E7) 5. Most important assets don't show on the balance sheet; they are called people (E10) 6. Culture fosters innovation and openness (E11) 7. Ability of external partners to change (E12) 8. Ability of the firm to locate new effective partners able to complement the firm as it changes (E12) 9. Strong Supply Chains; most product/service value is derived through supply chains (E13) 10. Selection of strong and innovative partners as well as Supply Chain relationship management (E13) 11. 
Agile Supply Chains (E13) 12. Ability to adapt is present at all levels of the enterprise (E14) 13. To realise vision lower managers and employees must embrace change rather than being scared and resistant to change (E14) 14. Enablers in place to keep the system fluid so change is natural and not a threat (E14) 15. Top managers that believe in change and are ready to make big changes to the existing business model (E15) 16. Evolved from 'old school' to 'new school' to something completely different today (E17) 17. Analyse and predict what will be next (E17) 18. External influences force the enterprise to change (E18) 19. Scenario planning to prepare for external and internal environment change (E19) 20. Not all ideas come from the top or research and development (E20) 21. Encourage innovation by making time and resources available to: • try new ideas • develop new procedures • demonstrate different work flows (E20) 22. Climate that is open to all ideas, even half-baked ideas, is healthy (E20) 23. Sub-contractors, wholesalers, sales outlets, and customers all have a role in keeping the enterprise innovative, competitive and healthy (E20) 24. Encourage staff to work outside the enterprise (E20) 25. Encourage external stakeholders to provide input (E20) 26. Embrace every opportunity to understand the commercial environment (E20) 27. Developing strategies and techniques to understand the need to adapt (E20) 28. Understand strategic learning cycles (E24) 29. Find value proposition to gain: • a unique position • competitive advantage • superior profits • long term market positions (E24) 30. Enterprises will go through a three-phase transition: 1) letting go of the 'old' 2) entering a 'neutral zone' of uncertainty, experimentation and trial 3) finding the 'new' way forward (E26) 31. Speed at which an enterprise can manage its way through the 3-phase transition can be the determinant of being truly adaptive (E26) 32. Important characteristics are: • vision • strategy • processes • organisational openness • transparency • respect for customer (E27) 33. Enterprise adaptation is continuous (E28) 34. Planning at all enterprise levels is critical although there is also a requirement to be agile (E30) 35. Implementing agile response and change successfully is always challenging (E30) 36. Need to align and bring together many different organisational and external elements (E30)
Appendix A8 Mind Maps
How does an enterprise adapt to its Environment?
What are the Key CHARACTERISTICS of an adaptive enterprise?
What are the core Elements that enable an enterprise to be adaptive?
How do these core ELEMENTS work together to enable an enterprise to be adaptive?
Appendix B Delphi Study Round Two
Appendix B1 Invitation and Instructions for Pilot Tests
Together with my Ph.D. supervisor, Dr. David Sundaram, I am investigating adaptive enterprises (AE) using the Delphi method of inquiry followed by survey research. The multi-round Delphi study has been designed mainly for exploratory purposes. It will be administered online with a longitudinal questionnaire to be amended at least twice, and I have developed the preliminary first round questionnaire. Delphi Study Procedures The Delphi Study is an iterative process using several rounds of questionnaires to engage with a selected group of experts to generate insights about AE. The participants of the Delphi are all experts in the area of AE, albeit with different backgrounds, and will be grouped into panels representing their primary background. Round One is the initial questionnaire and explores the practices, processes, characteristics and elements of an AE. The findings of Round One will be thematically analysed and become the input to the second round, which involves ranking and rating, according to agreement, by the experts. Round Two and the subsequent Delphi rounds will allow the experts to re-evaluate and refine the questionnaire items that did not reach a sufficient level of consensus in the preceding round. Round One Feedback This is the second pilot test for Round One. After considering all the feedback from the first test, the questionnaire was revised accordingly. We would greatly appreciate it if we could arrange a time to meet so you can complete Round One of the Delphi and provide face-to-face feedback on any aspect of the revised questionnaire and in particular on the following: • Purpose: do you think the questionnaire will achieve the desired outcomes? • Form: are the questions well-articulated? • Understanding: are the instructions and explanations clear? • Ease of use: are there any problems using the questionnaire? • Are there any important details missing? • Is there anything that should be removed? • Any suggestions for improvement to the email invitation to participants (see below). • Any other comments? For your information, we have included the Participant Information Sheet, which will be provided to the participants who are considered to be experts in the area of AE. Also, here is the link to Round One of the Delphi study. We look forward to hearing from you within the next two weeks to arrange a suitable time to meet. Thank you Gabrielle Peko and David Sundaram
Appendix B2 Invitation to Participate
Dear Participant I would like to invite you to participate in the 2nd round of my Delphi survey on Adaptive Enterprises. This research seeks to define and model an Adaptive Enterprise through a multi-method approach encompassing opinions from industry experts, academics, as well as representatives from governmental and non-governmental organisations. Thanks to the kind support of all participants, many valuable results/insights could be gained from the 1st round. Several months were spent analysing and sorting the answers given.
Based on this analysis, this 2nd round was designed with the following key purposes in mind: • Propose, refine and evaluate models that are based on the answers given in round 1 • Evaluate and rank key themes, pressing issues, and concerns You have been identified as an expert in the field under investigation and I am very interested in your opinions and insights. Please help me by clicking the following URL to access the survey: https://www.surveymonkey.com/s/2FV8RQL The survey will take approximately 20 minutes. I would greatly appreciate it if you could complete the survey by the 10th March. Participation is voluntary and you may decline to take part without giving a reason. All your responses are completely confidential. The Participant Information Sheet is attached to this message. Thank you in advance for your ongoing help and support. If you have any questions about this survey, please email me directly at g.peko@auckland.ac.nz Kind regards Gabrielle Peko ISOM Department The University of Auckland Owen G Glen Building 12 Grafton Road Private Bag 92019 Auckland NEW ZEALAND
Appendix B3 Round Two Questionnaire
Appendix B4 Ratings and Consensus Analysis
Qn No Question Mean SD 51% 80% Consensus 1 Establishes strategies by planning and review that are clearly articulated to achieve the organisation's vision. 4.04 1.00 39% 75% N 2 Has clear strategic goals, objectives and focus (business outcomes that meet stakeholder or shareholder expectations). 4.32 0.90 54% 86% Y 3 Has enterprise-wide development of strategy that incorporates more than just senior management's trusted strategies and stated goals. 4.11 0.69 54% 82% Y 4 Has a purely proactive approach to strategy development that anticipates environmental changes and potentially leads the market in innovation and practices. 3.54 1.07 39% 64% N 5 Has a purely reactive approach to strategy development where incremental changes are used to meet current and short-term future requirements. 2.68 1.12 32% 64% N 6 Has both proactive and reactive approaches to strategy development (evaluating if need to innovate, need to follow). 4.14 0.89 43% 75% N 7 Has adaptation processes that are strategically planned and designed. 3.82 1.02 46% 71% N 8 Has a shared organisational vision with tangible outcomes. 4.32 0.82 50% 86% Y 9 Has effective leadership to realise opportunities. 4.71 0.46 71% 100% Y 10 Has 'old school' command and control management structures. 1.89 0.96 46% 68% N 11 Has 'new school' participative and collaborative management structures.
4.04 0.84 54% 82% Y 12 Has a decentralised structure that allows the organisation's functions to interact with the external environment. 3.79 0.74 54% 82% Y 13 Has regional adaptation that is delegated but head office monitors for control. 3.79 0.88 50% 79% N 14 Has senior management who are sensitive to environmental changes. 4.46 0.74 57% 93% Y 15 Has organisational freedom to innovate. 4.46 0.69 57% 89% Y 16 Has adaptation parameters that are delegated to all management levels with the authority to adapt. 4.29 0.71 43% 86% Y 17 Has an organisational culture that embraces change. 4.57 0.57 61% 96% Y 18 Has an open culture that seeks and trusts the opinions of organisational participants. 4.36 0.73 50% 86% Y 19 Has macro level restructuring, creation of roles and rules at the meso (middle) level, and mechanisms in place to support adaptation at the micro level. 3.43 1.20 29% 57% N 20 Has business process management (BPM) that strategically plans and designs cross-functional processes. 3.89 0.88 46% 71% N 21 Has processes that are accessible and transparent (process visibility). 4.25 0.75 43% 82% Y 22 Has a shared understanding and knowledge among all enterprise participants about processes in relation to business requirements. 4.07 0.77 43% 75% N 23 Has the capability to change its processes in order for the enterprise to adapt. 4.54 0.69 61% 96% Y Appendix B Delphi Study Round Two 355 24 Has streamlined processes to remove costs and provide value to customers. 4.18 0.86 46% 86% Y 25 Has efficient, standardised processes at the operational level. 4.04 0.88 46% 82% Y 26 Has flexibility in the way it conducts its core business. 4.18 0.90 46% 86% Y 27 Has intra-firm (within firm) processes that give access to resources and capabilities to be utilised as and when required. 4.07 0.72 50% 79% N 28 Has inter-firm (between firms) processes that give access to resources and capabilities to be utilised as and when required. 3.93 0.86 50% 75% N 29 Has the requisite information and communication technology (ICT) infrastructure to be able to respond to changing demands. 4.64 0.56 68% 96% Y 30 Has integrated enterprise-wide information systems and data that is accessible by all employees. 4.18 0.98 54% 68% N 31 Has both internal and external reliable, real time data. 4.32 0.72 46% 86% Y 32 Has advanced business network systems such as a Supply Chain Management system. 4.07 0.86 50% 82% Y 33 Has externally orientated knowledge and information seeking and capture activities. 4.18 0.86 43% 79% N 34 Has information and knowledge management as a capability. 4.25 0.75 50% 89% Y 35 Is consistently an early adopter of technology. 3.18 0.90 39% 68% N 36 Has the requirement that information technology investments must have a supporting information management plan. 3.96 0.84 43% 71% N 37 The structural elements help an enterprise to be adaptive. 4.11 0.63 61% 86% Y 38 The behavioural flows help an enterprise to be adaptive. 4.21 0.83 57% 93% Y 39 Monitor internal and external environment to detect changes that are then captured, shared and subsumed by the enterprise. 4.43 0.69 54% 89% Y 40 Performance Measures (KPIs) which enable monitoring and incentives that encourage responsiveness. 4.07 0.86 50% 82% Y 41 Analysis and reporting on the internal and external environment of the organisation. 3.93 0.66 57% 82% Y 42 Formal and informal, vertical and horizontal, internal and external communication networks. 4.54 0.58 57% 96% Y 43 Flexible policies for each part of the organisation that allow it to change. 
4.11 0.96 39% 79% N 44 Broad parameters of expected behaviours rather than rules. 4.04 0.69 54% 79% Y 45 Decision making freedom according to role and responsibility. 4.54 0.64 61% 93% Y 46 Staff are given autonomy to make changes as required. 3.68 0.82 46% 79% N 47 Decisions are enforced through financial and managerial sign offs. 3.04 1.00 36% 64% N 48 Encouragement for risk-taking behaviour in new ventures. 3.57 0.92 50% 79% N 49 Support systems to encourage experimentation. 3.96 1.04 39% 68% N 50 Skilled workforce that can be redeployed. 4.18 0.82 43% 82% Y 51 Liberal human resource policy to allow for staff rationalisation. 3.29 1.05 43% 71% N Appendix B Delphi Study Round Two 356 52 Link and align enterprise architecture (processes and IT infrastructure) and work programmes with the organisation's strategy. 4.14 0.80 39% 75% N 53 Have a system’s perspective to manage change while taking into account the organisation's capabilities. 4.25 0.75 43% 82% Y 54 Bring about incremental change through realignment of strategy, resources, processes and technologies. 3.82 0.77 50% 79% N 55 People have an organisation-wide understanding in regard to the work they perform. 4.00 0.77 54% 79% Y 56 Review of performance and implementation of action plans that align with desired business outcomes. 3.93 0.86 50% 75% N 57 A network view of value creation since an enterprise does not operate in isolation. 4.18 0.67 54% 86% Y 58 Careful selection of trading partners. 4.04 0.88 46% 79% N 59 Appropriate tools and methodologies to implement change. 4.32 0.77 46% 89% Y 60 Staff have access to appropriate, real-time data. 4.29 0.71 43% 86% Y 61 Proactively explore new options and technologies to construct systems for a future environment. 4.11 0.83 43% 79% N 62 Leverage business intelligence to form future scenarios that are not obvious based on current business trajectory. 4.14 1.01 46% 75% N 63 A culture that engenders employee engagement, motivation, and pride in the organisation. 4.64 0.68 71% 96% Y 64 Leaders constantly encourage change rather than having a 'business-as-usual' mentality. 4.21 0.74 43% 82% Y 65 Leaders have persuasive skills to ensure employees can be ‘brought on board’ with the change. 4.00 1.15 43% 75% N 66 Employees who embrace change. 4.16 0.75 43% 82% Y 67 Employees are proactively involved in organisational change. 4.39 0.57 54% 96% Y 68 People have the ability, and power and authority, to make and implement decisions. 4.21 0.63 57% 89% Y 69 Political adeptness to know when to use qualified support such as consultants to achieve outcomes beneficial to the organisation. 3.64 0.83 43% 79% N 70 Lack of understanding about environment. 4.50 0.64 57% 93% Y 71 Limited by only focussing on the current business environment. 3.89 0.74 46% 79% N 72 Lack of influence over external environmental factors. 2.93 1.02 43% 68% N 73 Over regulated and restrictive internal and external environments. 4.11 0.88 39% 75% N 74 Lack of required resources to implement change. 3.93 0.90 32% 64% N 75 Insufficient and misdirected capital expenditure. 4.00 0.98 39% 75% N 76 Non-strategic rationalisation of staff and equipment. 4.04 0.84 43% 75% N 77 Reactive HR policies such as staff layoffs or limiting incentives in times of change. 4.00 0.94 46% 79% N 78 Not connected to effective business networks such as having a fractured supply chain. 4.04 0.92 39% 75% N 79 Lack of general understanding of the business processes, routines and capabilities. 4.07 0.90 43% 79% N 80 Insufficient accountability and ownership. 
4.25 0.97 54% 79% Y
81 Incomprehensive risk management. 3.75 0.84 57% 75% N
82 Ruthless business practices. 3.54 1.14 46% 64% N
83 A shared living vision. 4.11 0.99 43% 75% N
84 Core organisational values and beliefs that influence behaviours. 4.29 0.66 50% 89% Y
85 Sustainable business practices. 3.75 1.04 57% 75% N
86 Ethical business practices. 3.93 1.15 39% 75% N
87 Compliant and operates in the spirit of the law. 3.79 1.10 64% 82% Y
88 Engenders trust between internal and external stakeholders. 4.29 0.94 50% 86% Y
89 Customer focused. 4.32 0.94 54% 86% Y
90 Strategic – understands the "big picture". 4.32 0.72 50% 93% Y
91 Visionary leader willing to take risks and lead organisation in new direction. 4.39 0.69 50% 89% Y
92 Strong leadership team. 4.36 0.68 46% 89% Y
93 Robust management that involves both a risk and evaluative approach. 4.11 0.79 39% 75% N
94 Good governance. 4.14 0.85 39% 79% N
95 Innovative. 4.36 0.56 57% 96% Y
96 The organisation constantly learns and transfers knowledge. 4.64 0.56 68% 96% Y
97 A lean organisational philosophy. 3.54 0.74 57% 86% Y
98 Empowerment that supports change. 4.25 0.65 54% 89% Y
99 Constant awareness of internal and external environments. 4.11 0.74 46% 79% N
100 Aware and open about the organisation's strengths and weaknesses. 4.11 0.69 54% 82% Y
101 Possesses capabilities and core competencies that are unique. 3.46 1.07 39% 71% N
102 Ability to change as required. 4.46 0.74 57% 93% Y
103 Agile organisation that responds quickly and effectively to change. 4.50 0.51 50% 100% Y
104 Maintains robust, collaborative, stakeholder relationships. 4.18 0.67 54% 86% Y
105 Promotes and maintains advantageous business networks. 4.11 0.63 61% 86% Y
106 The enterprise is an holistic, integrated, system. 4.29 0.76 46% 89% Y
107 Which approach when planning, developing, and managing STRATEGY is best for an Adaptive Enterprise? 3.07 0.47 79% 93% Y
108 Which approach when planning, developing, and managing ORGANISATION is best for an Adaptive Enterprise? 2.96 0.33 89% 96% Y
109 Which approach when planning, developing, and managing PROCESS is best for an Adaptive Enterprise? 2.71 0.85 64% 82% Y
110 Which approach when planning, developing, and managing INFORMATION is best for an Adaptive Enterprise? 3.07 0.38 86% 96% Y
111 Could this transformation cycle enable an enterprise to transform and become adaptive? 4.04 0.58 68% 86% Y
112 Do you agree with the relationship between the components of the Adaptive Enterprise Transformation Cycle? 4.00 0.61 64% 82% Y

Appendix C Delphi Study Round Three

Appendix C1 Invitation to Participate

Dear Participant

I would like to invite you to participate in the 3rd and final round of my Delphi study on Adaptive Enterprises (AE). This research seeks to define and model AE through a multi-method approach encompassing the opinions of industry experts, academics, and representatives from governmental and non-governmental organisations. Thanks to the kind support of all participants, many valuable results and insights were gained from the 2nd round. Several weeks were spent analysing and sorting the responses. Based on this analysis, the 3rd round was designed with the following key purpose in mind:
• Re-rating and re-evaluation of AE key themes/aspects where consensus was not achieved in round 2.
You have been identified as an expert in the field under investigation and I am very interested in your opinions and insights. Please use the following URL to access the survey: https://www.surveymonkey.com/s/HCNHKQL It should take only 10-15 minutes. All your responses are completely confidential. The Participant Information Sheet is attached for your information. Thank you in advance for your ongoing help and support; I greatly appreciate it. If you have any questions about this survey, please feel free to contact me at g.peko@auckland.ac.nz

Kind regards
Gabrielle Peko
ISOM Department
The University of Auckland
Owen G Glen Building
12 Grafton Road
Private Bag 92019
Auckland

Appendix C2 Round Three Questionnaire

Appendix C3 Ratings and Consensus Analysis

Qn No Round 2 | Qn No Round 3 | Question (Round 2 Mean Rating and Standard Deviation shown in parentheses) | Mean | SD | 51% | 80% | Consensus
1 1 Establishes strategies by planning and review that are clearly articulated to achieve the organisation's vision. (Mean Rating = 4.04; Standard Deviation = 1.00) 4.43 0.69 50% 96% Y
4 2 Has BOTH proactive and reactive approaches to strategy development (evaluating if need to innovate, need to follow). (Mean Rating = 4.14; Standard Deviation = 0.89) 4.36 0.62 50% 93% Y
5 3 Has ONLY a proactive approach to strategy development that anticipates environmental changes and potentially leads the market in innovation and practices. (Mean Rating = 3.54; Standard Deviation = 1.07) 2.46 0.84 43% 79% N
6 4 Has ONLY a reactive approach to strategy development where incremental changes are used to meet current and short-term future requirements. (Mean Rating = 2.68; Standard Deviation = 1.12) 2.07 0.94 54% 79% Y
7 5 Has strategically planned processes for adaptation that are implemented throughout the organisation. (Mean Rating = 3.82; Standard Deviation = 1.02) 4.04 0.64 61% 82% Y
10 6 Has 'old school' command and control management structures. (Mean Rating = 1.89; Standard Deviation = 0.96) 1.54 0.64 54% 93% Y
13 7 Has regional adaptation that is delegated but head office monitors for control. (Mean Rating = 3.79; Standard Deviation = 0.88) 3.75 0.84 64% 82% Y
19 8 Restructuring at the macro level is enabled by the micro level creation of roles and rules, and the mechanisms in place to support the change. (Mean Rating = 3.43; Standard Deviation = 1.20) 3.61 0.74 54% 86% Y
20 9 Has business process management (BPM) that strategically plans and designs cross-functional processes. (Mean Rating = 3.89; Standard Deviation = 0.88) 4.21 0.83 57% 93% Y
22 10 Has a shared understanding and knowledge among all enterprise participants about processes in relation to business requirements. (Mean Rating = 4.07; Standard Deviation = 0.77) 4.32 0.61 54% 93% Y
27 11 Has intra-firm (within firm) processes that give access to resources and capabilities to be utilised as and when required. (Mean Rating = 4.07; Standard Deviation = 0.72) 4.32 0.55 61% 96% Y
28 12 Has inter-firm (between firms) processes that give access to resources and capabilities to be utilised as and when required. (Mean Rating = 3.93; Standard Deviation = 0.86) 3.96 0.79 61% 82% Y
30 13 Has enterprise-wide integrated information systems and data that employees can access to perform their roles. (Mean Rating = 4.18; Standard Deviation = 0.98) 4.75 0.44 75% 100% Y
33 14 Has externally orientated knowledge and information seeking and capture activities. (Mean Rating = 4.18; Standard Deviation = 0.86) 4.39 0.63 46% 93% Y
35 15 Is consistently an early adopter of technology. (Mean Rating = 3.18; Standard Deviation = 0.90) 3.21 0.74 64% 89% Y
36 16 Has the requirement that information technology investments must have a supporting information management plan. (Mean Rating = 3.96; Standard Deviation = 0.84) 4.11 0.69 64% 89% Y
43 17 Flexible policies for each part of the organisation that allow it to change. (Mean Rating = 4.11; Standard Deviation = 0.96) 4.29 0.46 71% 100% Y
46 18 Staff are given autonomy to make necessary changes when required. (Mean Rating = 3.68; Standard Deviation = 0.82) 3.93 0.66 57% 82% Y
47 19 Strategic objectives are enabled by financial and managerial sign offs. (Mean Rating = 3.04; Standard Deviation = 1.00) 3.43 0.92 46% 79% N
48 20 Encouragement for risk-taking behaviour in new ventures. (Mean Rating = 3.57; Standard Deviation = 0.92) 3.82 0.77 50% 79% N
49 21 Support systems to encourage experimentation. (Mean Rating = 3.96; Standard Deviation = 1.04) 4.21 0.69 50% 86% Y
51 22 Liberal human resource policy to allow for staff rationalisation. (Mean Rating = 3.29; Standard Deviation = 1.05) 3.50 0.75 43% 86% Y
52 23 Link and align enterprise architecture (processes and IT infrastructure) and work programmes with the organisation's strategy. (Mean Rating = 4.14; Standard Deviation = 0.80) 4.32 0.77 46% 89% Y
54 24 Bring about incremental change through realignment of strategy, resources, processes and technologies. (Mean Rating = 3.82; Standard Deviation = 0.77) 4.00 0.77 43% 71% N
56 25 Review of performance and implementation of action plans that align with desired business outcomes. (Mean Rating = 3.93; Standard Deviation = 0.86) 4.25 0.65 64% 96% Y
58 26 Careful selection of trading partners. (Mean Rating = 4.04; Standard Deviation = 0.88) 4.18 0.55 68% 93% Y
61 27 Proactively explore new options and technologies to construct systems for a future environment. (Mean Rating = 4.11; Standard Deviation = 0.83) 4.29 0.53 64% 96% Y
62 28 Leverage business intelligence to form future scenarios that are not obvious based on current business trajectory. (Mean Rating = 4.14; Standard Deviation = 1.01) 4.18 0.94 50% 89% Y
65 29 Leaders have persuasive skills to ensure employees can be ‘brought on board’ with the change. (Mean Rating = 4.00; Standard Deviation = 1.15) 4.54 0.51 54% 100% Y
69 30 Political adeptness to know when to use qualified support such as consultants to achieve outcomes beneficial to the organisation. (Mean Rating = 3.64; Standard Deviation = 0.83) 3.89 0.63 61% 86% Y
71 31 Limited by only focussing on the current business environment. (Mean Rating = 3.89; Standard Deviation = 0.74) 4.14 0.65 57% 86% Y
72 32 Lack of influence over external environment factors. (Mean Rating = 2.93; Standard Deviation = 1.02) 2.89 0.88 36% 68% N
73 33 Over regulated and restrictive internal and external environments. (Mean Rating = 4.11; Standard Deviation = 0.88) 4.36 0.78 50% 89% Y
74 34 Lack of required resources to implement change. (Mean Rating = 3.93; Standard Deviation = 0.90) 4.25 0.59 61% 93% Y
75 35 Insufficient and misdirected capital expenditure. (Mean Rating = 4.00; Standard Deviation = 0.98) 4.25 0.52 68% 96% Y
76 36 Non-strategic rationalisation of staff and equipment. (Mean Rating = 4.04; Standard Deviation = 0.84) 4.29 0.66 50% 89% Y
77 37 Reactive HR policies such as staff layoffs or limiting incentives in times of change. (Mean Rating = 4.00; Standard Deviation = 0.94) 4.14 0.65 57% 86% Y
78 38 Not connected to effective business networks such as having a fractured supply chain. (Mean Rating = 4.04; Standard Deviation = 0.92) 4.25 0.59 61% 93% Y
79 39 Lack of general understanding of the business processes, routines and capabilities. (Mean Rating = 4.07; Standard Deviation = 0.90) 4.39 0.74 50% 93% Y
81 40 Lack of a comprehensive risk management strategy. (Mean Rating = 3.75; Standard Deviation = 0.84) 3.75 0.70 57% 86% Y
82 41 Ruthless business practices. (Mean Rating = 3.54; Standard Deviation = 1.14) 3.61 1.23 32% 61% N
83 42 A shared living vision. (Mean Rating = 4.11; Standard Deviation = 0.99) 4.46 0.64 54% 93% Y
85 43 Sustainable business practices. (Mean Rating = 3.75; Standard Deviation = 1.04) 3.89 0.74 68% 82% Y
86 44 Ethical business practices. (Mean Rating = 3.93; Standard Deviation = 1.15) 4.00 0.94 54% 82% Y
93 45 Robust management that involves both a risk and evaluative approach. (Mean Rating = 4.11; Standard Deviation = 0.79) 4.36 0.49 64% 100% Y
94 46 Good governance. (Mean Rating = 4.14; Standard Deviation = 0.85) 4.54 0.58 57% 96% Y
99 47 Constant awareness of internal and external environments. (Mean Rating = 4.11; Standard Deviation = 0.74) 4.57 0.57 61% 96% Y
101 48 Possesses capabilities and core competencies that are unique. (Mean Rating = 3.46; Standard Deviation = 1.07) 3.57 0.88 39% 75% N

Appendix D Survey Questionnaire

Appendix D1 Participant Information Sheet

Appendix D2 Pilot Test Invitation and Instructions

Dear Tester

Thank you for supporting this research project on Adaptive Enterprises. Your input as a pilot tester will help to improve the survey design and make it as straightforward as possible for the respondents, given the survey objectives.

Study Background: We have conducted an exploratory Delphi study on Adaptive Enterprises (AE). We gained good insights from this Delphi study and designed several models based on these results. To further enhance our understanding of AE we have designed a survey that is meant to confirm the results we gained through the Delphi and to validate the applicability of the developed models. We will be targeting practitioners from various backgrounds; respondents should base their answers on the experiences/impressions of their enterprise. Based on the assumptions behind our models, the key purposes of this survey are as follows:
1. Evaluation of the key components of an Adaptive Enterprise.
2. Evaluation of the key behavioural elements of an Adaptive Enterprise.
3. Validation of the models that resulted from the Delphi study.
Your Feedback: We would greatly appreciate it if you could test the survey design and provide us with some feedback on the following aspects: • Style: Are the questions framed/phrased in an appropriate fashion? • Understanding: Do you see any issues with the way the questions and explanations are designed? • Understanding: Do you see any issues with the way the answer possibilities are designed? • Understanding: Do you think that most practitioners will be able to understand and successfully complete the survey? • Are there any important details that you think are missing and should be included? • Are there any aspect that you believe should not be included keeping in mind the purpose of this survey? • Evaluation: Do you think this survey will enable us to evaluate the adaptiveness of an enterprise? • Do you think that our estimate of 15 minutes to complete the survey is reasonable? Thank you for your help and support it is greatly appreciated. Gabrielle Peko Appendix D Survey Questionnaire 379 Appendix D3 Survey Questionnaire Appendix D Survey Questionnaire 380 Appendix D Survey Questionnaire 381 Appendix D Survey Questionnaire 382 Appendix D Survey Questionnaire 383 Appendix D Survey Questionnaire 384 Appendix D Survey Questionnaire 385 Appendix D Survey Questionnaire 386 Appendix D Survey Questionnaire 387 Appendix D Survey Questionnaire 388 Appendix D Survey Questionnaire 389 Appendix D Survey Questionnaire 390 Appendix D Survey Questionnaire 391 Appendix D Survey Questionnaire 392 Appendix D Survey Questionnaire 393 Appendix D Survey Questionnaire 394 Appendix D4 Descriptive Statistics Analysis Question N Mean Median Std. Deviation Skewness Std. Error of Skewness Kurtosis Std. Error of Kurtosis Valid Missing q0008_0001 499 0 5.2004 5.0000 1.60193 -.745 .109 -.037 .218 q0008_0002 499 0 5.3747 6.0000 1.54122 -.974 .109 .472 .218 q0008_0003 499 0 5.1984 5.0000 1.55963 -.849 .109 .314 .218 q0008_0004 499 0 5.3427 6.0000 1.43423 -.908 .109 .557 .218 q0008_0005 499 0 5.2585 6.0000 1.52730 -.825 .109 .206 .218 q0009_0001 499 0 3.9499 4.0000 1.81568 -.073 .109 -.818 .218 q0010_0001 499 0 3.8737 4.0000 1.85360 -.040 .109 -.966 .218 q0011_0001 499 0 4.2766 4.0000 1.77547 -.257 .109 -.676 .218 q0012_0001 499 0 3.9299 4.0000 1.86306 .002 .109 -.957 .218 q0013_0001 499 0 3.9218 4.0000 1.93153 -.055 .109 -1.078 .218 q0014_0001 499 0 4.2244 4.0000 1.64211 -.227 .109 -.471 .218 q0015_0001 499 0 4.9118 5.0000 1.52497 -.523 .109 -.131 .218 q0016_0001 499 0 4.6613 5.0000 1.57016 -.363 .109 -.231 .218 q0017_0001 499 0 3.8577 4.0000 1.67122 -.030 .109 -.628 .218 q0018_0001 499 0 3.8337 4.0000 1.74199 .099 .109 -.879 .218 q0019_0001 499 0 4.9118 5.0000 1.50242 -.559 .109 .136 .218 q0020_0001 499 0 4.8016 5.0000 1.45298 -.404 .109 -.092 .218 q0021_0001 499 0 4.9178 5.0000 1.37945 -.276 .109 -.120 .218 q0022_0001 499 0 5.0261 5.0000 1.33734 -.387 .109 -.098 .218 q0023_0001 499 0 5.0661 5.0000 1.33445 -.340 .109 -.019 .218 q0024_0001 499 0 5.1703 5.0000 1.48115 -.764 .109 .257 .218 q0025_0001 499 0 4.8677 5.0000 1.43624 -.400 .109 -.230 .218 q0025_0002 499 0 4.8156 5.0000 1.45690 -.424 .109 -.103 .218 q0025_0003 499 0 4.9399 5.0000 1.42848 -.529 .109 -.086 .218 q0026_0001 499 0 5.1343 5.0000 1.51034 -.795 .109 .258 .218 q0027_0001 499 0 4.5210 5.0000 1.66015 -.394 .109 -.628 .218 q0028_0001 499 0 5.2545 5.0000 1.38166 -.597 .109 .035 .218 q0028_0002 499 0 5.1924 5.0000 1.38518 -.621 .109 .016 .218 q0028_0003 499 0 5.3587 6.0000 1.33744 -.702 .109 .139 .218 q0028_0004 499 0 5.3407 5.0000 1.29727 -.658 .109 
.180 .218 q0029_0001 499 0 5.3186 5.0000 1.33258 -.854 .109 .842 .218 q0030_0001 499 0 5.4269 6.0000 1.28169 -.888 .109 .929 .218 q0031_0001 499 0 5.6934 6.0000 1.14605 -.834 .109 .949 .218 q0032_0001 499 0 4.8116 5.0000 1.48911 -.484 .109 -.027 .218 q0032_0002 499 0 4.8557 5.0000 1.51539 -.564 .109 -.093 .218 q0032_0003 499 0 4.8737 5.0000 1.56329 -.613 .109 .002 .218 q0033_0001 499 0 5.2766 5.0000 1.33522 -.775 .109 .821 .218 q0034_0001 499 0 4.6754 5.0000 1.62893 -.514 .109 -.375 .218 q0035_0001 499 0 4.9679 5.0000 1.47366 -.375 .109 -.323 .218 q0035_0002 499 0 5.0741 5.0000 1.43413 -.557 .109 -.107 .218 q0035_0003 499 0 5.2004 5.0000 1.41276 -.663 .109 .223 .218 q0035_0004 499 0 5.1283 5.0000 1.46155 -.483 .109 -.337 .218 q0036_0001 499 0 5.3367 5.0000 1.31063 -.802 .109 .796 .218 q0037_0001 499 0 5.3407 5.0000 1.34290 -.834 .109 .894 .218 q0038_0001 499 0 5.2886 5.0000 1.38585 -.827 .109 .619 .218 q0039_0001 499 0 5.1643 5.0000 1.36253 -.697 .109 .467 .218 q0040_0001 499 0 5.3687 6.0000 1.33846 -.772 .109 .511 .218 Appendix D Survey Questionnaire 395 Appendix D Survey Questionnaire 396 Appendix D Survey Questionnaire 397 Appendix D Survey Questionnaire 398 Appendix D Survey Questionnaire 399 Appendix D Survey Questionnaire 400 Appendix D Survey Questionnaire 401 Appendix D Survey Questionnaire 402 Appendix D Survey Questionnaire 403 Appendix D Survey Questionnaire 404 Appendix D Survey Questionnaire 405 Appendix D Survey Questionnaire 406 Appendix D Survey Questionnaire 407 Appendix D Survey Questionnaire 408 Appendix D Survey Questionnaire 409 Appendix D Survey Questionnaire 410 Appendix D Survey Questionnaire 411 Appendix D Survey Questionnaire 412 Appendix D Survey Questionnaire 413 Appendix D Survey Questionnaire 414 Appendix D Survey Questionnaire 415 Appendix D Survey Questionnaire 416 Appendix D Survey Questionnaire 417 Appendix D Survey Questionnaire 418 Appendix E Survey Factor Analysis 419 Appendix E Survey Factor Analysis Appendix E Survey Factor Analysis 420 Appendix E1 EFA Factor Analysis EFA Assessment AZM Model EFA 1 library(nFactors) ## Loading required package: MASS ## Loading required package: psych ## Loading required package: boot ## ## Attaching package: 'boot' ## ## The following object is masked from 'package:psych': ## ## logit ## ## Loading required package: lattice ## ## Attaching package: 'lattice' ## ## The following object is masked from 'package:boot': ## ## melanoma ## ## ## Attaching package: 'nFactors' ## ## The following object is masked from 'package:lattice': ## ## parallel ev <- eigen(cor(d0[,c(2:22, 33, 34, 45:48)])) # get eigenvalues ev$values ## [1] 10.9214321 3.1645293 1.7321922 1.1532459 0.9600175 0.8882057 0.6992090 0.6696792 0.6650709 0.5674671 0.5295532 0.4905007 0.4607548 0.4122850 0.3982344 0.3864809 0.3556667 ## [18] 0.3357024 0.3158927 0.3065187 0.2698532 0.2641333 0.2565193 0.2403554 0.2067286 0.1824370 0.1673351 nS <- nScree(x=ev$values) plotnScree(nS) Appendix E Survey Factor Analysis 421 names(d0)[c(2:22, 33, 34, 45:48)] ## [1] "q08_01" "q08_02" "q08_03" "q08_04" "q08_05" "q09_01" "q10_01" "q11_01" "q12_01" "q13_01" "q14_01" "q15_01" "q16_01" "q17_01" "q18_01" "q19_01" "q20_01" "q21_01" "q22_01" "q23_01" "q24_01" ## [22] "q30_01" "q31_01" "q37_01" "q38_01" "q39_01" "q40_01" fit <- factanal(d0[,c(2:22, 33, 34, 45:48)], factor = 4, rotation = "promax") print(fit, digits=2, cutoff=.30, sort=TRUE) ## ## Call: ## factanal(x = d0[, c(2:22, 33, 34, 45:48)], factors = 4, rotation = "promax") ## ## Uniquenesses: ## q08_01 q08_02 q08_03 
q08_04 q08_05 q09_01 q10_01 q11_01 q12_01 q13_01 q14_01 q15_01 q16_01 q17_01 q18_01 q19_01 q20_01 q21_01 q22_01 q23_01 q24_01 q30_01 q31_01 q37_01 q38_01 q39_01 q40_01 ## 0.30 0.34 0.36 0.30 0.30 0.51 0.41 0.65 0.44 0.38 0.59 0.57 0.78 0.76 0.74 0.58 0.44 0.28 0.26 0.28 0.47 0.42 0.74 0.27 0.24 0.26 0.18 ## ## Loadings: Appendix E Survey Factor Analysis 422 ## Factor1 Factor2 Factor3 Factor4 ## q15_01 0.56 ## q19_01 0.64 ## q20_01 0.77 ## q21_01 0.81 ## q22_01 0.93 ## q23_01 0.89 ## q09_01 0.67 ## q10_01 0.77 ## q11_01 0.59 ## q12_01 0.75 ## q13_01 0.80 ## q14_01 0.54 ## q17_01 0.54 ## q08_01 0.74 ## q08_02 0.82 ## q08_03 0.83 ## q08_04 0.82 ## q08_05 0.80 ## q37_01 0.69 ## q38_01 0.66 ## q39_01 0.64 ## q40_01 0.79 ## q16_01 0.32 ## q18_01 0.49 ## q24_01 0.30 0.34 ## q30_01 0.50 ## q31_01 0.37 ## ## Factor1 Factor2 Factor3 Factor4 ## SS loadings 4.28 3.55 3.33 2.54 ## Proportion Var 0.16 0.13 0.12 0.09 ## Cumulative Var 0.16 0.29 0.41 0.51 ## ## Factor Correlations: ## Factor1 Factor2 Factor3 Factor4 ## Factor1 1.00 0.45 0.63 0.74 ## Factor2 0.45 1.00 0.37 0.27 ## Factor3 0.63 0.37 1.00 0.65 ## Factor4 0.74 0.27 0.65 1.00 ## ## Test of the hypothesis that 4 factors are sufficient. ## The chi square statistic is 637.61 on 249 degrees of freedom. ## The p-value is 6.46e-36 names(d0[,c(2:13, 15, 17:22, 33, 45:48)]) Appendix E Survey Factor Analysis 423 ## [1] "q08_01" "q08_02" "q08_03" "q08_04" "q08_05" "q09_01" "q10_01" "q11_01" "q12_01" "q13_01" "q14_01" "q15_01" "q17_01" "q19_01" "q20_01" "q21_01" "q22_01" "q23_01" "q24_01" "q30_01" "q37_01" ## [22] "q38_01" "q39_01" "q40_01" fit <- factanal(d0[,c(2:13, 15, 17:22, 33, 45:48)], factor = 4, rotation = "promax") print(fit, digits=2, cutoff=.30, sort=TRUE) ## ## Call: ## factanal(x = d0[, c(2:13, 15, 17:22, 33, 45:48)], factors = 4, rotation = "promax") ## ## Uniquenesses: ## q08_01 q08_02 q08_03 q08_04 q08_05 q09_01 q10_01 q11_01 q12_01 q13_01 q14_01 q15_01 q17_01 q19_01 q20_01 q21_01 q22_01 q23_01 q24_01 q30_01 q37_01 q38_01 q39_01 q40_01 ## 0.29 0.34 0.36 0.30 0.30 0.53 0.41 0.64 0.41 0.35 0.61 0.57 0.81 0.59 0.44 0.28 0.25 0.28 0.48 0.42 0.26 0.24 0.26 0.18 ## ## Loadings: ## Factor1 Factor2 Factor3 Factor4 ## q15_01 0.53 ## q19_01 0.60 ## q20_01 0.77 ## q21_01 0.77 ## q22_01 0.92 ## q23_01 0.86 ## q08_01 0.74 ## q08_02 0.82 ## q08_03 0.83 ## q08_04 0.82 ## q08_05 0.80 ## q09_01 0.66 ## q10_01 0.78 ## q11_01 0.60 ## q12_01 0.77 ## q13_01 0.83 ## q14_01 0.53 ## q30_01 0.59 ## q37_01 0.80 ## q38_01 0.74 ## q39_01 0.71 ## q40_01 0.89 ## q17_01 0.49 ## q24_01 0.40 ## ## Factor1 Factor2 Factor3 Factor4 ## SS loadings 3.67 3.30 3.25 3.08 ## Proportion Var 0.15 0.14 0.14 0.13 ## Cumulative Var 0.15 0.29 0.43 0.55 ## Appendix E Survey Factor Analysis 424 ## Factor Correlations: ## Factor1 Factor2 Factor3 Factor4 ## Factor1 1.00 -0.41 0.69 -0.78 ## Factor2 -0.41 1.00 -0.37 0.47 ## Factor3 0.69 -0.37 1.00 -0.62 ## Factor4 -0.78 0.47 -0.62 1.00 ## ## Test of the hypothesis that 4 factors are sufficient. ## The chi square statistic is 423.89 on 186 degrees of freedom. 
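# Minimal, self-contained sketch of the EFA workflow used in this appendix:
# eigenvalues of the item correlation matrix, scree-based factor retention with
# nFactors, and maximum likelihood factor analysis with promax (oblique) rotation.
# Illustrative only: the simulated data frame 'sim', its two latent factors and the
# item names v1-v6 are assumptions for demonstration, not the thesis data set d0.
library(nFactors)                                   # also loads MASS, psych, boot, lattice

set.seed(123)
n  <- 500
f1 <- rnorm(n); f2 <- rnorm(n)                      # two hypothetical latent factors
sim <- data.frame(
  v1 = f1 + rnorm(n, sd = 0.6), v2 = f1 + rnorm(n, sd = 0.6), v3 = f1 + rnorm(n, sd = 0.6),
  v4 = f2 + rnorm(n, sd = 0.6), v5 = f2 + rnorm(n, sd = 0.6), v6 = f2 + rnorm(n, sd = 0.6)
)

ev <- eigen(cor(sim))$values                        # eigenvalues of the correlation matrix
nS <- nScree(x = ev)                                # non-graphical scree retention criteria
plotnScree(nS)                                      # scree plot used to choose the factor count

efa <- factanal(sim, factors = 2, rotation = "promax")   # ML EFA with oblique rotation
print(efa, digits = 2, cutoff = 0.30, sort = TRUE)        # suppress small loadings, sorted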
## The p-value is 1.31e-20 BFM Model EFA 1 library(nFactors) ## Loading required package: MASS ## Loading required package: psych ## Loading required package: boot ## ## Attaching package: 'boot' ## ## The following object is masked from 'package:psych': ## ## logit ## ## Loading required package: lattice ## ## Attaching package: 'lattice' ## ## The following object is masked from 'package:boot': ## ## melanoma ## ## ## Attaching package: 'nFactors' ## ## The following object is masked from 'package:lattice': ## ## parallel names(d0[,c(2:4, 23:33, 35:48)]) ## [1] "q08_01" "q08_02" "q08_03" "q25_01" "q25_02" "q25_03" "q26_01" "q27_01" "q28_01" "q28_02" "q28_03" "q28_04" "q29_01" "q30_01" "q32_01" "q32_02" "q32_03" "q33_01" "q34_01" "q35_01" "q35_02" ## [22] "q35_03" "q35_04" "q36_01" "q37_01" "q38_01" "q39_01" "q40_01" ev <- eigen(cor(d0[,c(2:4, 23:33, 35:48)])) # get eigenvalues ev$values Appendix E Survey Factor Analysis 425 ## [1] 14.8460868 1.7323258 1.6779159 1.6264116 1.0976391 0.7405932 0.5876864 0.5482708 0.4519692 0.4110543 0.3913114 0.3663730 0.3318280 0.3257342 0.3086776 0.2769535 0.2562830 ## [18] 0.2540574 0.2255226 0.2124030 0.2042337 0.1978703 0.1797521 0.1652866 0.1628598 0.1552043 0.1444238 0.1212727 nS <- nScree(x=ev$values) plotnScree(nS) BFM Model EFA 2 fit <- factanal(d0[,c(2:4, 23:33, 35:48)], factor =5, rotation = "promax") print(fit, digits=2, cutoff=.4, sort=TRUE) ## ## Call: ## factanal(x = d0[, c(2:4, 23:33, 35:48)], factors = 5, rotation = "promax") ## ## Uniquenesses: ## q08_01 q08_02 q08_03 q25_01 q25_02 q25_03 q26_01 q27_01 q28_01 q28_02 q28_03 q28_04 q29_01 q30_01 q32_01 q32_02 q32_03 Appendix E Survey Factor Analysis 426 q33_01 q34_01 q35_01 q35_02 q35_03 q35_04 q36_01 q37_01 q38_01 q39_01 q40_01 ## 0.25 0.34 0.41 0.41 0.35 0.31 0.37 0.83 0.36 0.27 0.24 0.27 0.24 0.41 0.26 0.25 0.25 0.28 0.82 0.26 0.22 0.20 0.20 0.21 0.32 0.27 0.28 0.24 ## ## Loadings: ## Factor1 Factor2 Factor3 Factor4 Factor5 ## q29_01 0.55 0.42 ## q30_01 0.56 ## q33_01 0.77 ## q35_01 0.57 ## q35_02 0.65 ## q35_03 0.72 ## q35_04 0.72 ## q36_01 0.82 ## q37_01 0.76 ## q38_01 0.76 ## q39_01 0.76 ## q40_01 0.80 ## q28_01 0.74 ## q28_02 0.81 ## q28_03 0.83 ## q28_04 0.82 ## q25_01 0.56 ## q25_02 0.76 ## q25_03 0.72 ## q32_01 0.74 ## q32_02 0.77 ## q32_03 0.76 ## q08_01 0.76 ## q08_02 0.76 ## q08_03 0.64 ## q26_01 0.47 0.44 ## q27_01 0.41 ## q34_01 ## ## Factor1 Factor2 Factor3 Factor4 Factor5 ## SS loadings 6.85 3.28 3.23 1.71 1.00 ## Proportion Var 0.24 0.12 0.12 0.06 0.04 ## Cumulative Var 0.24 0.36 0.48 0.54 0.57 ## ## Factor Correlations: Appendix E Survey Factor Analysis 427 ## Factor1 Factor2 Factor3 Factor4 Factor5 ## Factor1 1.00 0.64 0.63 0.62 0.27 ## Factor2 0.64 1.00 0.57 0.55 0.30 ## Factor3 0.63 0.57 1.00 0.50 0.29 ## Factor4 0.62 0.55 0.50 1.00 0.35 ## Factor5 0.27 0.30 0.29 0.35 1.00 ## ## Test of the hypothesis that 5 factors are sufficient. ## The chi square statistic is 1070.38 on 248 degrees of freedom. 
## The p-value is 1.61e-102 > fit <- factanal(d0[,c(2:4, 23:25, 28:33, 35:38, 40:48)], factor =4, rotation = "promax") > print(fit, digits=2, cutoff=.4, sort=TRUE) Call: factanal(x = d0[, c(2:4, 23:25, 28:33, 35:38, 40:48)], factors = 4, rotation = "promax") Uniquenesses: q08_01 q08_02 q08_03 q25_01 q25_02 q25_03 q28_01 q28_02 q28_03 q28_04 q29_01 q30_01 q32_01 q32_02 Appendix E Survey Factor Analysis 428 0.28 0.32 0.40 0.42 0.37 0.33 0.36 0.26 0.24 0.26 0.37 0.42 0.26 0.25 q32_03 q33_01 q35_01 q35_02 q35_03 q35_04 q36_01 q37_01 q38_01 q39_01 q40_01 0.25 0.32 0.30 0.28 0.23 0.23 0.26 0.31 0.27 0.27 0.23 Loadings: Factor1 Factor2 Factor3 Factor4 q29_01 0.54 q30_01 0.59 q33_01 0.77 q35_01 0.58 q35_02 0.65 q35_03 0.72 q35_04 0.73 q36_01 0.81 q37_01 0.80 q38_01 0.79 q39_01 0.79 q40_01 0.84 q28_01 0.75 q28_02 0.83 q28_03 0.85 q28_04 0.84 q25_01 0.56 q25_02 0.76 q25_03 0.71 q32_01 0.75 q32_02 0.77 q32_03 0.77 q08_01 0.74 q08_02 0.79 q08_03 0.73 Factor1 Factor2 Factor3 Factor4 SS loadings 6.92 3.39 3.24 1.79 Appendix E Survey Factor Analysis 429 Proportion Var 0.28 0.14 0.13 0.07 Cumulative Var 0.28 0.41 0.54 0.61 Factor Correlations: Factor1 Factor2 Factor3 Factor4 Factor1 1.00 0.65 0.64 0.66 Factor2 0.65 1.00 0.58 0.59 Factor3 0.64 0.58 1.00 0.51 Factor4 0.66 0.59 0.51 1.00 Test of the hypothesis that 4 factors are sufficient. The chi square statistic is 943.56 on 206 degrees of freedom. The p-value is 8.97e-95 names(d0[,c(2:4, 23:25, 27:33, 35:38, 40:48)]) ## [1] "q08_01" "q08_02" "q08_03" "q25_01" "q25_02" "q25_03" "q27_01" "q28_01" "q28_02" "q28_03" "q28_04" "q29_01" "q30_01" "q32_01" "q32_02" "q32_03" "q33_01" "q35_01" "q35_02" "q35_03" "q35_04" ## [22] "q36_01" "q37_01" "q38_01" "q39_01" "q40_01" model <- ' AE2B =~ q08_01 + q08_02 + q08_03 AE1B =~ q29_01 + q30_01 + q33_01 + q35_01+ q35_02 + q35_03+ q35_04 + q36_01 + q37_01 + q38_01 + q39_01 + q40_01 SIR =~ q28_01 +q28_02 + q28_03+q28_04 DL =~ q25_01 +q25_02 +q25_03 +q32_01 + q32_02 + q32_03 Appendix E Survey Factor Analysis 430 Appendix E2 CFA Factor Analysis CFA Assessment AZM Model CFA model <- ' IS2 =~ q15_01 + q19_01 + q20_01 + q21_01 + q22_01 + q23_01 AE2 =~ q08_01 + q08_02 + q08_03 + q08_04 + q08_05 SOPI2 =~ q09_01 + q10_01 + q11_01 + q12_01 + q13_01 + q14_01 + q17_01 AE1 =~ q30_01 + q37_01 + q38_01 + q39_01 + q40_01+ q24_01 fit <- cfa(model, d0, mimic="Mplus", test="satorra.bentler", estimator = "MLM") ## Found more than one class "Model" in cache; using the first, from namespace 'lavaan' summary(fit, fit.measures=TRUE, standardized = TRUE) ## lavaan (0.5-20) converged normally after 54 iterations ## ## Number of observations 504 ## ## Estimator ML Robust ## Minimum Function Test Statistic 606.692 433.358 ## Degrees of freedom 246 246 ## P-value (Chi-square) 0.000 0.000 ## Scaling correction factor 1.400 ## for the Satorra-Bentler correction (Mplus variant) ## ## Model test baseline model: ## ## Minimum Function Test Statistic 8230.257 6011.392 ## Degrees of freedom 276 276 ## P-value 0.000 0.000 ## ## User model versus baseline model: ## ## Comparative Fit Index (CFI) 0.955 0.967 ## Tucker-Lewis Index (TLI) 0.949 0.963 ## ## Loglikelihood and Information Criteria: ## ## Loglikelihood user model (H0) -18571.759 -18571.759 ## Loglikelihood unrestricted model (H1) -18268.412 -18268.412 ## ## Number of free parameters 78 78 ## Akaike (AIC) 37299.517 37299.517 ## Bayesian (BIC) 37628.878 37628.878 Appendix E Survey Factor Analysis 431 ## Sample-size adjusted Bayesian (BIC) 37381.299 37381.299 ## ## Root Mean Square Error of 
Approximation: ## ## RMSEA 0.054 0.039 ## 90 Percent Confidence Interval 0.049 0.059 0.034 0.044 ## P-value RMSEA <= 0.05 0.113 1.000 ## ## Standardized Root Mean Square Residual: ## ## SRMR 0.051 0.051 ## ## Parameter Estimates: ## ## Information Expected ## Standard Errors Robust.sem ## ## Latent Variables: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## IS2 =~ ## q15_01 1.000 1.013 0.654 ## q19_01 0.956 0.066 14.474 0.000 0.969 0.645 ## q20_01 1.060 0.068 15.528 0.000 1.074 0.732 ## q21_01 1.163 0.065 17.932 0.000 1.179 0.851 ## q22_01 1.139 0.065 17.654 0.000 1.154 0.855 ## q23_01 1.116 0.065 17.273 0.000 1.131 0.841 ## AE2 =~ ## q08_01 1.000 1.363 0.845 ## q08_02 0.904 0.040 22.333 0.000 1.232 0.800 ## q08_03 0.911 0.038 24.229 0.000 1.241 0.786 ## q08_04 0.876 0.042 20.852 0.000 1.194 0.833 ## q08_05 0.942 0.039 24.172 0.000 1.284 0.835 ## SOPI2 =~ ## q09_01 1.000 1.234 0.680 ## q10_01 1.171 0.074 15.729 0.000 1.445 0.777 ## q11_01 0.867 0.073 11.795 0.000 1.070 0.598 ## q12_01 1.163 0.076 15.234 0.000 1.435 0.768 ## q13_01 1.252 0.078 16.025 0.000 1.546 0.799 ## q14_01 0.816 0.062 13.081 0.000 1.008 0.610 ## q17_01 0.468 0.074 6.302 0.000 0.577 0.344 ## AE1 =~ ## q30_01 1.000 0.991 0.758 ## q37_01 1.166 0.057 20.453 0.000 1.156 0.849 ## q38_01 1.240 0.063 19.599 0.000 1.228 0.875 ## q39_01 1.198 0.063 18.889 0.000 1.187 0.863 ## q40_01 1.230 0.055 22.345 0.000 1.218 0.891 ## q24_01 1.083 0.064 16.906 0.000 1.073 0.719 Appendix E Survey Factor Analysis 432 ## ## Covariances: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## IS2 ~~ ## AE2 0.887 0.099 8.919 0.000 0.642 0.642 ## SOPI2 0.574 0.094 6.119 0.000 0.459 0.459 ## AE1 0.835 0.090 9.331 0.000 0.832 0.832 ## AE2 ~~ ## SOPI2 0.623 0.103 6.062 0.000 0.370 0.370 ## AE1 1.017 0.105 9.643 0.000 0.753 0.753 ## SOPI2 ~~ ## AE1 0.530 0.084 6.332 0.000 0.434 0.434 ## ## Intercepts: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## q15_01 4.895 0.069 70.938 0.000 4.895 3.160 ## q19_01 4.923 0.067 73.546 0.000 4.923 3.276 ## q20_01 4.792 0.065 73.312 0.000 4.792 3.266 ## q21_01 4.925 0.062 79.810 0.000 4.925 3.555 ## q22_01 5.024 0.060 83.581 0.000 5.024 3.723 ## q23_01 5.058 0.060 84.437 0.000 5.058 3.761 ## q08_01 5.192 0.072 72.241 0.000 5.192 3.218 ## q08_02 5.385 0.069 78.499 0.000 5.385 3.497 ## q08_03 5.192 0.070 73.833 0.000 5.192 3.289 ## q08_04 5.357 0.064 83.896 0.000 5.357 3.737 ## q08_05 5.254 0.068 76.714 0.000 5.254 3.417 ## q09_01 3.938 0.081 48.733 0.000 3.938 2.171 ## q10_01 3.885 0.083 46.895 0.000 3.885 2.089 ## q11_01 4.248 0.080 53.325 0.000 4.248 2.375 ## q12_01 3.921 0.083 47.117 0.000 3.921 2.099 ## q13_01 3.921 0.086 45.466 0.000 3.921 2.025 ## q14_01 4.234 0.074 57.521 0.000 4.234 2.562 ## q17_01 3.857 0.075 51.557 0.000 3.857 2.297 ## q30_01 5.413 0.058 93.015 0.000 5.413 4.143 ## q37_01 5.331 0.061 87.937 0.000 5.331 3.917 ## q38_01 5.272 0.063 84.285 0.000 5.272 3.754 ## q39_01 5.161 0.061 84.211 0.000 5.161 3.751 ## q40_01 5.343 0.061 87.761 0.000 5.343 3.909 ## q24_01 5.175 0.066 77.907 0.000 5.175 3.470 ## IS2 0.000 0.000 0.000 ## AE2 0.000 0.000 0.000 ## SOPI2 0.000 0.000 0.000 ## AE1 0.000 0.000 0.000 ## ## Variances: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## q15_01 1.373 0.102 13.395 0.000 1.373 0.572 Appendix E Survey Factor Analysis 433 ## q19_01 1.320 0.122 10.845 0.000 1.320 0.584 ## q20_01 0.999 0.078 12.807 0.000 0.999 0.464 ## q21_01 0.529 0.052 10.135 0.000 0.529 0.276 ## q22_01 0.489 0.047 10.513 0.000 0.489 0.268 ## q23_01 0.529 0.056 9.373 0.000 0.529 0.292 ## q08_01 
0.746 0.070 10.686 0.000 0.746 0.287 ## q08_02 0.854 0.079 10.775 0.000 0.854 0.360 ## q08_03 0.952 0.096 9.916 0.000 0.952 0.382 ## q08_04 0.629 0.070 8.936 0.000 0.629 0.306 ## q08_05 0.714 0.083 8.596 0.000 0.714 0.302 ## q09_01 1.768 0.147 12.057 0.000 1.768 0.537 ## q10_01 1.370 0.140 9.783 0.000 1.370 0.396 ## q11_01 2.054 0.161 12.777 0.000 2.054 0.642 ## q12_01 1.430 0.129 11.058 0.000 1.430 0.410 ## q13_01 1.358 0.133 10.174 0.000 1.358 0.362 ## q14_01 1.715 0.146 11.759 0.000 1.715 0.628 ## q17_01 2.488 0.150 16.534 0.000 2.488 0.882 ## q30_01 0.725 0.066 10.932 0.000 0.725 0.425 ## q37_01 0.517 0.037 13.934 0.000 0.517 0.279 ## q38_01 0.463 0.042 10.975 0.000 0.463 0.235 ## q39_01 0.484 0.042 11.403 0.000 0.484 0.256 ## q40_01 0.384 0.051 7.535 0.000 0.384 0.205 ## q24_01 1.073 0.103 10.460 0.000 1.073 0.483 ## IS2 1.027 0.122 8.402 0.000 1.000 1.000 ## AE2 1.857 0.160 11.609 0.000 1.000 1.000 ## SOPI2 1.524 0.175 8.695 0.000 1.000 1.000 ## AE1 0.982 0.122 8.070 0.000 1.000 1.000 model <- ' IS2 =~ q15_01 + q19_01 + q20_01 + q21_01 + q22_01 + q23_01 AE2 =~ q08_01 + q08_02 + q08_03 + q08_04 + q08_05 SOPI2 =~ q09_01 + q10_01 + q11_01 + q12_01 + q13_01 + q14_01 + q17_01 AE1 =~ q30_01 + q37_01 + q38_01 + q39_01 + q40_01+ q24_01 IS2 ~ SOPI2 +AE2 AE1 ~ IS2 BFM Model CFA fit <- cfa(model, d0, mimic="Mplus", test="satorra.bentler", estimator = "MLM") ## Found more than one class "Model" in cache; using the first, from namespace 'lavaan' summary(fit, fit.measures=TRUE, standardized = TRUE, rsquare=TRUE) ## lavaan (0.5-20) converged normally after 57 iterations ## Appendix E Survey Factor Analysis 434 ## Number of observations 504 ## ## Estimator ML Robust ## Minimum Function Test Statistic 1416.949 877.133 ## Degrees of freedom 269 269 ## P-value (Chi-square) 0.000 0.000 ## Scaling correction factor 1.615 ## for the Satorra-Bentler correction (Mplus variant) ## ## Model test baseline model: ## ## Minimum Function Test Statistic 12001.381 7638.770 ## Degrees of freedom 300 300 ## P-value 0.000 0.000 ## ## User model versus baseline model: ## ## Comparative Fit Index (CFI) 0.902 0.917 ## Tucker-Lewis Index (TLI) 0.891 0.908 ## ## Loglikelihood and Information Criteria: ## ## Loglikelihood user model (H0) -17065.369 -17065.369 ## Loglikelihood unrestricted model (H1) -16356.894 -16356.894 ## ## Number of free parameters 81 81 ## Akaike (AIC) 34292.738 34292.738 ## Bayesian (BIC) 34634.766 34634.766 ## Sample-size adjusted Bayesian (BIC) 34377.665 34377.665 ## ## Root Mean Square Error of Approximation: ## ## RMSEA 0.092 0.067 ## 90 Percent Confidence Interval 0.087 0.097 0.063 0.071 ## P-value RMSEA <= 0.05 0.000 0.000 ## ## Standardized Root Mean Square Residual: ## ## SRMR 0.049 0.049 ## ## Parameter Estimates: ## ## Information Expected ## Standard Errors Robust.sem ## ## Latent Variables: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all Appendix E Survey Factor Analysis 435 ## AE2B =~ ## q08_01 1.000 1.401 0.869 ## q08_02 0.887 0.049 18.180 0.000 1.244 0.808 ## q08_03 0.861 0.045 19.280 0.000 1.207 0.764 ## AE1B =~ ## q29_01 1.000 1.046 0.774 ## q30_01 0.937 0.054 17.245 0.000 0.979 0.750 ## q33_01 1.056 0.051 20.591 0.000 1.104 0.812 ## q35_01 1.177 0.055 21.251 0.000 1.231 0.831 ## q35_02 1.182 0.057 20.565 0.000 1.236 0.853 ## q35_03 1.189 0.054 21.988 0.000 1.243 0.879 ## q35_04 1.231 0.058 21.196 0.000 1.287 0.880 ## q36_01 1.084 0.043 25.500 0.000 1.134 0.858 ## q37_01 1.070 0.053 20.307 0.000 1.119 0.822 ## q38_01 1.130 0.051 21.982 0.000 1.181 0.841 ## q39_01 1.110 0.052 21.242 
0.000 1.161 0.844 ## q40_01 1.130 0.049 23.055 0.000 1.182 0.865 ## SIR =~ ## q28_01 1.000 1.083 0.785 ## q28_02 1.130 0.051 22.059 0.000 1.224 0.876 ## q28_03 1.095 0.053 20.566 0.000 1.186 0.887 ## q28_04 1.042 0.052 19.918 0.000 1.128 0.860 ## DL =~ ## q25_01 1.000 1.017 0.708 ## q25_02 1.013 0.058 17.440 0.000 1.029 0.704 ## q25_03 0.957 0.053 17.920 0.000 0.972 0.680 ## q32_01 1.275 0.070 18.203 0.000 1.296 0.867 ## q32_02 1.303 0.079 16.540 0.000 1.325 0.867 ## q32_03 1.338 0.077 17.295 0.000 1.360 0.866 ## ## Covariances: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## AE2B ~~ ## AE1B 1.096 0.117 9.348 0.000 0.748 0.748 ## SIR 0.917 0.099 9.299 0.000 0.604 0.604 ## DL 0.827 0.103 8.053 0.000 0.581 0.581 ## AE1B ~~ ## SIR 0.851 0.099 8.564 0.000 0.751 0.751 ## DL 0.819 0.094 8.709 0.000 0.771 0.771 ## SIR ~~ ## DL 0.633 0.089 7.147 0.000 0.575 0.575 ## ## Intercepts: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## q08_01 5.192 0.072 72.241 0.000 5.192 3.218 ## q08_02 5.385 0.069 78.499 0.000 5.385 3.497 ## q08_03 5.192 0.070 73.833 0.000 5.192 3.289 Appendix E Survey Factor Analysis 436 ## q29_01 5.312 0.060 88.254 0.000 5.312 3.931 ## q30_01 5.413 0.058 93.015 0.000 5.413 4.143 ## q33_01 5.256 0.061 86.772 0.000 5.256 3.865 ## q35_01 4.970 0.066 75.305 0.000 4.970 3.354 ## q35_02 5.065 0.065 78.458 0.000 5.065 3.495 ## q35_03 5.198 0.063 82.516 0.000 5.198 3.676 ## q35_04 5.129 0.065 78.783 0.000 5.129 3.509 ## q36_01 5.329 0.059 90.580 0.000 5.329 4.035 ## q37_01 5.331 0.061 87.937 0.000 5.331 3.917 ## q38_01 5.272 0.063 84.285 0.000 5.272 3.754 ## q39_01 5.161 0.061 84.211 0.000 5.161 3.751 ## q40_01 5.343 0.061 87.761 0.000 5.343 3.909 ## q28_01 5.266 0.061 85.641 0.000 5.266 3.815 ## q28_02 5.192 0.062 83.413 0.000 5.192 3.716 ## q28_03 5.373 0.060 90.173 0.000 5.373 4.017 ## q28_04 5.345 0.058 91.444 0.000 5.345 4.073 ## q25_01 4.877 0.064 76.279 0.000 4.877 3.398 ## q25_02 4.835 0.065 74.247 0.000 4.835 3.307 ## q25_03 4.956 0.064 77.796 0.000 4.956 3.465 ## q32_01 4.819 0.067 72.368 0.000 4.819 3.224 ## q32_02 4.851 0.068 71.303 0.000 4.851 3.176 ## q32_03 4.885 0.070 69.823 0.000 4.885 3.110 ## AE2B 0.000 0.000 0.000 ## AE1B 0.000 0.000 0.000 ## SIR 0.000 0.000 0.000 ## DL 0.000 0.000 0.000 ## ## Variances: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## q08_01 0.640 0.078 8.225 0.000 0.640 0.246 ## q08_02 0.825 0.091 9.024 0.000 0.825 0.348 ## q08_03 1.036 0.117 8.856 0.000 1.036 0.416 ## q29_01 0.732 0.061 12.095 0.000 0.732 0.401 ## q30_01 0.748 0.062 12.081 0.000 0.748 0.438 ## q33_01 0.630 0.054 11.753 0.000 0.630 0.341 ## q35_01 0.680 0.043 15.762 0.000 0.680 0.310 ## q35_02 0.574 0.046 12.352 0.000 0.574 0.273 ## q35_03 0.455 0.035 12.898 0.000 0.455 0.227 ## q35_04 0.480 0.035 13.793 0.000 0.480 0.225 ## q36_01 0.460 0.028 16.636 0.000 0.460 0.263 ## q37_01 0.600 0.039 15.462 0.000 0.600 0.324 ## q38_01 0.577 0.045 12.918 0.000 0.577 0.293 ## q39_01 0.546 0.038 14.171 0.000 0.546 0.288 ## q40_01 0.471 0.050 9.468 0.000 0.471 0.252 ## q28_01 0.733 0.086 8.530 0.000 0.733 0.384 ## q28_02 0.455 0.039 11.737 0.000 0.455 0.233 ## q28_03 0.382 0.050 7.686 0.000 0.382 0.214 Appendix E Survey Factor Analysis 437 ## q28_04 0.449 0.045 10.053 0.000 0.449 0.261 ## q25_01 1.027 0.067 15.308 0.000 1.027 0.498 ## q25_02 1.078 0.079 13.686 0.000 1.078 0.504 ## q25_03 1.100 0.047 23.492 0.000 1.100 0.538 ## q32_01 0.555 0.047 11.734 0.000 0.555 0.248 ## q32_02 0.579 0.067 8.626 0.000 0.579 0.248 ## q32_03 0.616 0.055 11.256 0.000 0.616 0.250 ## AE2B 1.964 0.174 
11.274 0.000 1.000 1.000 ## AE1B 1.093 0.126 8.650 0.000 1.000 1.000 ## SIR 1.173 0.126 9.284 0.000 1.000 1.000 ## DL 1.033 0.121 8.554 0.000 1.000 1.000 ## ## R-Square: ## Estimate ## q08_01 0.754 ## q08_02 0.652 ## q08_03 0.584 ## q29_01 0.599 ## q30_01 0.562 ## q33_01 0.659 ## q35_01 0.690 ## q35_02 0.727 ## q35_03 0.773 ## q35_04 0.775 ## q36_01 0.737 ## q37_01 0.676 ## q38_01 0.707 ## q39_01 0.712 ## q40_01 0.748 ## q28_01 0.616 ## q28_02 0.767 ## q28_03 0.786 ## q28_04 0.739 ## q25_01 0.502 ## q25_02 0.496 ## q25_03 0.462 ## q32_01 0.752 ## q32_02 0.752 ## q32_03 0.750 model <- ' AE2B =~ q08_01 + q08_02 + q08_03 AE1B =~ q29_01 + q30_01 + q33_01+ q35_01+ q35_02 + q35_03+ q35_04 + q36_01 + q37_01 + q38_01 + q39_01 + q40_01 SIR =~ q28_01 +q28_02 + q28_03+q28_04 Appendix E Survey Factor Analysis 438 DL =~ q25_01 +q25_02 +q25_03 +q32_01 + q32_02 + q32_03 AE1B~ SIR + DL AE2B ~ SIR +DL Appendix E Survey Factor Analysis 439 Appendix E3 SEM Factor Analysis SEM Assessment AZM Model SEM 1 fit <- sem(model, d0, mimic="Mplus", test="satorra.bentler", estimator = "MLM") summary(fit, fit.measures=TRUE, standardized = TRUE) ## lavaan (0.5-20) converged normally after 43 iterations ## ## Number of observations 504 ## ## Estimator ML Robust ## Minimum Function Test Statistic 694.035 495.983 ## Degrees of freedom 248 248 ## P-value (Chi-square) 0.000 0.000 ## Scaling correction factor 1.399 ## for the Satorra-Bentler correction (Mplus variant) ## ## Model test baseline model: ## ## Minimum Function Test Statistic 8230.257 6011.392 ## Degrees of freedom 276 276 ## P-value 0.000 0.000 ## ## User model versus baseline model: ## ## Comparative Fit Index (CFI) 0.944 0.957 ## Tucker-Lewis Index (TLI) 0.938 0.952 ## ## Loglikelihood and Information Criteria: ## ## Loglikelihood user model (H0) -18615.430 -18615.430 ## Loglikelihood unrestricted model (H1) -18268.412 -18268.412 ## ## Number of free parameters 76 76 ## Akaike (AIC) 37382.859 37382.859 ## Bayesian (BIC) 37703.775 37703.775 ## Sample-size adjusted Bayesian (BIC) 37462.544 37462.544 ## ## Root Mean Square Error of Approximation: ## ## RMSEA 0.060 0.045 ## 90 Percent Confidence Interval 0.055 0.065 0.040 0.049 Appendix E Survey Factor Analysis 440 ## P-value RMSEA <= 0.05 0.001 0.969 ## ## Standardized Root Mean Square Residual: ## ## SRMR 0.062 0.062 ## ## Parameter Estimates: ## ## Information Expected ## Standard Errors Robust.sem ## ## Latent Variables: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## IS2 =~ ## q15_01 1.000 1.012 0.653 ## q19_01 0.953 0.065 14.731 0.000 0.964 0.642 ## q20_01 1.060 0.068 15.505 0.000 1.073 0.731 ## q21_01 1.160 0.064 18.124 0.000 1.174 0.847 ## q22_01 1.127 0.063 17.849 0.000 1.140 0.845 ## q23_01 1.109 0.064 17.401 0.000 1.123 0.835 ## AE2 =~ ## q08_01 1.000 1.360 0.843 ## q08_02 0.905 0.038 24.001 0.000 1.231 0.800 ## q08_03 0.911 0.038 24.081 0.000 1.240 0.785 ## q08_04 0.878 0.038 23.290 0.000 1.195 0.834 ## q08_05 0.947 0.039 24.432 0.000 1.288 0.838 ## SOPI2 =~ ## q09_01 1.000 1.234 0.680 ## q10_01 1.171 0.074 15.809 0.000 1.445 0.777 ## q11_01 0.867 0.073 11.893 0.000 1.070 0.598 ## q12_01 1.162 0.075 15.442 0.000 1.435 0.768 ## q13_01 1.253 0.077 16.193 0.000 1.546 0.799 ## q14_01 0.817 0.061 13.315 0.000 1.008 0.610 ## q17_01 0.467 0.074 6.339 0.000 0.577 0.343 ## AE1 =~ ## q30_01 1.000 0.993 0.760 ## q37_01 1.165 0.054 21.519 0.000 1.157 0.850 ## q38_01 1.237 0.061 20.134 0.000 1.228 0.875 ## q39_01 1.192 0.061 19.522 0.000 1.184 0.860 ## q40_01 1.228 0.055 22.529 0.000 1.220 0.893 ## q24_01 1.077 
0.064 16.794 0.000 1.070 0.717 ## ## Regressions: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## IS2 ~ ## SOPI2 0.205 0.037 5.483 0.000 0.249 0.249 ## AE2 0.444 0.037 11.940 0.000 0.597 0.597 Appendix E Survey Factor Analysis 441 ## AE1 ~ ## IS2 0.841 0.062 13.492 0.000 0.857 0.857 ## ## Covariances: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## AE2 ~~ ## SOPI2 0.622 0.100 6.205 0.000 0.370 0.370 ## ## Intercepts: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## q15_01 4.895 0.069 70.938 0.000 4.895 3.160 ## q19_01 4.923 0.067 73.546 0.000 4.923 3.276 ## q20_01 4.792 0.065 73.312 0.000 4.792 3.266 ## q21_01 4.925 0.062 79.810 0.000 4.925 3.555 ## q22_01 5.024 0.060 83.581 0.000 5.024 3.723 ## q23_01 5.058 0.060 84.437 0.000 5.058 3.761 ## q08_01 5.192 0.072 72.241 0.000 5.192 3.218 ## q08_02 5.385 0.069 78.499 0.000 5.385 3.497 ## q08_03 5.192 0.070 73.833 0.000 5.192 3.289 ## q08_04 5.357 0.064 83.896 0.000 5.357 3.737 ## q08_05 5.254 0.068 76.714 0.000 5.254 3.417 ## q09_01 3.938 0.081 48.733 0.000 3.938 2.171 ## q10_01 3.885 0.083 46.895 0.000 3.885 2.089 ## q11_01 4.248 0.080 53.325 0.000 4.248 2.375 ## q12_01 3.921 0.083 47.117 0.000 3.921 2.099 ## q13_01 3.921 0.086 45.466 0.000 3.921 2.025 ## q14_01 4.234 0.074 57.521 0.000 4.234 2.562 ## q17_01 3.857 0.075 51.557 0.000 3.857 2.297 ## q30_01 5.413 0.058 93.015 0.000 5.413 4.143 ## q37_01 5.331 0.061 87.937 0.000 5.331 3.917 ## q38_01 5.272 0.063 84.285 0.000 5.272 3.754 ## q39_01 5.161 0.061 84.211 0.000 5.161 3.751 ## q40_01 5.343 0.061 87.761 0.000 5.343 3.909 ## q24_01 5.175 0.066 77.907 0.000 5.175 3.470 ## IS2 0.000 0.000 0.000 ## AE2 0.000 0.000 0.000 ## SOPI2 0.000 0.000 0.000 ## AE1 0.000 0.000 0.000 ## ## Variances: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## q15_01 1.375 0.102 13.433 0.000 1.375 0.573 ## q19_01 1.328 0.121 10.949 0.000 1.328 0.588 ## q20_01 1.002 0.078 12.878 0.000 1.002 0.466 ## q21_01 0.541 0.052 10.362 0.000 0.541 0.282 ## q22_01 0.520 0.046 11.295 0.000 0.520 0.286 ## q23_01 0.548 0.056 9.727 0.000 0.548 0.303 Appendix E Survey Factor Analysis 442 ## q08_01 0.753 0.071 10.625 0.000 0.753 0.289 ## q08_02 0.856 0.079 10.782 0.000 0.856 0.361 ## q08_03 0.956 0.097 9.891 0.000 0.956 0.384 ## q08_04 0.627 0.070 9.021 0.000 0.627 0.305 ## q08_05 0.705 0.082 8.568 0.000 0.705 0.298 ## q09_01 1.768 0.147 12.068 0.000 1.768 0.537 ## q10_01 1.370 0.140 9.764 0.000 1.370 0.396 ## q11_01 2.053 0.160 12.814 0.000 2.053 0.642 ## q12_01 1.432 0.130 11.053 0.000 1.432 0.410 ## q13_01 1.357 0.133 10.192 0.000 1.357 0.362 ## q14_01 1.715 0.146 11.773 0.000 1.715 0.628 ## q17_01 2.488 0.151 16.475 0.000 2.488 0.882 ## q30_01 0.720 0.067 10.818 0.000 0.720 0.422 ## q37_01 0.514 0.037 13.760 0.000 0.514 0.277 ## q38_01 0.463 0.043 10.884 0.000 0.463 0.235 ## q39_01 0.492 0.043 11.445 0.000 0.492 0.260 ## q40_01 0.380 0.051 7.387 0.000 0.380 0.203 ## q24_01 1.079 0.103 10.470 0.000 1.079 0.485 ## IS2 0.483 0.058 8.274 0.000 0.471 0.471 ## AE2 1.851 0.153 12.114 0.000 1.000 1.000 ## SOPI2 1.524 0.174 8.744 0.000 1.000 1.000 ## AE1 0.263 0.033 7.892 0.000 0.266 0.266 semPaths(fit,intercepts=FALSE,"std", edge.label.cex = 0.5, exoVar = FALSE, exoCov = FALSE) Appendix E Survey Factor Analysis 443 model <- IS2 =~ q15_01 + q19_01 + q20_01 + q21_01 + q22_01 + q23_01 AE2 =~ q08_01 + q08_02 + q08_03 + q08_04 + q08_05 SOPI2 =~ q09_01 + q10_01 + q11_01 + q12_01 + q13_01 + q14_01 + q17_01 AE1 =~ q30_01 + q37_01 + q38_01 + q39_01 + q40_01+ q24_01 IS2 ~ SOPI2 AE2 ~ IS2 AE1 ~ IS2 AZM Model SEM 2 fit <- 
sem(model, d0, mimic="Mplus", test="satorra.bentler", estimator = "MLM") summary(fit, fit.measures=TRUE, standardized = TRUE) Appendix E Survey Factor Analysis 444 ## lavaan (0.5-20) converged normally after 51 iterations ## ## Number of observations 504 ## ## Estimator ML Robust ## Minimum Function Test Statistic 611.744 437.754 ## Degrees of freedom 248 248 ## P-value (Chi-square) 0.000 0.000 ## Scaling correction factor 1.397 ## for the Satorra-Bentler correction (Mplus variant) ## ## Model test baseline model: ## ## Minimum Function Test Statistic 8230.257 6011.392 ## Degrees of freedom 276 276 ## P-value 0.000 0.000 ## ## User model versus baseline model: ## ## Comparative Fit Index (CFI) 0.954 0.967 ## Tucker-Lewis Index (TLI) 0.949 0.963 ## ## Loglikelihood and Information Criteria: ## ## Loglikelihood user model (H0) -18574.284 -18574.284 ## Loglikelihood unrestricted model (H1) -18268.412 -18268.412 ## ## Number of free parameters 76 76 ## Akaike (AIC) 37300.569 37300.569 ## Bayesian (BIC) 37621.484 37621.484 ## Sample-size adjusted Bayesian (BIC) 37380.253 37380.253 ## ## Root Mean Square Error of Approximation: ## ## RMSEA 0.054 0.039 ## 90 Percent Confidence Interval 0.049 0.059 0.034 0.044 ## P-value RMSEA <= 0.05 0.112 1.000 ## ## Standardized Root Mean Square Residual: ## ## SRMR 0.053 0.053 ## ## Parameter Estimates: ## ## Information Expected ## Standard Errors Robust.sem ## Appendix E Survey Factor Analysis 445 ## Latent Variables: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## IS2 =~ ## q15_01 1.000 1.015 0.655 ## q19_01 0.955 0.066 14.543 0.000 0.969 0.645 ## q20_01 1.059 0.068 15.560 0.000 1.075 0.732 ## q21_01 1.160 0.064 17.998 0.000 1.177 0.850 ## q22_01 1.135 0.064 17.682 0.000 1.152 0.854 ## q23_01 1.114 0.064 17.323 0.000 1.130 0.841 ## AE2 =~ ## q08_01 1.000 1.363 0.844 ## q08_02 0.905 0.040 22.531 0.000 1.233 0.801 ## q08_03 0.911 0.037 24.411 0.000 1.241 0.786 ## q08_04 0.876 0.042 20.883 0.000 1.194 0.833 ## q08_05 0.942 0.039 24.404 0.000 1.284 0.835 ## SOPI2 =~ ## q09_01 1.000 1.234 0.680 ## q10_01 1.172 0.073 15.988 0.000 1.446 0.777 ## q11_01 0.869 0.073 11.950 0.000 1.072 0.599 ## q12_01 1.162 0.076 15.387 0.000 1.434 0.768 ## q13_01 1.253 0.077 16.309 0.000 1.546 0.799 ## q14_01 0.817 0.060 13.532 0.000 1.008 0.610 ## q17_01 0.467 0.073 6.395 0.000 0.576 0.343 ## AE1 =~ ## q30_01 1.000 0.991 0.759 ## q37_01 1.166 0.056 20.712 0.000 1.156 0.849 ## q38_01 1.239 0.063 19.821 0.000 1.228 0.875 ## q39_01 1.197 0.063 19.101 0.000 1.187 0.863 ## q40_01 1.229 0.054 22.572 0.000 1.218 0.891 ## q24_01 1.082 0.064 17.001 0.000 1.072 0.719 ## ## Regressions: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## IS2 ~ ## SOPI2 0.386 0.049 7.839 0.000 0.470 0.470 ## AE2 ~ ## IS2 0.868 0.067 13.003 0.000 0.646 0.646 ## AE1 ~ ## IS2 0.815 0.062 13.237 0.000 0.835 0.835 ## ## Covariances: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## AE2 ~~ ## AE1 0.289 0.045 6.374 0.000 0.509 0.509 ## ## Intercepts: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all Appendix E Survey Factor Analysis 446 ## q15_01 4.895 0.069 70.938 0.000 4.895 3.160 ## q19_01 4.923 0.067 73.546 0.000 4.923 3.276 ## q20_01 4.792 0.065 73.312 0.000 4.792 3.266 ## q21_01 4.925 0.062 79.810 0.000 4.925 3.555 ## q22_01 5.024 0.060 83.581 0.000 5.024 3.723 ## q23_01 5.058 0.060 84.437 0.000 5.058 3.761 ## q08_01 5.192 0.072 72.241 0.000 5.192 3.218 ## q08_02 5.385 0.069 78.499 0.000 5.385 3.497 ## q08_03 5.192 0.070 73.833 0.000 5.192 3.289 ## q08_04 5.357 0.064 83.896 0.000 5.357 3.737 ## 
q08_05 5.254 0.068 76.714 0.000 5.254 3.417 ## q09_01 3.938 0.081 48.733 0.000 3.938 2.171 ## q10_01 3.885 0.083 46.895 0.000 3.885 2.089 ## q11_01 4.248 0.080 53.325 0.000 4.248 2.375 ## q12_01 3.921 0.083 47.117 0.000 3.921 2.099 ## q13_01 3.921 0.086 45.466 0.000 3.921 2.025 ## q14_01 4.234 0.074 57.521 0.000 4.234 2.562 ## q17_01 3.857 0.075 51.557 0.000 3.857 2.297 ## q30_01 5.413 0.058 93.015 0.000 5.413 4.143 ## q37_01 5.331 0.061 87.937 0.000 5.331 3.917 ## q38_01 5.272 0.063 84.285 0.000 5.272 3.754 ## q39_01 5.161 0.061 84.211 0.000 5.161 3.751 ## q40_01 5.343 0.061 87.761 0.000 5.343 3.909 ## q24_01 5.175 0.066 77.907 0.000 5.175 3.470 ## IS2 0.000 0.000 0.000 ## AE2 0.000 0.000 0.000 ## SOPI2 0.000 0.000 0.000 ## AE1 0.000 0.000 0.000 ## ## Variances: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## q15_01 1.370 0.103 13.359 0.000 1.370 0.571 ## q19_01 1.318 0.122 10.845 0.000 1.318 0.584 ## q20_01 0.998 0.078 12.767 0.000 0.998 0.464 ## q21_01 0.532 0.052 10.205 0.000 0.532 0.277 ## q22_01 0.494 0.046 10.628 0.000 0.494 0.271 ## q23_01 0.531 0.056 9.404 0.000 0.531 0.294 ## q08_01 0.747 0.070 10.694 0.000 0.747 0.287 ## q08_02 0.850 0.079 10.770 0.000 0.850 0.359 ## q08_03 0.953 0.096 9.914 0.000 0.953 0.382 ## q08_04 0.629 0.071 8.906 0.000 0.629 0.306 ## q08_05 0.716 0.083 8.610 0.000 0.716 0.303 ## q09_01 1.770 0.147 12.072 0.000 1.770 0.538 ## q10_01 1.368 0.140 9.786 0.000 1.368 0.396 ## q11_01 2.050 0.160 12.810 0.000 2.050 0.641 ## q12_01 1.433 0.130 11.013 0.000 1.433 0.411 ## q13_01 1.358 0.133 10.205 0.000 1.358 0.362 Appendix E Survey Factor Analysis 447 ## q14_01 1.715 0.146 11.776 0.000 1.715 0.628 ## q17_01 2.489 0.151 16.469 0.000 2.489 0.882 ## q30_01 0.724 0.066 10.916 0.000 0.724 0.424 ## q37_01 0.516 0.037 13.889 0.000 0.516 0.279 ## q38_01 0.463 0.042 10.972 0.000 0.463 0.235 ## q39_01 0.484 0.042 11.413 0.000 0.484 0.256 ## q40_01 0.384 0.051 7.527 0.000 0.384 0.206 ## q24_01 1.074 0.103 10.468 0.000 1.074 0.483 ## IS2 0.803 0.088 9.162 0.000 0.779 0.779 ## AE2 1.081 0.115 9.426 0.000 0.582 0.582 ## SOPI2 1.522 0.174 8.764 0.000 1.000 1.000 ## AE1 0.298 0.045 6.648 0.000 0.303 0.303 semPaths(fit,intercepts=FALSE,"std", edge.label.cex = 0.5, exoVar = FALSE, exoCov = FALSE) AZM Model SEM 2 Mediation Effects model <- IS2 =~ q15_01 + q19_01 + q20_01 + q21_01 + q22_01 + q23_01 AE2 =~ q08_01 + q08_02 + q08_03 + q08_04 + q08_05 Appendix E Survey Factor Analysis 448 SOPI2 =~ q09_01 + q10_01 + q11_01 + q12_01 + q13_01 + q14_01 + q17_01 AE1 =~ q30_01 + q37_01 + q38_01 + q39_01 + q40_01+ q24_01 IS2 ~ a*SOPI2 AE2 ~ d*IS2 AE1 ~ e*IS2 AE2 ~ b*SOPI2 AE1 ~ c*SOPI2 ### Direct (b, c) DirectAE1 := b DirectAE2 := c ### Indirect (a*b) IndirectAE1 := a*d IndirectAE2 := a*e ### Total TotalAE1 := b + (a*d) TotalAE2 := c + (a*e) fit <- sem(model, d0, mimic="Mplus", test="satorra.bentler", estimator = "MLM") summary(fit, fit.measures=TRUE, standardized = TRUE) ## lavaan (0.5-20) converged normally after 49 iterations ## ## Number of observations 504 ## ## Estimator ML Robust ## Minimum Function Test Statistic 606.692 433.358 ## Degrees of freedom 246 246 ## P-value (Chi-square) 0.000 0.000 ## Scaling correction factor 1.400 ## for the Satorra-Bentler correction (Mplus variant) ## ## Model test baseline model: ## ## Minimum Function Test Statistic 8230.257 6011.392 ## Degrees of freedom 276 276 ## P-value 0.000 0.000 ## ## User model versus baseline model: ## Appendix E Survey Factor Analysis 449 ## Comparative Fit Index (CFI) 0.955 0.967 ## Tucker-Lewis Index (TLI) 0.949 0.963 
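# Minimal sketch of the lavaan pattern followed in this appendix: a latent measurement
# model, labelled structural regressions, and := defined direct/indirect/total effects,
# fitted with the robust Satorra-Bentler (MLM) estimator and plotted with semPaths().
# Illustrative only: the latent names F1-F3, the items y1-y9 and the simulated data are
# assumptions for demonstration, not the thesis constructs or the d0 survey items.
library(lavaan)
library(semPlot)                                  # provides semPaths() for path diagrams

set.seed(123)
pop <- '
  F1 =~ 0.8*y1 + 0.8*y2 + 0.8*y3
  F2 =~ 0.8*y4 + 0.8*y5 + 0.8*y6
  F3 =~ 0.8*y7 + 0.8*y8 + 0.8*y9
  F2 ~ 0.5*F1
  F3 ~ 0.5*F2 + 0.2*F1
'
dsim <- simulateData(pop, sample.nobs = 500)      # simulated data so the sketch runs stand-alone

model <- '
  # measurement model
  F1 =~ y1 + y2 + y3
  F2 =~ y4 + y5 + y6
  F3 =~ y7 + y8 + y9
  # structural regressions with labelled paths
  F2 ~ a*F1
  F3 ~ b*F2 + c*F1
  # defined parameters: direct, indirect and total effects of F1 on F3
  direct   := c
  indirect := a*b
  total    := c + a*b
'
fit <- sem(model, data = dsim, mimic = "Mplus",
           test = "satorra.bentler", estimator = "MLM")   # robust (Satorra-Bentler) fit
summary(fit, fit.measures = TRUE, standardized = TRUE, rsquare = TRUE)
semPaths(fit, "std", intercepts = FALSE, edge.label.cex = 0.5,
         exoVar = FALSE, exoCov = FALSE)                   # standardized path diagram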
## ## Loglikelihood and Information Criteria: ## ## Loglikelihood user model (H0) -18571.759 -18571.759 ## Loglikelihood unrestricted model (H1) -18268.412 -18268.412 ## ## Number of free parameters 78 78 ## Akaike (AIC) 37299.517 37299.517 ## Bayesian (BIC) 37628.878 37628.878 ## Sample-size adjusted Bayesian (BIC) 37381.299 37381.299 ## ## Root Mean Square Error of Approximation: ## ## RMSEA 0.054 0.039 ## 90 Percent Confidence Interval 0.049 0.059 0.034 0.044 ## P-value RMSEA <= 0.05 0.113 1.000 ## ## Standardized Root Mean Square Residual: ## ## SRMR 0.051 0.051 ## ## Parameter Estimates: ## ## Information Expected ## Standard Errors Robust.sem ## ## Latent Variables: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## IS2 =~ ## q15_01 1.000 1.013 0.654 ## q19_01 0.956 0.066 14.474 0.000 0.969 0.645 ## q20_01 1.060 0.068 15.527 0.000 1.074 0.732 ## q21_01 1.163 0.065 17.932 0.000 1.179 0.851 ## q22_01 1.139 0.065 17.654 0.000 1.154 0.855 ## q23_01 1.116 0.065 17.273 0.000 1.131 0.841 ## AE2 =~ ## q08_01 1.000 1.363 0.845 ## q08_02 0.904 0.040 22.333 0.000 1.232 0.800 ## q08_03 0.911 0.038 24.229 0.000 1.241 0.786 ## q08_04 0.876 0.042 20.852 0.000 1.194 0.833 ## q08_05 0.942 0.039 24.172 0.000 1.284 0.835 ## SOPI2 =~ ## q09_01 1.000 1.234 0.680 ## q10_01 1.171 0.074 15.729 0.000 1.445 0.777 ## q11_01 0.867 0.073 11.795 0.000 1.070 0.598 Appendix E Survey Factor Analysis 450 ## q12_01 1.163 0.076 15.234 0.000 1.435 0.768 ## q13_01 1.252 0.078 16.025 0.000 1.546 0.799 ## q14_01 0.816 0.062 13.081 0.000 1.008 0.610 ## q17_01 0.468 0.074 6.302 0.000 0.577 0.344 ## AE1 =~ ## q30_01 1.000 0.991 0.758 ## q37_01 1.166 0.057 20.453 0.000 1.156 0.849 ## q38_01 1.240 0.063 19.599 0.000 1.228 0.875 ## q39_01 1.198 0.063 18.889 0.000 1.187 0.863 ## q40_01 1.230 0.055 22.345 0.000 1.218 0.891 ## q24_01 1.083 0.064 16.906 0.000 1.073 0.719 ## ## Regressions: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## IS2 ~ ## SOPI2 (a) 0.377 0.050 7.603 0.000 0.459 0.459 ## AE2 ~ ## IS2 (d) 0.805 0.072 11.135 0.000 0.599 0.599 ## AE1 ~ ## IS2 (e) 0.784 0.064 12.339 0.000 0.802 0.802 ## AE2 ~ ## SOPI2 (b) 0.106 0.049 2.141 0.032 0.096 0.096 ## AE1 ~ ## SOPI2 (c) 0.053 0.026 1.995 0.046 0.066 0.066 ## ## Covariances: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## AE2 ~~ ## AE1 0.289 0.045 6.380 0.000 0.509 0.509 ## ## Intercepts: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## q15_01 4.895 0.069 70.938 0.000 4.895 3.160 ## q19_01 4.923 0.067 73.546 0.000 4.923 3.276 ## q20_01 4.792 0.065 73.312 0.000 4.792 3.266 ## q21_01 4.925 0.062 79.810 0.000 4.925 3.555 ## q22_01 5.024 0.060 83.581 0.000 5.024 3.723 ## q23_01 5.058 0.060 84.437 0.000 5.058 3.761 ## q08_01 5.192 0.072 72.241 0.000 5.192 3.218 ## q08_02 5.385 0.069 78.499 0.000 5.385 3.497 ## q08_03 5.192 0.070 73.833 0.000 5.192 3.289 ## q08_04 5.357 0.064 83.896 0.000 5.357 3.737 ## q08_05 5.254 0.068 76.714 0.000 5.254 3.417 ## q09_01 3.938 0.081 48.733 0.000 3.938 2.171 ## q10_01 3.885 0.083 46.895 0.000 3.885 2.089 ## q11_01 4.248 0.080 53.325 0.000 4.248 2.375 ## q12_01 3.921 0.083 47.117 0.000 3.921 2.099 Appendix E Survey Factor Analysis 451 ## q13_01 3.921 0.086 45.466 0.000 3.921 2.025 ## q14_01 4.234 0.074 57.521 0.000 4.234 2.562 ## q17_01 3.857 0.075 51.557 0.000 3.857 2.297 ## q30_01 5.413 0.058 93.015 0.000 5.413 4.143 ## q37_01 5.331 0.061 87.937 0.000 5.331 3.917 ## q38_01 5.272 0.063 84.285 0.000 5.272 3.754 ## q39_01 5.161 0.061 84.211 0.000 5.161 3.751 ## q40_01 5.343 0.061 87.761 0.000 5.343 3.909 ## q24_01 5.175 0.066 
77.907 0.000 5.175 3.470 ## IS2 0.000 0.000 0.000 ## AE2 0.000 0.000 0.000 ## SOPI2 0.000 0.000 0.000 ## AE1 0.000 0.000 0.000 ## ## Variances: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## q15_01 1.373 0.102 13.395 0.000 1.373 0.572 ## q19_01 1.320 0.122 10.845 0.000 1.320 0.584 ## q20_01 0.999 0.078 12.807 0.000 0.999 0.464 ## q21_01 0.529 0.052 10.135 0.000 0.529 0.276 ## q22_01 0.489 0.047 10.513 0.000 0.489 0.268 ## q23_01 0.529 0.056 9.373 0.000 0.529 0.292 ## q08_01 0.746 0.070 10.686 0.000 0.746 0.287 ## q08_02 0.854 0.079 10.775 0.000 0.854 0.360 ## q08_03 0.952 0.096 9.916 0.000 0.952 0.382 ## q08_04 0.629 0.070 8.936 0.000 0.629 0.306 ## q08_05 0.714 0.083 8.596 0.000 0.714 0.302 ## q09_01 1.768 0.147 12.057 0.000 1.768 0.537 ## q10_01 1.370 0.140 9.783 0.000 1.370 0.396 ## q11_01 2.054 0.161 12.777 0.000 2.054 0.642 ## q12_01 1.430 0.129 11.058 0.000 1.430 0.410 ## q13_01 1.358 0.133 10.174 0.000 1.358 0.362 ## q14_01 1.715 0.146 11.759 0.000 1.715 0.628 ## q17_01 2.488 0.150 16.534 0.000 2.488 0.882 ## q30_01 0.725 0.066 10.932 0.000 0.725 0.425 ## q37_01 0.517 0.037 13.934 0.000 0.517 0.279 ## q38_01 0.463 0.042 10.975 0.000 0.463 0.235 ## q39_01 0.484 0.042 11.403 0.000 0.484 0.256 ## q40_01 0.384 0.051 7.535 0.000 0.384 0.205 ## q24_01 1.073 0.103 10.460 0.000 1.073 0.483 ## IS2 0.811 0.089 9.109 0.000 0.790 0.790 ## AE2 1.077 0.114 9.423 0.000 0.580 0.580 ## SOPI2 1.524 0.175 8.695 0.000 1.000 1.000 ## AE1 0.299 0.045 6.699 0.000 0.304 0.304 ## ## Defined Parameters: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all Appendix E Survey Factor Analysis 452 ## DirectAE1 0.106 0.049 2.141 0.032 0.096 0.096 ## DirectAE2 0.053 0.026 1.995 0.046 0.066 0.066 ## IndirectAE1 0.303 0.044 6.896 0.000 0.275 0.275 ## IndirectAE2 0.295 0.042 7.112 0.000 0.368 0.368 ## TotalAE1 0.409 0.057 7.179 0.000 0.370 0.370 ## TotalAE2 0.348 0.046 7.647 0.000 0.434 0.434 semPaths(fit,intercepts=FALSE,"std", edge.label.cex = 0.5, exoVar = FALSE, exoCov = FALSE) BFM Model SEM model <- AE2B =~ q08_01 + q08_02 + q08_03 AE1B =~ q29_01 + q30_01 + q33_01+ q35_01+ q35_02 + q35_03+ q35_04 + q36_01 + q37_01 + q38_01 + q39_01 + q40_01 SIR =~ q28_01 +q28_02 + q28_03+q28_04 Appendix E Survey Factor Analysis 453 DL =~ q25_01 +q25_02 +q25_03 +q32_01 + q32_02 + q32_03 AE1B~ SIR + DL AE2B ~ SIR +DL fit <- sem(model, d0, mimic="Mplus", test="satorra.bentler", estimator = "MLM") summary(fit, fit.measures=TRUE, standardized = TRUE, rsquare=TRUE) ## lavaan (0.5-20) converged normally after 46 iterations ## ## Number of observations 504 ## ## Estimator ML Robust ## Minimum Function Test Statistic 1416.949 877.133 ## Degrees of freedom 269 269 ## P-value (Chi-square) 0.000 0.000 ## Scaling correction factor 1.615 ## for the Satorra-Bentler correction (Mplus variant) ## ## Model test baseline model: ## ## Minimum Function Test Statistic 12001.381 7638.770 ## Degrees of freedom 300 300 ## P-value 0.000 0.000 ## ## User model versus baseline model: ## ## Comparative Fit Index (CFI) 0.902 0.917 ## Tucker-Lewis Index (TLI) 0.891 0.908 ## ## Loglikelihood and Information Criteria: ## ## Loglikelihood user model (H0) -17065.369 -17065.369 ## Loglikelihood unrestricted model (H1) -16356.894 -16356.894 ## ## Number of free parameters 81 81 ## Akaike (AIC) 34292.738 34292.738 ## Bayesian (BIC) 34634.766 34634.766 ## Sample-size adjusted Bayesian (BIC) 34377.665 34377.665 ## ## Root Mean Square Error of Approximation: ## ## RMSEA 0.092 0.067 Appendix E Survey Factor Analysis 454 ## 90 Percent Confidence Interval 0.087 0.097 
0.063 0.071 ## P-value RMSEA <= 0.05 0.000 0.000 ## ## Standardized Root Mean Square Residual: ## ## SRMR 0.049 0.049 ## ## Parameter Estimates: ## ## Information Expected ## Standard Errors Robust.sem ## ## Latent Variables: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## AE2B =~ ## q08_01 1.000 1.401 0.869 ## q08_02 0.887 0.049 18.181 0.000 1.244 0.808 ## q08_03 0.861 0.045 19.280 0.000 1.207 0.764 ## AE1B =~ ## q29_01 1.000 1.046 0.774 ## q30_01 0.937 0.054 17.245 0.000 0.979 0.750 ## q33_01 1.056 0.051 20.591 0.000 1.104 0.812 ## q35_01 1.177 0.055 21.251 0.000 1.231 0.831 ## q35_02 1.182 0.057 20.565 0.000 1.236 0.853 ## q35_03 1.189 0.054 21.988 0.000 1.243 0.879 ## q35_04 1.231 0.058 21.196 0.000 1.287 0.880 ## q36_01 1.084 0.043 25.500 0.000 1.134 0.858 ## q37_01 1.070 0.053 20.307 0.000 1.119 0.822 ## q38_01 1.130 0.051 21.982 0.000 1.181 0.841 ## q39_01 1.110 0.052 21.242 0.000 1.161 0.844 ## q40_01 1.130 0.049 23.055 0.000 1.182 0.865 ## SIR =~ ## q28_01 1.000 1.083 0.785 ## q28_02 1.130 0.051 22.059 0.000 1.224 0.876 ## q28_03 1.095 0.053 20.567 0.000 1.186 0.887 ## q28_04 1.042 0.052 19.918 0.000 1.128 0.860 ## DL =~ ## q25_01 1.000 1.017 0.708 ## q25_02 1.013 0.058 17.440 0.000 1.029 0.704 ## q25_03 0.957 0.053 17.920 0.000 0.972 0.680 ## q32_01 1.275 0.070 18.203 0.000 1.296 0.867 ## q32_02 1.303 0.079 16.541 0.000 1.325 0.867 ## q32_03 1.338 0.077 17.295 0.000 1.360 0.866 ## ## Regressions: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## AE1B ~ Appendix E Survey Factor Analysis 455 ## SIR 0.444 0.054 8.293 0.000 0.460 0.460 ## DL 0.521 0.063 8.285 0.000 0.506 0.506 ## AE2B ~ ## SIR 0.523 0.072 7.300 0.000 0.404 0.404 ## DL 0.481 0.087 5.510 0.000 0.349 0.349 ## ## Covariances: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## SIR ~~ ## DL 0.633 0.089 7.147 0.000 0.575 0.575 ## AE2B ~~ ## AE1B 0.258 0.044 5.838 0.000 0.461 0.461 ## ## Intercepts: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## q08_01 5.192 0.072 72.241 0.000 5.192 3.218 ## q08_02 5.385 0.069 78.499 0.000 5.385 3.497 ## q08_03 5.192 0.070 73.833 0.000 5.192 3.289 ## q29_01 5.312 0.060 88.254 0.000 5.312 3.931 ## q30_01 5.413 0.058 93.015 0.000 5.413 4.143 ## q33_01 5.256 0.061 86.772 0.000 5.256 3.865 ## q35_01 4.970 0.066 75.305 0.000 4.970 3.354 ## q35_02 5.065 0.065 78.458 0.000 5.065 3.495 ## q35_03 5.198 0.063 82.516 0.000 5.198 3.676 ## q35_04 5.129 0.065 78.783 0.000 5.129 3.509 ## q36_01 5.329 0.059 90.580 0.000 5.329 4.035 ## q37_01 5.331 0.061 87.937 0.000 5.331 3.917 ## q38_01 5.272 0.063 84.285 0.000 5.272 3.754 ## q39_01 5.161 0.061 84.211 0.000 5.161 3.751 ## q40_01 5.343 0.061 87.761 0.000 5.343 3.909 ## q28_01 5.266 0.061 85.641 0.000 5.266 3.815 ## q28_02 5.192 0.062 83.413 0.000 5.192 3.715 ## q28_03 5.373 0.060 90.173 0.000 5.373 4.017 ## q28_04 5.345 0.058 91.444 0.000 5.345 4.073 ## q25_01 4.877 0.064 76.279 0.000 4.877 3.398 ## q25_02 4.835 0.065 74.247 0.000 4.835 3.307 ## q25_03 4.956 0.064 77.796 0.000 4.956 3.465 ## q32_01 4.819 0.067 72.368 0.000 4.819 3.224 ## q32_02 4.851 0.068 71.303 0.000 4.851 3.176 ## q32_03 4.885 0.070 69.823 0.000 4.885 3.110 ## AE2B 0.000 0.000 0.000 ## AE1B 0.000 0.000 0.000 ## SIR 0.000 0.000 0.000 ## DL 0.000 0.000 0.000 ## ## Variances: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all Appendix E Survey Factor Analysis 456 ## q08_01 0.640 0.078 8.225 0.000 0.640 0.246 ## q08_02 0.825 0.091 9.024 0.000 0.825 0.348 ## q08_03 1.036 0.117 8.856 0.000 1.036 0.416 ## q29_01 0.732 0.061 12.095 0.000 0.732 0.401 ## q30_01 0.748 0.062 
12.081 0.000 0.748 0.438 ## q33_01 0.630 0.054 11.753 0.000 0.630 0.341 ## q35_01 0.680 0.043 15.762 0.000 0.680 0.310 ## q35_02 0.574 0.046 12.352 0.000 0.574 0.273 ## q35_03 0.455 0.035 12.898 0.000 0.455 0.227 ## q35_04 0.480 0.035 13.793 0.000 0.480 0.225 ## q36_01 0.460 0.028 16.636 0.000 0.460 0.263 ## q37_01 0.600 0.039 15.462 0.000 0.600 0.324 ## q38_01 0.577 0.045 12.918 0.000 0.577 0.293 ## q39_01 0.546 0.038 14.171 0.000 0.546 0.288 ## q40_01 0.471 0.050 9.468 0.000 0.471 0.252 ## q28_01 0.733 0.086 8.530 0.000 0.733 0.384 ## q28_02 0.455 0.039 11.737 0.000 0.455 0.233 ## q28_03 0.382 0.050 7.686 0.000 0.382 0.214 ## q28_04 0.449 0.045 10.053 0.000 0.449 0.261 ## q25_01 1.027 0.067 15.308 0.000 1.027 0.498 ## q25_02 1.078 0.079 13.686 0.000 1.078 0.504 ## q25_03 1.100 0.047 23.492 0.000 1.100 0.538 ## q32_01 0.555 0.047 11.734 0.000 0.555 0.248 ## q32_02 0.579 0.067 8.626 0.000 0.579 0.248 ## q32_03 0.616 0.055 11.256 0.000 0.616 0.250 ## AE2B 1.087 0.121 8.967 0.000 0.554 0.554 ## AE1B 0.289 0.037 7.728 0.000 0.264 0.264 ## SIR 1.173 0.126 9.284 0.000 1.000 1.000 ## DL 1.033 0.121 8.554 0.000 1.000 1.000 ## ## R-Square: ## Estimate ## q08_01 0.754 ## q08_02 0.652 ## q08_03 0.584 ## q29_01 0.599 ## q30_01 0.562 ## q33_01 0.659 ## q35_01 0.690 ## q35_02 0.727 ## q35_03 0.773 ## q35_04 0.775 ## q36_01 0.737 ## q37_01 0.676 ## q38_01 0.707 ## q39_01 0.712 ## q40_01 0.748 Appendix E Survey Factor Analysis 457 ## q28_01 0.616 ## q28_02 0.767 ## q28_03 0.786 ## q28_04 0.739 ## q25_01 0.502 ## q25_02 0.496 ## q25_03 0.462 ## q32_01 0.752 ## q32_02 0.752 ## q32_03 0.750 ## AE2B 0.446 ## AE1B 0.736 semPaths(fit,intercepts=FALSE,"std", edge.label.cex = 0.5, exoVar = FALSE, exoCov = FALSE) Appendix E Survey Factor Analysis 458 Appendix E4 Stratified SEM Factor Analysis Stratified SEM Assessment AZM Model Stratified SEM 3 High-Mgmt fit <- sem(model, d0[d0$Man_Level == "High level management",], mimic="Mplus", test="satorra.bentler", estimator = "MLM") summary(fit, fit.measures=TRUE, standardized = TRUE, rsquare=TRUE) ## lavaan (0.5-20) converged normally after 43 iterations ## ## Number of observations 188 ## ## Estimator ML Robust ## Minimum Function Test Statistic 474.427 380.691 ## Degrees of freedom 248 248 ## P-value (Chi-square) 0.000 0.000 ## Scaling correction factor 1.246 ## for the Satorra-Bentler correction (Mplus variant) ## ## Model test baseline model: ## ## Minimum Function Test Statistic 2652.296 2131.255 ## Degrees of freedom 276 276 ## P-value 0.000 0.000 ## ## User model versus baseline model: ## ## Comparative Fit Index (CFI) 0.905 0.928 ## Tucker-Lewis Index (TLI) 0.894 0.920 ## ## Loglikelihood and Information Criteria: ## ## Loglikelihood user model (H0) -6654.078 -6654.078 ## Loglikelihood unrestricted model (H1) -6416.864 -6416.864 ## ## Number of free parameters 76 76 ## Akaike (AIC) 13460.156 13460.156 ## Bayesian (BIC) 13706.125 13706.125 ## Sample-size adjusted Bayesian (BIC) 13465.397 13465.397 ## ## Root Mean Square Error of Approximation: ## ## RMSEA 0.070 0.053 ## 90 Percent Confidence Interval 0.060 0.079 0.044 0.063 Appendix E Survey Factor Analysis 459 ## P-value RMSEA <= 0.05 0.001 0.275 ## ## Standardized Root Mean Square Residual: ## ## SRMR 0.069 0.069 ## ## Parameter Estimates: ## ## Information Expected ## Standard Errors Robust.sem ## ## Latent Variables: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## IS2 =~ ## q15_01 1.000 0.733 0.507 ## q19_01 1.075 0.191 5.633 0.000 0.789 0.586 ## q20_01 1.401 0.200 6.994 0.000 1.028 0.761 ## q21_01 1.383 
0.222 6.227 0.000 1.014 0.788 ## q22_01 1.256 0.187 6.703 0.000 0.921 0.797 ## q23_01 1.344 0.191 7.053 0.000 0.986 0.809 ## AE2 =~ ## q08_01 1.000 1.040 0.801 ## q08_02 0.878 0.102 8.572 0.000 0.914 0.763 ## q08_03 0.853 0.074 11.467 0.000 0.887 0.730 ## q08_04 0.798 0.072 11.147 0.000 0.830 0.784 ## q08_05 0.978 0.089 11.025 0.000 1.018 0.830 ## SOPI2 =~ ## q09_01 1.000 1.247 0.661 ## q10_01 1.285 0.116 11.102 0.000 1.603 0.819 ## q11_01 1.117 0.112 10.018 0.000 1.393 0.732 ## q12_01 1.222 0.120 10.180 0.000 1.524 0.775 ## q13_01 1.303 0.117 11.091 0.000 1.624 0.805 ## q14_01 0.752 0.093 8.108 0.000 0.938 0.558 ## q17_01 0.587 0.107 5.483 0.000 0.732 0.389 ## AE1 =~ ## q30_01 1.000 0.726 0.717 ## q37_01 1.123 0.084 13.334 0.000 0.816 0.779 ## q38_01 0.987 0.096 10.249 0.000 0.717 0.724 ## q39_01 1.094 0.098 11.187 0.000 0.794 0.785 ## q40_01 1.113 0.094 11.896 0.000 0.808 0.775 ## q24_01 0.935 0.099 9.412 0.000 0.679 0.619 ## ## Regressions: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## IS2 ~ ## SOPI2 0.255 0.067 3.832 0.000 0.434 0.434 ## AE2 ~ Appendix E Survey Factor Analysis 460 ## IS2 0.650 0.141 4.618 0.000 0.458 0.458 ## AE1 ~ ## IS2 0.749 0.130 5.753 0.000 0.757 0.757 ## ## Covariances: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## AE2 ~~ ## AE1 0.221 0.042 5.238 0.000 0.503 0.503 ## ## Intercepts: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## q15_01 5.250 0.105 49.772 0.000 5.250 3.630 ## q19_01 5.314 0.098 54.140 0.000 5.314 3.949 ## q20_01 5.186 0.098 52.685 0.000 5.186 3.842 ## q21_01 5.356 0.094 57.091 0.000 5.356 4.164 ## q22_01 5.479 0.084 65.024 0.000 5.479 4.742 ## q23_01 5.468 0.089 61.566 0.000 5.468 4.490 ## q08_01 5.824 0.095 61.481 0.000 5.824 4.484 ## q08_02 5.968 0.087 68.307 0.000 5.968 4.982 ## q08_03 5.867 0.089 66.192 0.000 5.867 4.828 ## q08_04 5.984 0.077 77.457 0.000 5.984 5.649 ## q08_05 5.862 0.089 65.576 0.000 5.862 4.783 ## q09_01 4.436 0.138 32.263 0.000 4.436 2.353 ## q10_01 4.287 0.143 30.028 0.000 4.287 2.190 ## q11_01 4.527 0.139 32.614 0.000 4.527 2.379 ## q12_01 4.340 0.143 30.280 0.000 4.340 2.208 ## q13_01 4.303 0.147 29.236 0.000 4.303 2.132 ## q14_01 4.590 0.123 37.435 0.000 4.590 2.730 ## q17_01 4.048 0.137 29.476 0.000 4.048 2.150 ## q30_01 5.915 0.074 80.119 0.000 5.915 5.843 ## q37_01 5.878 0.076 76.954 0.000 5.878 5.612 ## q38_01 5.878 0.072 81.421 0.000 5.878 5.938 ## q39_01 5.777 0.074 78.250 0.000 5.777 5.707 ## q40_01 5.862 0.076 77.079 0.000 5.862 5.622 ## q24_01 5.904 0.080 73.791 0.000 5.904 5.382 ## IS2 0.000 0.000 0.000 ## AE2 0.000 0.000 0.000 ## SOPI2 0.000 0.000 0.000 ## AE1 0.000 0.000 0.000 ## ## Variances: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## q15_01 1.554 0.135 11.479 0.000 1.554 0.743 ## q19_01 1.189 0.146 8.132 0.000 1.189 0.656 ## q20_01 0.766 0.101 7.556 0.000 0.766 0.420 ## q21_01 0.626 0.064 9.711 0.000 0.626 0.378 ## q22_01 0.486 0.058 8.370 0.000 0.486 0.364 Appendix E Survey Factor Analysis 461 ## q23_01 0.511 0.075 6.775 0.000 0.511 0.345 ## q08_01 0.605 0.086 7.004 0.000 0.605 0.358 ## q08_02 0.600 0.080 7.541 0.000 0.600 0.418 ## q08_03 0.690 0.099 6.973 0.000 0.690 0.467 ## q08_04 0.433 0.087 4.987 0.000 0.433 0.386 ## q08_05 0.467 0.075 6.251 0.000 0.467 0.311 ## q09_01 2.000 0.194 10.309 0.000 2.000 0.563 ## q10_01 1.264 0.155 8.172 0.000 1.264 0.330 ## q11_01 1.681 0.202 8.322 0.000 1.681 0.464 ## q12_01 1.540 0.220 7.006 0.000 1.540 0.399 ## q13_01 1.435 0.154 9.296 0.000 1.435 0.352 ## q14_01 1.947 0.198 9.816 0.000 1.947 0.689 ## q17_01 3.010 0.280 10.764 0.000 
3.010 0.849 ## q30_01 0.497 0.062 8.041 0.000 0.497 0.485 ## q37_01 0.431 0.043 9.945 0.000 0.431 0.393 ## q38_01 0.466 0.047 9.856 0.000 0.466 0.475 ## q39_01 0.393 0.041 9.553 0.000 0.393 0.384 ## q40_01 0.434 0.080 5.447 0.000 0.434 0.399 ## q24_01 0.742 0.129 5.773 0.000 0.742 0.617 ## IS2 0.437 0.123 3.541 0.000 0.812 0.812 ## AE2 0.855 0.194 4.399 0.000 0.790 0.790 ## SOPI2 1.555 0.259 6.012 0.000 1.000 1.000 ## AE1 0.225 0.050 4.484 0.000 0.427 0.427 ## ## R-Square: ## Estimate ## q15_01 0.257 ## q19_01 0.344 ## q20_01 0.580 ## q21_01 0.622 ## q22_01 0.636 ## q23_01 0.655 ## q08_01 0.642 ## q08_02 0.582 ## q08_03 0.533 ## q08_04 0.614 ## q08_05 0.689 ## q09_01 0.437 ## q10_01 0.670 ## q11_01 0.536 ## q12_01 0.601 ## q13_01 0.648 ## q14_01 0.311 ## q17_01 0.151 ## q30_01 0.515 ## q37_01 0.607 ## q38_01 0.525 Appendix E Survey Factor Analysis 462 ## q39_01 0.616 ## q40_01 0.601 ## q24_01 0.383 ## IS2 0.188 ## AE2 0.210 ## AE1 0.573 semPaths(fit,intercepts=FALSE,"std", edge.label.cex = 0.5, exoVar = FALSE, exoCov = FALSE) AZM Model Stratified SEM 4 Mid-Mgmt fit <- sem(model, d0[d0$Man_Level == "Mid management",], mimic="Mplus", test="satorra.bentler", estimator = "MLM") summary(fit, fit.measures=TRUE, standardized = TRUE, rsquare=TRUE) ## lavaan (0.5-20) converged normally after 44 iterations ## Appendix E Survey Factor Analysis 463 ## Number of observations 141 ## ## Estimator ML Robust ## Minimum Function Test Statistic 451.992 353.145 ## Degrees of freedom 248 248 ## P-value (Chi-square) 0.000 0.000 ## Scaling correction factor 1.280 ## for the Satorra-Bentler correction (Mplus variant) ## ## Model test baseline model: ## ## Minimum Function Test Statistic 2243.759 1770.803 ## Degrees of freedom 276 276 ## P-value 0.000 0.000 ## ## User model versus baseline model: ## ## Comparative Fit Index (CFI) 0.896 0.930 ## Tucker-Lewis Index (TLI) 0.885 0.922 ## ## Loglikelihood and Information Criteria: ## ## Loglikelihood user model (H0) -5049.880 -5049.880 ## Loglikelihood unrestricted model (H1) -4823.884 -4823.884 ## ## Number of free parameters 76 76 ## Akaike (AIC) 10251.759 10251.759 ## Bayesian (BIC) 10475.865 10475.865 ## Sample-size adjusted Bayesian (BIC) 10235.403 10235.403 ## ## Root Mean Square Error of Approximation: ## ## RMSEA 0.076 0.055 ## 90 Percent Confidence Interval 0.065 0.087 0.043 0.066 ## P-value RMSEA <= 0.05 0.000 0.241 ## ## Standardized Root Mean Square Residual: ## ## SRMR 0.072 0.072 ## ## Parameter Estimates: ## ## Information Expected ## Standard Errors Robust.sem ## ## Latent Variables: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all Appendix E Survey Factor Analysis 464 ## IS2 =~ ## q15_01 1.000 1.022 0.708 ## q19_01 0.888 0.130 6.848 0.000 0.907 0.651 ## q20_01 1.017 0.099 10.238 0.000 1.039 0.734 ## q21_01 1.071 0.086 12.435 0.000 1.094 0.845 ## q22_01 1.110 0.087 12.786 0.000 1.134 0.880 ## q23_01 0.985 0.093 10.591 0.000 1.006 0.781 ## AE2 =~ ## q08_01 1.000 1.257 0.847 ## q08_02 0.931 0.059 15.814 0.000 1.170 0.817 ## q08_03 0.760 0.078 9.781 0.000 0.956 0.674 ## q08_04 0.765 0.061 12.625 0.000 0.962 0.769 ## q08_05 0.850 0.061 13.945 0.000 1.068 0.792 ## SOPI2 =~ ## q09_01 1.000 1.036 0.660 ## q10_01 1.042 0.133 7.814 0.000 1.080 0.639 ## q11_01 0.780 0.140 5.571 0.000 0.808 0.521 ## q12_01 1.249 0.143 8.734 0.000 1.294 0.742 ## q13_01 1.323 0.142 9.302 0.000 1.371 0.781 ## q14_01 0.810 0.121 6.676 0.000 0.839 0.563 ## q17_01 0.521 0.119 4.393 0.000 0.540 0.374 ## AE1 =~ ## q30_01 1.000 0.826 0.689 ## q37_01 1.194 0.125 9.528 0.000 0.986 0.795 ## 
q38_01 1.328 0.170 7.828 0.000 1.096 0.844 ## q39_01 1.320 0.168 7.873 0.000 1.090 0.832 ## q40_01 1.262 0.127 9.913 0.000 1.042 0.847 ## q24_01 1.011 0.142 7.127 0.000 0.835 0.655 ## ## Regressions: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## IS2 ~ ## SOPI2 0.397 0.125 3.174 0.002 0.402 0.402 ## AE2 ~ ## IS2 0.730 0.091 8.013 0.000 0.593 0.593 ## AE1 ~ ## IS2 0.683 0.100 6.841 0.000 0.845 0.845 ## ## Covariances: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## AE2 ~~ ## AE1 0.197 0.052 3.801 0.000 0.441 0.441 ## ## Intercepts: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## q15_01 4.943 0.122 40.681 0.000 4.943 3.426 ## q19_01 4.851 0.117 41.332 0.000 4.851 3.481 Appendix E Survey Factor Analysis 465 ## q20_01 4.773 0.119 40.024 0.000 4.773 3.371 ## q21_01 4.823 0.109 44.211 0.000 4.823 3.723 ## q22_01 4.865 0.109 44.803 0.000 4.865 3.773 ## q23_01 5.028 0.108 46.360 0.000 5.028 3.904 ## q08_01 5.284 0.125 42.269 0.000 5.284 3.560 ## q08_02 5.326 0.121 44.177 0.000 5.326 3.720 ## q08_03 5.206 0.119 43.629 0.000 5.206 3.674 ## q08_04 5.433 0.105 51.572 0.000 5.433 4.343 ## q08_05 5.319 0.114 46.817 0.000 5.319 3.943 ## q09_01 4.184 0.132 31.663 0.000 4.184 2.667 ## q10_01 4.170 0.142 29.330 0.000 4.170 2.470 ## q11_01 4.383 0.131 33.549 0.000 4.383 2.825 ## q12_01 4.142 0.147 28.190 0.000 4.142 2.374 ## q13_01 4.277 0.148 28.938 0.000 4.277 2.437 ## q14_01 4.447 0.125 35.449 0.000 4.447 2.985 ## q17_01 3.801 0.122 31.237 0.000 3.801 2.631 ## q30_01 5.418 0.101 53.713 0.000 5.418 4.523 ## q37_01 5.291 0.104 50.633 0.000 5.291 4.264 ## q38_01 5.270 0.109 48.186 0.000 5.270 4.058 ## q39_01 5.191 0.110 47.071 0.000 5.191 3.964 ## q40_01 5.383 0.104 51.967 0.000 5.383 4.376 ## q24_01 5.142 0.107 47.892 0.000 5.142 4.033 ## IS2 0.000 0.000 0.000 ## AE2 0.000 0.000 0.000 ## SOPI2 0.000 0.000 0.000 ## AE1 0.000 0.000 0.000 ## ## Variances: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## q15_01 1.038 0.146 7.125 0.000 1.038 0.499 ## q19_01 1.120 0.163 6.866 0.000 1.120 0.577 ## q20_01 0.926 0.101 9.196 0.000 0.926 0.462 ## q21_01 0.481 0.063 7.652 0.000 0.481 0.287 ## q22_01 0.376 0.055 6.864 0.000 0.376 0.226 ## q23_01 0.646 0.072 8.924 0.000 0.646 0.390 ## q08_01 0.623 0.098 6.343 0.000 0.623 0.283 ## q08_02 0.680 0.102 6.655 0.000 0.680 0.332 ## q08_03 1.094 0.163 6.720 0.000 1.094 0.545 ## q08_04 0.640 0.079 8.052 0.000 0.640 0.409 ## q08_05 0.678 0.104 6.543 0.000 0.678 0.373 ## q09_01 1.389 0.122 11.339 0.000 1.389 0.564 ## q10_01 1.685 0.250 6.753 0.000 1.685 0.591 ## q11_01 1.754 0.196 8.926 0.000 1.754 0.729 ## q12_01 1.369 0.177 7.718 0.000 1.369 0.450 ## q13_01 1.200 0.183 6.555 0.000 1.200 0.390 ## q14_01 1.514 0.172 8.788 0.000 1.514 0.682 ## q17_01 1.796 0.156 11.529 0.000 1.796 0.860 Appendix E Survey Factor Analysis 466 ## q30_01 0.753 0.077 9.761 0.000 0.753 0.525 ## q37_01 0.568 0.060 9.525 0.000 0.568 0.369 ## q38_01 0.484 0.053 9.145 0.000 0.484 0.287 ## q39_01 0.527 0.085 6.210 0.000 0.527 0.307 ## q40_01 0.428 0.053 8.125 0.000 0.428 0.283 ## q24_01 0.928 0.106 8.735 0.000 0.928 0.571 ## IS2 0.875 0.173 5.058 0.000 0.838 0.838 ## AE2 1.024 0.165 6.186 0.000 0.648 0.648 ## SOPI2 1.074 0.224 4.784 0.000 1.000 1.000 ## AE1 0.195 0.058 3.363 0.001 0.286 0.286 ## ## R-Square: ## Estimate ## q15_01 0.501 ## q19_01 0.423 ## q20_01 0.538 ## q21_01 0.713 ## q22_01 0.774 ## q23_01 0.610 ## q08_01 0.717 ## q08_02 0.668 ## q08_03 0.455 ## q08_04 0.591 ## q08_05 0.627 ## q09_01 0.436 ## q10_01 0.409 ## q11_01 0.271 ## q12_01 0.550 ## q13_01 0.610 ## q14_01 0.318 
## q17_01 0.140 ## q30_01 0.475 ## q37_01 0.631 ## q38_01 0.713 ## q39_01 0.693 ## q40_01 0.717 ## q24_01 0.429 ## IS2 0.162 ## AE2 0.352 ## AE1 0.714 semPaths(fit,intercepts=FALSE,"std", edge.label.cex = 0.5, exoVar = FALSE, exoCov = FALSE) Appendix E Survey Factor Analysis 467 AZM Model Stratified SEM 5 Non-Mgmt fit <- sem(model, d0[d0$Man_Level == "Non management",], mimic="Mplus", test="satorra.bentler", estimator = "MLM") summary(fit, fit.measures=TRUE, standardized = TRUE, rsquare=TRUE) ## lavaan (0.5-20) converged normally after 48 iterations ## ## Number of observations 175 ## ## Estimator ML Robust ## Minimum Function Test Statistic 482.489 377.955 ## Degrees of freedom 248 248 ## P-value (Chi-square) 0.000 0.000 ## Scaling correction factor 1.277 ## for the Satorra-Bentler correction (Mplus variant) Appendix E Survey Factor Analysis 468 ## ## Model test baseline model: ## ## Minimum Function Test Statistic 3085.963 2457.755 ## Degrees of freedom 276 276 ## P-value 0.000 0.000 ## ## User model versus baseline model: ## ## Comparative Fit Index (CFI) 0.917 0.940 ## Tucker-Lewis Index (TLI) 0.907 0.934 ## ## Loglikelihood and Information Criteria: ## ## Loglikelihood user model (H0) -6620.480 -6620.480 ## Loglikelihood unrestricted model (H1) -6379.236 -6379.236 ## ## Number of free parameters 76 76 ## Akaike (AIC) 13392.960 13392.960 ## Bayesian (BIC) 13633.484 13633.484 ## Sample-size adjusted Bayesian (BIC) 13392.816 13392.816 ## ## Root Mean Square Error of Approximation: ## ## RMSEA 0.074 0.055 ## 90 Percent Confidence Interval 0.064 0.083 0.045 0.064 ## P-value RMSEA <= 0.05 0.000 0.211 ## ## Standardized Root Mean Square Residual: ## ## SRMR 0.086 0.086 ## ## Parameter Estimates: ## ## Information Expected ## Standard Errors Robust.sem ## ## Latent Variables: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## IS2 =~ ## q15_01 1.000 1.126 0.689 ## q19_01 0.913 0.094 9.690 0.000 1.028 0.626 ## q20_01 0.910 0.088 10.352 0.000 1.025 0.677 ## q21_01 1.125 0.086 13.135 0.000 1.267 0.887 ## q22_01 1.093 0.089 12.332 0.000 1.230 0.849 ## q23_01 1.084 0.087 12.517 0.000 1.221 0.881 ## AE2 =~ Appendix E Survey Factor Analysis 469 ## q08_01 1.000 1.408 0.826 ## q08_02 0.939 0.064 14.573 0.000 1.323 0.772 ## q08_03 0.951 0.055 17.364 0.000 1.339 0.780 ## q08_04 0.911 0.071 12.784 0.000 1.282 0.813 ## q08_05 0.949 0.074 12.761 0.000 1.336 0.794 ## SOPI2 =~ ## q09_01 1.000 1.031 0.613 ## q10_01 1.298 0.138 9.426 0.000 1.338 0.790 ## q11_01 0.758 0.147 5.151 0.000 0.782 0.443 ## q12_01 1.144 0.144 7.962 0.000 1.180 0.702 ## q13_01 1.294 0.149 8.661 0.000 1.334 0.747 ## q14_01 0.934 0.123 7.619 0.000 0.963 0.602 ## q17_01 0.339 0.141 2.401 0.016 0.349 0.218 ## AE1 =~ ## q30_01 1.000 1.073 0.742 ## q37_01 1.231 0.088 13.962 0.000 1.321 0.874 ## q38_01 1.322 0.098 13.547 0.000 1.419 0.906 ## q39_01 1.167 0.097 11.986 0.000 1.253 0.867 ## q40_01 1.343 0.090 14.864 0.000 1.441 0.939 ## q24_01 1.026 0.099 10.379 0.000 1.101 0.676 ## ## Regressions: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## IS2 ~ ## SOPI2 0.450 0.104 4.310 0.000 0.412 0.412 ## AE2 ~ ## IS2 0.849 0.084 10.075 0.000 0.679 0.679 ## AE1 ~ ## IS2 0.797 0.088 9.009 0.000 0.836 0.836 ## ## Covariances: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## AE2 ~~ ## AE1 0.254 0.072 3.508 0.000 0.416 0.416 ## ## Intercepts: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## q15_01 4.474 0.124 36.208 0.000 4.474 2.737 ## q19_01 4.560 0.124 36.765 0.000 4.560 2.779 ## q20_01 4.383 0.114 38.286 0.000 4.383 2.894 ## q21_01 
4.543 0.108 42.051 0.000 4.543 3.179 ## q22_01 4.663 0.109 42.589 0.000 4.663 3.219 ## q23_01 4.640 0.105 44.277 0.000 4.640 3.347 ## q08_01 4.440 0.129 34.436 0.000 4.440 2.603 ## q08_02 4.806 0.129 37.117 0.000 4.806 2.806 ## q08_03 4.457 0.130 34.357 0.000 4.457 2.597 ## q08_04 4.623 0.119 38.784 0.000 4.623 2.932 Appendix E Survey Factor Analysis 470 ## q08_05 4.549 0.127 35.765 0.000 4.549 2.704 ## q09_01 3.206 0.127 25.225 0.000 3.206 1.907 ## q10_01 3.223 0.128 25.165 0.000 3.223 1.902 ## q11_01 3.840 0.133 28.772 0.000 3.840 2.175 ## q12_01 3.291 0.127 25.912 0.000 3.291 1.959 ## q13_01 3.223 0.135 23.869 0.000 3.223 1.804 ## q14_01 3.680 0.121 30.423 0.000 3.680 2.300 ## q17_01 3.697 0.121 30.536 0.000 3.697 2.308 ## q30_01 4.869 0.109 44.536 0.000 4.869 3.367 ## q37_01 4.777 0.114 41.784 0.000 4.777 3.159 ## q38_01 4.623 0.118 39.054 0.000 4.623 2.952 ## q39_01 4.474 0.109 40.949 0.000 4.474 3.095 ## q40_01 4.754 0.116 40.968 0.000 4.754 3.097 ## q24_01 4.417 0.123 35.864 0.000 4.417 2.711 ## IS2 0.000 0.000 0.000 ## AE2 0.000 0.000 0.000 ## SOPI2 0.000 0.000 0.000 ## AE1 0.000 0.000 0.000 ## ## Variances: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## q15_01 1.404 0.187 7.501 0.000 1.404 0.526 ## q19_01 1.636 0.182 9.009 0.000 1.636 0.608 ## q20_01 1.244 0.126 9.853 0.000 1.244 0.542 ## q21_01 0.437 0.105 4.148 0.000 0.437 0.214 ## q22_01 0.585 0.088 6.635 0.000 0.585 0.279 ## q23_01 0.431 0.064 6.685 0.000 0.431 0.224 ## q08_01 0.927 0.132 7.044 0.000 0.927 0.318 ## q08_02 1.184 0.173 6.831 0.000 1.184 0.404 ## q08_03 1.153 0.162 7.128 0.000 1.153 0.391 ## q08_04 0.842 0.130 6.469 0.000 0.842 0.339 ## q08_05 1.047 0.133 7.897 0.000 1.047 0.370 ## q09_01 1.763 0.239 7.376 0.000 1.763 0.624 ## q10_01 1.080 0.144 7.513 0.000 1.080 0.376 ## q11_01 2.506 0.278 9.028 0.000 2.506 0.804 ## q12_01 1.432 0.185 7.752 0.000 1.432 0.507 ## q13_01 1.411 0.177 7.952 0.000 1.411 0.442 ## q14_01 1.633 0.212 7.715 0.000 1.633 0.638 ## q17_01 2.443 0.216 11.300 0.000 2.443 0.952 ## q30_01 0.939 0.102 9.173 0.000 0.939 0.449 ## q37_01 0.541 0.064 8.426 0.000 0.541 0.237 ## q38_01 0.439 0.069 6.400 0.000 0.439 0.179 ## q39_01 0.520 0.060 8.602 0.000 0.520 0.249 ## q40_01 0.279 0.045 6.222 0.000 0.279 0.119 ## q24_01 1.442 0.200 7.226 0.000 1.442 0.543 ## IS2 1.052 0.147 7.171 0.000 0.830 0.830 ## AE2 1.068 0.180 5.941 0.000 0.539 0.539 Appendix E Survey Factor Analysis 471 ## SOPI2 1.063 0.220 4.821 0.000 1.000 1.000 ## AE1 0.348 0.078 4.456 0.000 0.302 0.302 ## ## R-Square: ## Estimate ## q15_01 0.474 ## q19_01 0.392 ## q20_01 0.458 ## q21_01 0.786 ## q22_01 0.721 ## q23_01 0.776 ## q08_01 0.682 ## q08_02 0.596 ## q08_03 0.609 ## q08_04 0.661 ## q08_05 0.630 ## q09_01 0.376 ## q10_01 0.624 ## q11_01 0.196 ## q12_01 0.493 ## q13_01 0.558 ## q14_01 0.362 ## q17_01 0.048 ## q30_01 0.551 ## q37_01 0.763 ## q38_01 0.821 ## q39_01 0.751 ## q40_01 0.881 ## q24_01 0.457 ## IS2 0.170 ## AE2 0.461 ## AE1 0.698 semPaths(fit,intercepts=FALSE,"std", edge.label.cex = 0.5, exoVar = FALSE, exoCov = FALSE) Appendix E Survey Factor Analysis 472 AZM Model Stratified SEM 6 High/Mid-Mgmt High level management+Mid management fit <- sem(model, d0[d0$Man_Level == "High level management" | d0$Man_Level =="Mid management",], mimic="Mplus", test="satorra.bentler", estimator = "MLM") summary(fit, fit.measures=TRUE, standardized = TRUE, rsquare=TRUE) ## lavaan (0.5-20) converged normally after 43 iterations ## ## Number of observations 329 ## ## Estimator ML Robust ## Minimum Function Test Statistic 510.760 375.664 ## 
Degrees of freedom 248 248 ## P-value (Chi-square) 0.000 0.000 ## Scaling correction factor 1.360 Appendix E Survey Factor Analysis 473 ## for the Satorra-Bentler correction (Mplus variant) ## ## Model test baseline model: ## ## Minimum Function Test Statistic 4702.646 3513.228 ## Degrees of freedom 276 276 ## P-value 0.000 0.000 ## ## User model versus baseline model: ## ## Comparative Fit Index (CFI) 0.941 0.961 ## Tucker-Lewis Index (TLI) 0.934 0.956 ## ## Loglikelihood and Information Criteria: ## ## Loglikelihood user model (H0) -11779.708 -11779.708 ## Loglikelihood unrestricted model (H1) -11524.328 -11524.328 ## ## Number of free parameters 76 76 ## Akaike (AIC) 23711.416 23711.416 ## Bayesian (BIC) 23999.916 23999.916 ## Sample-size adjusted Bayesian (BIC) 23758.845 23758.845 ## ## Root Mean Square Error of Approximation: ## ## RMSEA 0.057 0.040 ## 90 Percent Confidence Interval 0.050 0.064 0.032 0.046 ## P-value RMSEA <= 0.05 0.056 0.995 ## ## Standardized Root Mean Square Residual: ## ## SRMR 0.048 0.048 ## ## Parameter Estimates: ## ## Information Expected ## Standard Errors Robust.sem ## ## Latent Variables: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## IS2 =~ ## q15_01 1.000 0.867 0.597 ## q19_01 1.008 0.100 10.047 0.000 0.874 0.631 ## q20_01 1.201 0.110 10.947 0.000 1.041 0.747 ## q21_01 1.252 0.107 11.732 0.000 1.086 0.824 ## q22_01 1.228 0.104 11.793 0.000 1.065 0.850 ## q23_01 1.163 0.104 11.216 0.000 1.008 0.795 Appendix E Survey Factor Analysis 474 ## AE2 =~ ## q08_01 1.000 1.158 0.823 ## q08_02 0.926 0.058 15.879 0.000 1.072 0.799 ## q08_03 0.848 0.055 15.526 0.000 0.982 0.729 ## q08_04 0.806 0.048 16.859 0.000 0.934 0.793 ## q08_05 0.933 0.050 18.626 0.000 1.081 0.826 ## SOPI2 =~ ## q09_01 1.000 1.175 0.667 ## q10_01 1.176 0.097 12.147 0.000 1.382 0.748 ## q11_01 0.981 0.094 10.427 0.000 1.153 0.654 ## q12_01 1.220 0.099 12.347 0.000 1.434 0.764 ## q13_01 1.298 0.099 13.115 0.000 1.525 0.798 ## q14_01 0.775 0.074 10.419 0.000 0.911 0.568 ## q17_01 0.578 0.085 6.818 0.000 0.678 0.396 ## AE1 =~ ## q30_01 1.000 0.805 0.717 ## q37_01 1.163 0.078 14.825 0.000 0.936 0.799 ## q38_01 1.173 0.097 12.055 0.000 0.944 0.806 ## q39_01 1.215 0.100 12.144 0.000 0.978 0.825 ## q40_01 1.172 0.080 14.698 0.000 0.943 0.819 ## q24_01 1.033 0.095 10.867 0.000 0.831 0.673 ## ## Regressions: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## IS2 ~ ## SOPI2 0.305 0.060 5.051 0.000 0.413 0.413 ## AE2 ~ ## IS2 0.737 0.093 7.907 0.000 0.552 0.552 ## AE1 ~ ## IS2 0.764 0.086 8.872 0.000 0.823 0.823 ## ## Covariances: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## AE2 ~~ ## AE1 0.215 0.038 5.654 0.000 0.487 0.487 ## ## Intercepts: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## q15_01 5.119 0.080 63.906 0.000 5.119 3.523 ## q19_01 5.116 0.076 66.967 0.000 5.116 3.692 ## q20_01 5.009 0.077 65.195 0.000 5.009 3.594 ## q21_01 5.128 0.073 70.621 0.000 5.128 3.893 ## q22_01 5.216 0.069 75.566 0.000 5.216 4.166 ## q23_01 5.280 0.070 75.574 0.000 5.280 4.167 ## q08_01 5.593 0.078 72.092 0.000 5.593 3.975 ## q08_02 5.693 0.074 76.980 0.000 5.693 4.244 ## q08_03 5.584 0.074 75.250 0.000 5.584 4.149 Appendix E Survey Factor Analysis 475 ## q08_04 5.748 0.065 88.548 0.000 5.748 4.882 ## q08_05 5.629 0.072 78.069 0.000 5.629 4.304 ## q09_01 4.328 0.097 44.575 0.000 4.328 2.458 ## q10_01 4.237 0.102 41.589 0.000 4.237 2.293 ## q11_01 4.465 0.097 45.955 0.000 4.465 2.534 ## q12_01 4.255 0.103 41.131 0.000 4.255 2.268 ## q13_01 4.292 0.105 40.761 0.000 4.292 2.247 ## q14_01 4.529 0.088 
51.228 0.000 4.529 2.824 ## q17_01 3.942 0.094 41.732 0.000 3.942 2.301 ## q30_01 5.702 0.062 92.110 0.000 5.702 5.078 ## q37_01 5.626 0.065 87.159 0.000 5.626 4.805 ## q38_01 5.617 0.065 86.947 0.000 5.617 4.794 ## q39_01 5.526 0.065 84.578 0.000 5.526 4.663 ## q40_01 5.657 0.063 89.106 0.000 5.657 4.913 ## q24_01 5.578 0.068 81.878 0.000 5.578 4.514 ## IS2 0.000 0.000 0.000 ## AE2 0.000 0.000 0.000 ## SOPI2 0.000 0.000 0.000 ## AE1 0.000 0.000 0.000 ## ## Variances: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## q15_01 1.359 0.111 12.212 0.000 1.359 0.644 ## q19_01 1.156 0.130 8.925 0.000 1.156 0.602 ## q20_01 0.859 0.081 10.665 0.000 0.859 0.442 ## q21_01 0.556 0.048 11.592 0.000 0.556 0.320 ## q22_01 0.434 0.045 9.578 0.000 0.434 0.277 ## q23_01 0.590 0.070 8.482 0.000 0.590 0.367 ## q08_01 0.639 0.066 9.617 0.000 0.639 0.323 ## q08_02 0.651 0.065 9.953 0.000 0.651 0.362 ## q08_03 0.848 0.106 8.037 0.000 0.848 0.468 ## q08_04 0.514 0.067 7.682 0.000 0.514 0.371 ## q08_05 0.543 0.082 6.658 0.000 0.543 0.317 ## q09_01 1.722 0.155 11.073 0.000 1.722 0.555 ## q10_01 1.505 0.170 8.829 0.000 1.505 0.441 ## q11_01 1.777 0.183 9.688 0.000 1.777 0.572 ## q12_01 1.466 0.157 9.360 0.000 1.466 0.416 ## q13_01 1.323 0.142 9.320 0.000 1.323 0.363 ## q14_01 1.742 0.172 10.150 0.000 1.742 0.678 ## q17_01 2.476 0.178 13.923 0.000 2.476 0.843 ## q30_01 0.613 0.056 10.909 0.000 0.613 0.487 ## q37_01 0.495 0.039 12.626 0.000 0.495 0.361 ## q38_01 0.482 0.040 11.914 0.000 0.482 0.351 ## q39_01 0.449 0.046 9.829 0.000 0.449 0.320 ## q40_01 0.437 0.065 6.666 0.000 0.437 0.329 ## q24_01 0.836 0.097 8.654 0.000 0.836 0.547 ## IS2 0.623 0.111 5.612 0.000 0.829 0.829 Appendix E Survey Factor Analysis 476 ## AE2 0.933 0.130 7.157 0.000 0.696 0.696 ## SOPI2 1.380 0.195 7.095 0.000 1.000 1.000 ## AE1 0.209 0.041 5.095 0.000 0.323 0.323 ## ## R-Square: ## Estimate ## q15_01 0.356 ## q19_01 0.398 ## q20_01 0.558 ## q21_01 0.680 ## q22_01 0.723 ## q23_01 0.633 ## q08_01 0.677 ## q08_02 0.638 ## q08_03 0.532 ## q08_04 0.629 ## q08_05 0.683 ## q09_01 0.445 ## q10_01 0.559 ## q11_01 0.428 ## q12_01 0.584 ## q13_01 0.637 ## q14_01 0.322 ## q17_01 0.157 ## q30_01 0.513 ## q37_01 0.639 ## q38_01 0.649 ## q39_01 0.680 ## q40_01 0.671 ## q24_01 0.453 ## IS2 0.171 ## AE2 0.304 ## AE1 0.677 semPaths(fit,intercepts=FALSE,"std", edge.label.cex = 0.5, exoVar = FALSE, exoCov = FALSE) Appendix E Survey Factor Analysis 477 AZM Model Stratified Mediation Effect SEM 7 High-Mgmt fit <- sem(model, d0[d0$Man_Level == "High level management",], mimic="Mplus", test="satorra.bentler", estimator = "MLM") summary(fit, fit.measures=TRUE, standardized = TRUE, rsquare=TRUE) ## lavaan (0.5-20) converged normally after 44 iterations ## ## Number of observations 188 ## ## Estimator ML Robust ## Minimum Function Test Statistic 470.818 375.796 ## Degrees of freedom 246 246 ## P-value (Chi-square) 0.000 0.000 ## Scaling correction factor 1.253 ## for the Satorra-Bentler correction (Mplus variant) ## ## Model test baseline model: Appendix E Survey Factor Analysis 478 ## ## Minimum Function Test Statistic 2652.296 2131.255 ## Degrees of freedom 276 276 ## P-value 0.000 0.000 ## ## User model versus baseline model: ## ## Comparative Fit Index (CFI) 0.905 0.930 ## Tucker-Lewis Index (TLI) 0.894 0.922 ## ## Loglikelihood and Information Criteria: ## ## Loglikelihood user model (H0) -6652.273 -6652.273 ## Loglikelihood unrestricted model (H1) -6416.864 -6416.864 ## ## Number of free parameters 78 78 ## Akaike (AIC) 13460.546 13460.546 ## Bayesian 
(BIC) 13712.989 13712.989 ## Sample-size adjusted Bayesian (BIC) 13465.926 13465.926 ## ## Root Mean Square Error of Approximation: ## ## RMSEA 0.070 0.053 ## 90 Percent Confidence Interval 0.060 0.079 0.043 0.062 ## P-value RMSEA <= 0.05 0.001 0.298 ## ## Standardized Root Mean Square Residual: ## ## SRMR 0.064 0.064 ## ## Parameter Estimates: ## ## Information Expected ## Standard Errors Robust.sem ## ## Latent Variables: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## IS2 =~ ## q15_01 1.000 0.737 0.509 ## q19_01 1.071 0.188 5.686 0.000 0.789 0.587 ## q20_01 1.391 0.197 7.056 0.000 1.025 0.759 ## q21_01 1.370 0.218 6.272 0.000 1.010 0.785 ## q22_01 1.251 0.185 6.774 0.000 0.921 0.798 ## q23_01 1.336 0.188 7.098 0.000 0.985 0.809 ## AE2 =~ ## q08_01 1.000 1.040 0.801 ## q08_02 0.880 0.104 8.447 0.000 0.915 0.764 Appendix E Survey Factor Analysis 479 ## q08_03 0.854 0.075 11.344 0.000 0.888 0.730 ## q08_04 0.799 0.072 11.068 0.000 0.830 0.784 ## q08_05 0.978 0.088 11.140 0.000 1.017 0.830 ## SOPI2 =~ ## q09_01 1.000 1.248 0.662 ## q10_01 1.279 0.113 11.283 0.000 1.596 0.815 ## q11_01 1.117 0.108 10.378 0.000 1.394 0.733 ## q12_01 1.220 0.118 10.306 0.000 1.522 0.774 ## q13_01 1.305 0.116 11.287 0.000 1.629 0.807 ## q14_01 0.755 0.089 8.454 0.000 0.942 0.560 ## q17_01 0.588 0.106 5.547 0.000 0.733 0.389 ## AE1 =~ ## q30_01 1.000 0.726 0.717 ## q37_01 1.126 0.083 13.554 0.000 0.817 0.780 ## q38_01 0.987 0.094 10.487 0.000 0.717 0.724 ## q39_01 1.094 0.095 11.565 0.000 0.794 0.785 ## q40_01 1.112 0.092 12.033 0.000 0.807 0.774 ## q24_01 0.934 0.098 9.577 0.000 0.678 0.618 ## ## Regressions: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## IS2 ~ ## SOPI2 (a) 0.267 0.068 3.934 0.000 0.452 0.452 ## AE2 ~ ## IS2 (d) 0.721 0.148 4.869 0.000 0.511 0.511 ## AE1 ~ ## IS2 (e) 0.811 0.135 6.011 0.000 0.823 0.823 ## AE2 ~ ## SOPI2 (b) -0.087 0.055 -1.578 0.115 -0.105 -0.105 ## AE1 ~ ## SOPI2 (c) -0.077 0.032 -2.416 0.016 -0.132 -0.132 ## ## Covariances: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## AE2 ~~ ## AE1 0.207 0.042 4.903 0.000 0.491 0.491 ## ## Intercepts: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## q15_01 5.250 0.105 49.772 0.000 5.250 3.630 ## q19_01 5.314 0.098 54.140 0.000 5.314 3.949 ## q20_01 5.186 0.098 52.685 0.000 5.186 3.842 ## q21_01 5.356 0.094 57.091 0.000 5.356 4.164 ## q22_01 5.479 0.084 65.024 0.000 5.479 4.742 ## q23_01 5.468 0.089 61.566 0.000 5.468 4.490 ## q08_01 5.824 0.095 61.481 0.000 5.824 4.484 ## q08_02 5.968 0.087 68.307 0.000 5.968 4.982 Appendix E Survey Factor Analysis 480 ## q08_03 5.867 0.089 66.192 0.000 5.867 4.828 ## q08_04 5.984 0.077 77.457 0.000 5.984 5.649 ## q08_05 5.862 0.089 65.576 0.000 5.862 4.783 ## q09_01 4.436 0.138 32.263 0.000 4.436 2.353 ## q10_01 4.287 0.143 30.028 0.000 4.287 2.190 ## q11_01 4.527 0.139 32.614 0.000 4.527 2.379 ## q12_01 4.340 0.143 30.280 0.000 4.340 2.208 ## q13_01 4.303 0.147 29.236 0.000 4.303 2.132 ## q14_01 4.590 0.123 37.435 0.000 4.590 2.730 ## q17_01 4.048 0.137 29.476 0.000 4.048 2.150 ## q30_01 5.915 0.074 80.119 0.000 5.915 5.843 ## q37_01 5.878 0.076 76.954 0.000 5.878 5.612 ## q38_01 5.878 0.072 81.421 0.000 5.878 5.938 ## q39_01 5.777 0.074 78.250 0.000 5.777 5.707 ## q40_01 5.862 0.076 77.079 0.000 5.862 5.622 ## q24_01 5.904 0.080 73.791 0.000 5.904 5.382 ## IS2 0.000 0.000 0.000 ## AE2 0.000 0.000 0.000 ## SOPI2 0.000 0.000 0.000 ## AE1 0.000 0.000 0.000 ## ## Variances: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## q15_01 1.549 0.135 11.475 0.000 1.549 0.740 ## q19_01 
1.188 0.146 8.117 0.000 1.188 0.656 ## q20_01 0.771 0.101 7.607 0.000 0.771 0.423 ## q21_01 0.635 0.065 9.813 0.000 0.635 0.384 ## q22_01 0.486 0.058 8.365 0.000 0.486 0.364 ## q23_01 0.513 0.075 6.825 0.000 0.513 0.346 ## q08_01 0.606 0.087 6.984 0.000 0.606 0.359 ## q08_02 0.598 0.079 7.546 0.000 0.598 0.417 ## q08_03 0.689 0.099 6.963 0.000 0.689 0.466 ## q08_04 0.433 0.087 4.994 0.000 0.433 0.386 ## q08_05 0.468 0.074 6.290 0.000 0.468 0.312 ## q09_01 1.997 0.193 10.327 0.000 1.997 0.562 ## q10_01 1.284 0.156 8.214 0.000 1.284 0.335 ## q11_01 1.678 0.198 8.492 0.000 1.678 0.463 ## q12_01 1.546 0.220 7.030 0.000 1.546 0.400 ## q13_01 1.420 0.154 9.231 0.000 1.420 0.349 ## q14_01 1.940 0.197 9.860 0.000 1.940 0.686 ## q17_01 3.008 0.282 10.673 0.000 3.008 0.848 ## q30_01 0.497 0.062 8.029 0.000 0.497 0.486 ## q37_01 0.429 0.043 9.875 0.000 0.429 0.391 ## q38_01 0.466 0.047 9.853 0.000 0.466 0.475 ## q39_01 0.394 0.041 9.563 0.000 0.394 0.384 ## q40_01 0.435 0.080 5.430 0.000 0.435 0.400 ## q24_01 0.744 0.129 5.786 0.000 0.744 0.618 Appendix E Survey Factor Analysis 481 ## IS2 0.432 0.121 3.561 0.000 0.796 0.796 ## AE2 0.840 0.191 4.399 0.000 0.777 0.777 ## SOPI2 1.558 0.255 6.098 0.000 1.000 1.000 ## AE1 0.213 0.049 4.382 0.000 0.404 0.404 ## ## R-Square: ## Estimate ## q15_01 0.260 ## q19_01 0.344 ## q20_01 0.577 ## q21_01 0.616 ## q22_01 0.636 ## q23_01 0.654 ## q08_01 0.641 ## q08_02 0.583 ## q08_03 0.534 ## q08_04 0.614 ## q08_05 0.688 ## q09_01 0.438 ## q10_01 0.665 ## q11_01 0.537 ## q12_01 0.600 ## q13_01 0.651 ## q14_01 0.314 ## q17_01 0.152 ## q30_01 0.514 ## q37_01 0.609 ## q38_01 0.525 ## q39_01 0.616 ## q40_01 0.600 ## q24_01 0.382 ## IS2 0.204 ## AE2 0.223 ## AE1 0.596 ## ## Defined Parameters: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## DirectAE1 -0.087 0.055 -1.578 0.115 -0.105 -0.105 ## DirectAE2 -0.077 0.032 -2.416 0.016 -0.132 -0.132 ## IndirectAE1 0.192 0.049 3.909 0.000 0.231 0.231 ## IndirectAE2 0.216 0.048 4.521 0.000 0.372 0.372 ## TotalAE1 0.105 0.064 1.632 0.103 0.126 0.126 ## TotalAE2 0.139 0.043 3.212 0.001 0.239 0.239 semPaths(fit,intercepts=FALSE,"std", edge.label.cex = 0.5, exoVar = FALSE, exoCov = FALSE) Appendix E Survey Factor Analysis 482 AZM Model Stratified Mediation Effect SEM 8 Mid-Mgmt fit <- sem(model, d0[d0$Man_Level == "Mid management",], mimic="Mplus", test="satorra.bentler", estimator = "MLM") summary(fit, fit.measures=TRUE, standardized = TRUE, rsquare=TRUE) ## lavaan (0.5-20) converged normally after 44 iterations ## ## Number of observations 141 ## ## Estimator ML Robust ## Minimum Function Test Statistic 449.369 349.868 ## Degrees of freedom 246 246 ## P-value (Chi-square) 0.000 0.000 ## Scaling correction factor 1.284 ## for the Satorra-Bentler correction (Mplus variant) ## Appendix E Survey Factor Analysis 483 ## Model test baseline model: ## ## Minimum Function Test Statistic 2243.759 1770.803 ## Degrees of freedom 276 276 ## P-value 0.000 0.000 ## ## User model versus baseline model: ## ## Comparative Fit Index (CFI) 0.897 0.931 ## Tucker-Lewis Index (TLI) 0.884 0.922 ## ## Loglikelihood and Information Criteria: ## ## Loglikelihood user model (H0) -5048.568 -5048.568 ## Loglikelihood unrestricted model (H1) -4823.884 -4823.884 ## ## Number of free parameters 78 78 ## Akaike (AIC) 10253.136 10253.136 ## Bayesian (BIC) 10483.139 10483.139 ## Sample-size adjusted Bayesian (BIC) 10236.349 10236.349 ## ## Root Mean Square Error of Approximation: ## ## RMSEA 0.077 0.055 ## 90 Percent Confidence Interval 0.065 0.088 0.043 0.066 ## 
P-value RMSEA <= 0.05 0.000 0.247 ## ## Standardized Root Mean Square Residual: ## ## SRMR 0.071 0.071 ## ## Parameter Estimates: ## ## Information Expected ## Standard Errors Robust.sem ## ## Latent Variables: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## IS2 =~ ## q15_01 1.000 1.024 0.709 ## q19_01 0.886 0.131 6.763 0.000 0.907 0.651 ## q20_01 1.014 0.098 10.326 0.000 1.038 0.733 ## q21_01 1.066 0.086 12.348 0.000 1.091 0.842 ## q22_01 1.112 0.087 12.711 0.000 1.138 0.882 ## q23_01 0.985 0.093 10.557 0.000 1.009 0.783 ## AE2 =~ ## q08_01 1.000 1.257 0.847 Appendix E Survey Factor Analysis 484 ## q08_02 0.931 0.060 15.619 0.000 1.170 0.817 ## q08_03 0.761 0.077 9.838 0.000 0.956 0.675 ## q08_04 0.765 0.061 12.508 0.000 0.961 0.768 ## q08_05 0.850 0.061 13.926 0.000 1.069 0.792 ## SOPI2 =~ ## q09_01 1.000 1.036 0.660 ## q10_01 1.030 0.136 7.587 0.000 1.067 0.632 ## q11_01 0.783 0.145 5.413 0.000 0.811 0.523 ## q12_01 1.252 0.145 8.638 0.000 1.297 0.744 ## q13_01 1.332 0.146 9.139 0.000 1.380 0.786 ## q14_01 0.804 0.123 6.519 0.000 0.833 0.559 ## q17_01 0.520 0.125 4.168 0.000 0.539 0.373 ## AE1 =~ ## q30_01 1.000 0.824 0.688 ## q37_01 1.193 0.126 9.446 0.000 0.984 0.793 ## q38_01 1.334 0.170 7.864 0.000 1.100 0.847 ## q39_01 1.320 0.168 7.869 0.000 1.088 0.831 ## q40_01 1.264 0.128 9.910 0.000 1.042 0.847 ## q24_01 1.014 0.143 7.097 0.000 0.836 0.655 ## ## Regressions: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## IS2 ~ ## SOPI2 (a) 0.379 0.122 3.112 0.002 0.383 0.383 ## AE2 ~ ## IS2 (d) 0.686 0.094 7.282 0.000 0.558 0.558 ## AE1 ~ ## IS2 (e) 0.644 0.099 6.535 0.000 0.799 0.799 ## AE2 ~ ## SOPI2 (b) 0.101 0.084 1.205 0.228 0.084 0.084 ## AE1 ~ ## SOPI2 (c) 0.087 0.042 2.061 0.039 0.109 0.109 ## ## Covariances: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## AE2 ~~ ## AE1 0.193 0.051 3.795 0.000 0.438 0.438 ## ## Intercepts: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## q15_01 4.943 0.122 40.681 0.000 4.943 3.426 ## q19_01 4.851 0.117 41.332 0.000 4.851 3.481 ## q20_01 4.773 0.119 40.024 0.000 4.773 3.371 ## q21_01 4.823 0.109 44.211 0.000 4.823 3.723 ## q22_01 4.865 0.109 44.803 0.000 4.865 3.773 ## q23_01 5.028 0.108 46.360 0.000 5.028 3.904 ## q08_01 5.284 0.125 42.269 0.000 5.284 3.560 Appendix E Survey Factor Analysis 485 ## q08_02 5.326 0.121 44.177 0.000 5.326 3.720 ## q08_03 5.206 0.119 43.629 0.000 5.206 3.674 ## q08_04 5.433 0.105 51.572 0.000 5.433 4.343 ## q08_05 5.319 0.114 46.817 0.000 5.319 3.943 ## q09_01 4.184 0.132 31.663 0.000 4.184 2.667 ## q10_01 4.170 0.142 29.330 0.000 4.170 2.470 ## q11_01 4.383 0.131 33.549 0.000 4.383 2.825 ## q12_01 4.142 0.147 28.190 0.000 4.142 2.374 ## q13_01 4.277 0.148 28.938 0.000 4.277 2.437 ## q14_01 4.447 0.125 35.449 0.000 4.447 2.985 ## q17_01 3.801 0.122 31.237 0.000 3.801 2.631 ## q30_01 5.418 0.101 53.713 0.000 5.418 4.523 ## q37_01 5.291 0.104 50.633 0.000 5.291 4.264 ## q38_01 5.270 0.109 48.186 0.000 5.270 4.058 ## q39_01 5.191 0.110 47.071 0.000 5.191 3.964 ## q40_01 5.383 0.104 51.967 0.000 5.383 4.376 ## q24_01 5.142 0.107 47.892 0.000 5.142 4.033 ## IS2 0.000 0.000 0.000 ## AE2 0.000 0.000 0.000 ## SOPI2 0.000 0.000 0.000 ## AE1 0.000 0.000 0.000 ## ## Variances: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## q15_01 1.034 0.146 7.087 0.000 1.034 0.497 ## q19_01 1.120 0.164 6.834 0.000 1.120 0.577 ## q20_01 0.928 0.101 9.199 0.000 0.928 0.463 ## q21_01 0.487 0.065 7.531 0.000 0.487 0.290 ## q22_01 0.368 0.055 6.697 0.000 0.368 0.221 ## q23_01 0.641 0.072 8.863 0.000 0.641 0.387 ## q08_01 
0.623 0.098 6.330 0.000 0.623 0.283 ## q08_02 0.680 0.103 6.621 0.000 0.680 0.332 ## q08_03 1.093 0.163 6.714 0.000 1.093 0.545 ## q08_04 0.641 0.079 8.100 0.000 0.641 0.410 ## q08_05 0.678 0.104 6.504 0.000 0.678 0.373 ## q09_01 1.389 0.122 11.377 0.000 1.389 0.564 ## q10_01 1.712 0.253 6.775 0.000 1.712 0.600 ## q11_01 1.749 0.198 8.812 0.000 1.749 0.727 ## q12_01 1.361 0.176 7.722 0.000 1.361 0.447 ## q13_01 1.176 0.182 6.467 0.000 1.176 0.382 ## q14_01 1.524 0.174 8.785 0.000 1.524 0.687 ## q17_01 1.798 0.156 11.538 0.000 1.798 0.861 ## q30_01 0.755 0.077 9.790 0.000 0.755 0.526 ## q37_01 0.572 0.059 9.623 0.000 0.572 0.371 ## q38_01 0.477 0.052 9.099 0.000 0.477 0.283 ## q39_01 0.531 0.084 6.295 0.000 0.531 0.310 ## q40_01 0.426 0.053 8.114 0.000 0.426 0.282 Appendix E Survey Factor Analysis 486 ## q24_01 0.927 0.106 8.739 0.000 0.927 0.570 ## IS2 0.894 0.175 5.100 0.000 0.853 0.853 ## AE2 1.020 0.166 6.160 0.000 0.645 0.645 ## SOPI2 1.073 0.233 4.610 0.000 1.000 1.000 ## AE1 0.192 0.057 3.353 0.001 0.282 0.282 ## ## R-Square: ## Estimate ## q15_01 0.503 ## q19_01 0.423 ## q20_01 0.537 ## q21_01 0.710 ## q22_01 0.779 ## q23_01 0.613 ## q08_01 0.717 ## q08_02 0.668 ## q08_03 0.455 ## q08_04 0.590 ## q08_05 0.627 ## q09_01 0.436 ## q10_01 0.400 ## q11_01 0.273 ## q12_01 0.553 ## q13_01 0.618 ## q14_01 0.313 ## q17_01 0.139 ## q30_01 0.474 ## q37_01 0.629 ## q38_01 0.717 ## q39_01 0.690 ## q40_01 0.718 ## q24_01 0.430 ## IS2 0.147 ## AE2 0.355 ## AE1 0.718 ## ## Defined Parameters: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## DirectAE1 0.101 0.084 1.205 0.228 0.084 0.084 ## DirectAE2 0.087 0.042 2.061 0.039 0.109 0.109 ## IndirectAE1 0.260 0.086 3.031 0.002 0.214 0.214 ## IndirectAE2 0.244 0.079 3.081 0.002 0.306 0.306 ## TotalAE1 0.361 0.118 3.064 0.002 0.298 0.298 ## TotalAE2 0.331 0.100 3.306 0.001 0.416 0.416 semPaths(fit,intercepts=FALSE,"std", edge.label.cex = 0.5, exoVar = FALSE, exoCov = FALSE) Appendix E Survey Factor Analysis 487 AZM Model Stratified Mediation Effect SEM 9 Non-Mgmt fit <- sem(model, d0[d0$Man_Level == "Non management",], mimic="Mplus", test="satorra.bentler", estimator = "MLM") summary(fit, fit.measures=TRUE, standardized = TRUE, rsquare=TRUE) ## lavaan (0.5-20) converged normally after 48 iterations ## ## Number of observations 175 ## ## Estimator ML Robust ## Minimum Function Test Statistic 475.341 371.902 ## Degrees of freedom 246 246 ## P-value (Chi-square) 0.000 0.000 ## Scaling correction factor 1.278 ## for the Satorra-Bentler correction (Mplus variant) ## ## Model test baseline model: Appendix E Survey Factor Analysis 488 ## ## Minimum Function Test Statistic 3085.963 2457.755 ## Degrees of freedom 276 276 ## P-value 0.000 0.000 ## ## User model versus baseline model: ## ## Comparative Fit Index (CFI) 0.918 0.942 ## Tucker-Lewis Index (TLI) 0.908 0.935 ## ## Loglikelihood and Information Criteria: ## ## Loglikelihood user model (H0) -6616.906 -6616.906 ## Loglikelihood unrestricted model (H1) -6379.236 -6379.236 ## ## Number of free parameters 78 78 ## Akaike (AIC) 13389.813 13389.813 ## Bayesian (BIC) 13636.666 13636.666 ## Sample-size adjusted Bayesian (BIC) 13389.664 13389.664 ## ## Root Mean Square Error of Approximation: ## ## RMSEA 0.073 0.054 ## 90 Percent Confidence Interval 0.063 0.083 0.044 0.064 ## P-value RMSEA <= 0.05 0.000 0.244 ## ## Standardized Root Mean Square Residual: ## ## SRMR 0.081 0.081 ## ## Parameter Estimates: ## ## Information Expected ## Standard Errors Robust.sem ## ## Latent Variables: ## Estimate Std.Err Z-value 
P(>|z|) Std.lv Std.all ## IS2 =~ ## q15_01 1.000 1.120 0.685 ## q19_01 0.913 0.095 9.618 0.000 1.022 0.623 ## q20_01 0.912 0.088 10.332 0.000 1.021 0.674 ## q21_01 1.132 0.087 13.023 0.000 1.268 0.887 ## q22_01 1.104 0.090 12.265 0.000 1.236 0.854 ## q23_01 1.094 0.088 12.448 0.000 1.225 0.884 ## AE2 =~ ## q08_01 1.000 1.407 0.825 ## q08_02 0.938 0.067 14.088 0.000 1.319 0.770 Appendix E Survey Factor Analysis 489 ## q08_03 0.952 0.056 16.966 0.000 1.340 0.781 ## q08_04 0.913 0.072 12.746 0.000 1.285 0.815 ## q08_05 0.950 0.075 12.645 0.000 1.336 0.794 ## SOPI2 =~ ## q09_01 1.000 1.028 0.612 ## q10_01 1.296 0.142 9.123 0.000 1.333 0.787 ## q11_01 0.763 0.150 5.077 0.000 0.785 0.445 ## q12_01 1.152 0.150 7.689 0.000 1.184 0.705 ## q13_01 1.299 0.157 8.249 0.000 1.335 0.748 ## q14_01 0.936 0.131 7.158 0.000 0.962 0.601 ## q17_01 0.348 0.145 2.402 0.016 0.358 0.223 ## AE1 =~ ## q30_01 1.000 1.072 0.742 ## q37_01 1.232 0.092 13.461 0.000 1.321 0.873 ## q38_01 1.323 0.100 13.189 0.000 1.418 0.906 ## q39_01 1.169 0.101 11.593 0.000 1.253 0.867 ## q40_01 1.344 0.094 14.337 0.000 1.442 0.939 ## q24_01 1.028 0.100 10.247 0.000 1.102 0.677 ## ## Regressions: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## IS2 ~ ## SOPI2 (a) 0.424 0.104 4.064 0.000 0.389 0.389 ## AE2 ~ ## IS2 (d) 0.763 0.091 8.355 0.000 0.607 0.607 ## AE1 ~ ## IS2 (e) 0.747 0.089 8.430 0.000 0.780 0.780 ## AE2 ~ ## SOPI2 (b) 0.233 0.101 2.316 0.021 0.170 0.170 ## AE1 ~ ## SOPI2 (c) 0.136 0.047 2.895 0.004 0.130 0.130 ## ## Covariances: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## AE2 ~~ ## AE1 0.237 0.072 3.300 0.001 0.400 0.400 ## ## Intercepts: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## q15_01 4.474 0.124 36.208 0.000 4.474 2.737 ## q19_01 4.560 0.124 36.765 0.000 4.560 2.779 ## q20_01 4.383 0.114 38.286 0.000 4.383 2.894 ## q21_01 4.543 0.108 42.051 0.000 4.543 3.179 ## q22_01 4.663 0.109 42.589 0.000 4.663 3.219 ## q23_01 4.640 0.105 44.277 0.000 4.640 3.347 ## q08_01 4.440 0.129 34.436 0.000 4.440 2.603 ## q08_02 4.806 0.129 37.117 0.000 4.806 2.806 Appendix E Survey Factor Analysis 490 ## q08_03 4.457 0.130 34.357 0.000 4.457 2.597 ## q08_04 4.623 0.119 38.784 0.000 4.623 2.932 ## q08_05 4.549 0.127 35.765 0.000 4.549 2.704 ## q09_01 3.206 0.127 25.225 0.000 3.206 1.907 ## q10_01 3.223 0.128 25.165 0.000 3.223 1.902 ## q11_01 3.840 0.133 28.772 0.000 3.840 2.175 ## q12_01 3.291 0.127 25.912 0.000 3.291 1.959 ## q13_01 3.223 0.135 23.869 0.000 3.223 1.804 ## q14_01 3.680 0.121 30.423 0.000 3.680 2.300 ## q17_01 3.697 0.121 30.536 0.000 3.697 2.308 ## q30_01 4.869 0.109 44.536 0.000 4.869 3.367 ## q37_01 4.777 0.114 41.784 0.000 4.777 3.159 ## q38_01 4.623 0.118 39.054 0.000 4.623 2.952 ## q39_01 4.474 0.109 40.949 0.000 4.474 3.095 ## q40_01 4.754 0.116 40.968 0.000 4.754 3.097 ## q24_01 4.417 0.123 35.864 0.000 4.417 2.711 ## IS2 0.000 0.000 0.000 ## AE2 0.000 0.000 0.000 ## SOPI2 0.000 0.000 0.000 ## AE1 0.000 0.000 0.000 ## ## Variances: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## q15_01 1.419 0.188 7.533 0.000 1.419 0.531 ## q19_01 1.648 0.182 9.037 0.000 1.648 0.612 ## q20_01 1.251 0.126 9.939 0.000 1.251 0.545 ## q21_01 0.435 0.105 4.128 0.000 0.435 0.213 ## q22_01 0.570 0.088 6.458 0.000 0.570 0.272 ## q23_01 0.421 0.065 6.513 0.000 0.421 0.219 ## q08_01 0.930 0.131 7.078 0.000 0.930 0.320 ## q08_02 1.194 0.174 6.859 0.000 1.194 0.407 ## q08_03 1.151 0.161 7.148 0.000 1.151 0.391 ## q08_04 0.836 0.128 6.516 0.000 0.836 0.336 ## q08_05 1.044 0.132 7.927 0.000 1.044 0.369 ## q09_01 1.769 
0.243 7.276 0.000 1.769 0.626 ## q10_01 1.094 0.141 7.763 0.000 1.094 0.381 ## q11_01 2.501 0.276 9.072 0.000 2.501 0.802 ## q12_01 1.421 0.183 7.745 0.000 1.421 0.503 ## q13_01 1.407 0.179 7.871 0.000 1.407 0.441 ## q14_01 1.634 0.212 7.717 0.000 1.634 0.638 ## q17_01 2.437 0.213 11.417 0.000 2.437 0.950 ## q30_01 0.941 0.103 9.173 0.000 0.941 0.450 ## q37_01 0.542 0.064 8.441 0.000 0.542 0.237 ## q38_01 0.440 0.068 6.438 0.000 0.440 0.179 ## q39_01 0.519 0.060 8.608 0.000 0.519 0.248 ## q40_01 0.278 0.045 6.167 0.000 0.278 0.118 ## q24_01 1.439 0.199 7.246 0.000 1.439 0.542 Appendix E Survey Factor Analysis 491 ## IS2 1.064 0.149 7.121 0.000 0.848 0.848 ## AE2 1.033 0.179 5.773 0.000 0.522 0.522 ## SOPI2 1.057 0.225 4.691 0.000 1.000 1.000 ## AE1 0.339 0.078 4.344 0.000 0.295 0.295 ## ## R-Square: ## Estimate ## q15_01 0.469 ## q19_01 0.388 ## q20_01 0.455 ## q21_01 0.787 ## q22_01 0.728 ## q23_01 0.781 ## q08_01 0.680 ## q08_02 0.593 ## q08_03 0.609 ## q08_04 0.664 ## q08_05 0.631 ## q09_01 0.374 ## q10_01 0.619 ## q11_01 0.198 ## q12_01 0.497 ## q13_01 0.559 ## q14_01 0.362 ## q17_01 0.050 ## q30_01 0.550 ## q37_01 0.763 ## q38_01 0.821 ## q39_01 0.752 ## q40_01 0.882 ## q24_01 0.458 ## IS2 0.152 ## AE2 0.478 ## AE1 0.705 ## ## Defined Parameters: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## DirectAE1 0.233 0.101 2.316 0.021 0.170 0.170 ## DirectAE2 0.136 0.047 2.895 0.004 0.130 0.130 ## IndirectAE1 0.323 0.083 3.879 0.000 0.236 0.236 ## IndirectAE2 0.317 0.081 3.933 0.000 0.304 0.304 ## TotalAE1 0.556 0.125 4.455 0.000 0.407 0.407 ## TotalAE2 0.453 0.097 4.680 0.000 0.434 0.434 semPaths(fit,intercepts=FALSE,"std", edge.label.cex = 0.5, exoVar = FALSE, exoCov = FALSE) Appendix E Survey Factor Analysis 492 AZM Model Stratified Mediation Effect SEM 10 Non-Mgmt fit <- sem(model, d0[d0$Man_Level == "High level management" | d0$Man_Level =="Mid management",], mimic="Mplus", test="satorra.bentler", estimator = "MLM") summary(fit, fit.measures=TRUE, standardized = TRUE, rsquare=TRUE) ## lavaan (0.5-20) converged normally after 44 iterations ## ## Number of observations 329 ## ## Estimator ML Robust ## Minimum Function Test Statistic 509.365 373.178 ## Degrees of freedom 246 246 ## P-value (Chi-square) 0.000 0.000 ## Scaling correction factor 1.365 ## for the Satorra-Bentler correction (Mplus variant) Appendix E Survey Factor Analysis 493 ## ## Model test baseline model: ## ## Minimum Function Test Statistic 4702.646 3513.228 ## Degrees of freedom 276 276 ## P-value 0.000 0.000 ## ## User model versus baseline model: ## ## Comparative Fit Index (CFI) 0.941 0.961 ## Tucker-Lewis Index (TLI) 0.933 0.956 ## ## Loglikelihood and Information Criteria: ## ## Loglikelihood user model (H0) -11779.011 -11779.011 ## Loglikelihood unrestricted model (H1) -11524.328 -11524.328 ## ## Number of free parameters 78 78 ## Akaike (AIC) 23714.021 23714.021 ## Bayesian (BIC) 24010.114 24010.114 ## Sample-size adjusted Bayesian (BIC) 23762.698 23762.698 ## ## Root Mean Square Error of Approximation: ## ## RMSEA 0.057 0.040 ## 90 Percent Confidence Interval 0.050 0.064 0.033 0.046 ## P-value RMSEA <= 0.05 0.049 0.995 ## ## Standardized Root Mean Square Residual: ## ## SRMR 0.047 0.047 ## ## Parameter Estimates: ## ## Information Expected ## Standard Errors Robust.sem ## ## Latent Variables: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## IS2 =~ ## q15_01 1.000 0.867 0.597 ## q19_01 1.007 0.100 10.075 0.000 0.874 0.631 ## q20_01 1.200 0.109 10.977 0.000 1.040 0.747 ## q21_01 1.250 0.106 11.758 0.000 
1.085 0.824 ## q22_01 1.227 0.104 11.836 0.000 1.064 0.850 ## q23_01 1.162 0.103 11.233 0.000 1.008 0.795 ## AE2 =~ Appendix E Survey Factor Analysis 494 ## q08_01 1.000 1.158 0.823 ## q08_02 0.926 0.058 15.906 0.000 1.072 0.799 ## q08_03 0.848 0.055 15.433 0.000 0.981 0.729 ## q08_04 0.807 0.048 16.833 0.000 0.934 0.793 ## q08_05 0.933 0.050 18.597 0.000 1.080 0.826 ## SOPI2 =~ ## q09_01 1.000 1.175 0.667 ## q10_01 1.177 0.096 12.277 0.000 1.383 0.748 ## q11_01 0.982 0.094 10.469 0.000 1.154 0.655 ## q12_01 1.219 0.099 12.351 0.000 1.432 0.763 ## q13_01 1.297 0.099 13.170 0.000 1.524 0.798 ## q14_01 0.776 0.073 10.589 0.000 0.912 0.569 ## q17_01 0.577 0.085 6.814 0.000 0.678 0.396 ## AE1 =~ ## q30_01 1.000 0.805 0.717 ## q37_01 1.164 0.078 14.844 0.000 0.936 0.800 ## q38_01 1.172 0.097 12.040 0.000 0.943 0.805 ## q39_01 1.215 0.100 12.121 0.000 0.978 0.825 ## q40_01 1.171 0.080 14.641 0.000 0.943 0.819 ## q24_01 1.033 0.095 10.854 0.000 0.831 0.673 ## ## Regressions: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## IS2 ~ ## SOPI2 (a) 0.311 0.061 5.132 0.000 0.421 0.421 ## AE2 ~ ## IS2 (d) 0.769 0.097 7.911 0.000 0.577 0.577 ## AE1 ~ ## IS2 (e) 0.786 0.088 8.886 0.000 0.847 0.847 ## AE2 ~ ## SOPI2 (b) -0.053 0.050 -1.080 0.280 -0.054 -0.054 ## AE1 ~ ## SOPI2 (c) -0.035 0.027 -1.301 0.193 -0.052 -0.052 ## ## Covariances: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## AE2 ~~ ## AE1 0.210 0.038 5.495 0.000 0.482 0.482 ## ## Intercepts: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## q15_01 5.119 0.080 63.906 0.000 5.119 3.523 ## q19_01 5.116 0.076 66.967 0.000 5.116 3.692 ## q20_01 5.009 0.077 65.195 0.000 5.009 3.594 ## q21_01 5.128 0.073 70.621 0.000 5.128 3.893 ## q22_01 5.216 0.069 75.566 0.000 5.216 4.166 ## q23_01 5.280 0.070 75.574 0.000 5.280 4.167 Appendix E Survey Factor Analysis 495 ## q08_01 5.593 0.078 72.092 0.000 5.593 3.975 ## q08_02 5.693 0.074 76.980 0.000 5.693 4.244 ## q08_03 5.584 0.074 75.250 0.000 5.584 4.149 ## q08_04 5.748 0.065 88.548 0.000 5.748 4.882 ## q08_05 5.629 0.072 78.069 0.000 5.629 4.304 ## q09_01 4.328 0.097 44.575 0.000 4.328 2.458 ## q10_01 4.237 0.102 41.589 0.000 4.237 2.293 ## q11_01 4.465 0.097 45.955 0.000 4.465 2.534 ## q12_01 4.255 0.103 41.131 0.000 4.255 2.268 ## q13_01 4.292 0.105 40.761 0.000 4.292 2.247 ## q14_01 4.529 0.088 51.228 0.000 4.529 2.824 ## q17_01 3.942 0.094 41.732 0.000 3.942 2.301 ## q30_01 5.702 0.062 92.110 0.000 5.702 5.078 ## q37_01 5.626 0.065 87.159 0.000 5.626 4.805 ## q38_01 5.617 0.065 86.947 0.000 5.617 4.794 ## q39_01 5.526 0.065 84.578 0.000 5.526 4.663 ## q40_01 5.657 0.063 89.106 0.000 5.657 4.913 ## q24_01 5.578 0.068 81.878 0.000 5.578 4.514 ## IS2 0.000 0.000 0.000 ## AE2 0.000 0.000 0.000 ## SOPI2 0.000 0.000 0.000 ## AE1 0.000 0.000 0.000 ## ## Variances: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## q15_01 1.358 0.111 12.214 0.000 1.358 0.644 ## q19_01 1.156 0.129 8.930 0.000 1.156 0.602 ## q20_01 0.860 0.081 10.674 0.000 0.860 0.443 ## q21_01 0.558 0.048 11.649 0.000 0.558 0.322 ## q22_01 0.435 0.045 9.629 0.000 0.435 0.277 ## q23_01 0.590 0.069 8.501 0.000 0.590 0.368 ## q08_01 0.640 0.066 9.635 0.000 0.640 0.323 ## q08_02 0.650 0.065 9.935 0.000 0.650 0.361 ## q08_03 0.848 0.105 8.040 0.000 0.848 0.468 ## q08_04 0.514 0.067 7.677 0.000 0.514 0.371 ## q08_05 0.543 0.081 6.682 0.000 0.543 0.318 ## q09_01 1.721 0.155 11.071 0.000 1.721 0.555 ## q10_01 1.503 0.170 8.852 0.000 1.503 0.440 ## q11_01 1.775 0.183 9.723 0.000 1.775 0.571 ## q12_01 1.471 0.157 9.370 0.000 1.471 0.418 ## 
q13_01 1.324 0.142 9.342 0.000 1.324 0.363 ## q14_01 1.740 0.171 10.157 0.000 1.740 0.677 ## q17_01 2.476 0.178 13.874 0.000 2.476 0.843 ## q30_01 0.613 0.056 10.891 0.000 0.613 0.486 ## q37_01 0.494 0.039 12.598 0.000 0.494 0.360 ## q38_01 0.483 0.041 11.920 0.000 0.483 0.352 ## q39_01 0.449 0.046 9.842 0.000 0.449 0.319 Appendix E Survey Factor Analysis 496 ## q40_01 0.437 0.066 6.661 0.000 0.437 0.330 ## q24_01 0.836 0.097 8.656 0.000 0.836 0.548 ## IS2 0.619 0.110 5.602 0.000 0.822 0.822 ## AE2 0.926 0.129 7.190 0.000 0.691 0.691 ## SOPI2 1.381 0.194 7.129 0.000 1.000 1.000 ## AE1 0.206 0.041 5.030 0.000 0.318 0.318 ## ## R-Square: ## Estimate ## q15_01 0.356 ## q19_01 0.398 ## q20_01 0.557 ## q21_01 0.678 ## q22_01 0.723 ## q23_01 0.632 ## q08_01 0.677 ## q08_02 0.639 ## q08_03 0.532 ## q08_04 0.629 ## q08_05 0.682 ## q09_01 0.445 ## q10_01 0.560 ## q11_01 0.429 ## q12_01 0.582 ## q13_01 0.637 ## q14_01 0.323 ## q17_01 0.157 ## q30_01 0.514 ## q37_01 0.640 ## q38_01 0.648 ## q39_01 0.681 ## q40_01 0.670 ## q24_01 0.452 ## IS2 0.178 ## AE2 0.309 ## AE1 0.682 ## ## Defined Parameters: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## DirectAE1 -0.053 0.050 -1.080 0.280 -0.054 -0.054 ## DirectAE2 -0.035 0.027 -1.301 0.193 -0.052 -0.052 ## IndirectAE1 0.239 0.050 4.832 0.000 0.243 0.243 ## IndirectAE2 0.244 0.048 5.098 0.000 0.357 0.357 ## TotalAE1 0.186 0.058 3.183 0.001 0.189 0.189 ## TotalAE2 0.209 0.050 4.209 0.000 0.305 0.305 semPaths(fit,intercepts=FALSE,"std", edge.label.cex = 0.5, exoVar = FALSE, exoCov = FALSE) Appendix E Survey Factor Analysis 497 BFM Model Stratified SEM 2 High-Mgmt fit <- sem(model, d0[d0$Man_Level == "High level management",], mimic="Mplus", test="satorra.bentler", estimator = "MLM") summary(fit, fit.measures=TRUE, standardized = TRUE, rsquare=TRUE) ## lavaan (0.5-20) converged normally after 38 iterations ## ## Number of observations 188 ## ## Estimator ML Robust ## Minimum Function Test Statistic 623.841 429.119 ## Degrees of freedom 269 269 ## P-value (Chi-square) 0.000 0.000 ## Scaling correction factor 1.454 ## for the Satorra-Bentler correction (Mplus variant) ## ## Model test baseline model: Appendix E Survey Factor Analysis 498 ## ## Minimum Function Test Statistic 3591.415 2510.901 ## Degrees of freedom 300 300 ## P-value 0.000 0.000 ## ## User model versus baseline model: ## ## Comparative Fit Index (CFI) 0.892 0.928 ## Tucker-Lewis Index (TLI) 0.880 0.919 ## ## Loglikelihood and Information Criteria: ## ## Loglikelihood user model (H0) -5817.018 -5817.018 ## Loglikelihood unrestricted model (H1) -5505.098 -5505.098 ## ## Number of free parameters 81 81 ## Akaike (AIC) 11796.036 11796.036 ## Bayesian (BIC) 12058.188 12058.188 ## Sample-size adjusted Bayesian (BIC) 11801.623 11801.623 ## ## Root Mean Square Error of Approximation: ## ## RMSEA 0.084 0.056 ## 90 Percent Confidence Interval 0.075 0.092 0.048 0.064 ## P-value RMSEA <= 0.05 0.000 0.106 ## ## Standardized Root Mean Square Residual: ## ## SRMR 0.049 0.049 ## ## Parameter Estimates: ## ## Information Expected ## Standard Errors Robust.sem ## ## Latent Variables: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## AE2B =~ ## q08_01 1.000 0.996 0.767 ## q08_02 0.943 0.105 8.945 0.000 0.939 0.784 ## q08_03 0.916 0.089 10.313 0.000 0.912 0.750 ## AE1B =~ ## q29_01 1.000 0.687 0.725 ## q30_01 1.022 0.084 12.226 0.000 0.702 0.694 ## q33_01 1.103 0.101 10.951 0.000 0.758 0.663 ## q35_01 1.563 0.137 11.376 0.000 1.074 0.838 ## q35_02 1.389 0.129 10.755 0.000 0.955 0.801 Appendix E Survey 
Factor Analysis 499 ## q35_03 1.238 0.122 10.178 0.000 0.851 0.774 ## q35_04 1.423 0.117 12.184 0.000 0.978 0.834 ## q36_01 1.048 0.084 12.536 0.000 0.720 0.731 ## q37_01 1.087 0.097 11.242 0.000 0.747 0.713 ## q38_01 1.047 0.087 12.047 0.000 0.719 0.727 ## q39_01 1.140 0.095 11.950 0.000 0.784 0.774 ## q40_01 1.085 0.092 11.817 0.000 0.746 0.715 ## SIR =~ ## q28_01 1.000 0.823 0.775 ## q28_02 1.267 0.093 13.631 0.000 1.042 0.898 ## q28_03 1.167 0.094 12.415 0.000 0.960 0.852 ## q28_04 1.018 0.074 13.715 0.000 0.837 0.795 ## DL =~ ## q25_01 1.000 0.980 0.778 ## q25_02 1.112 0.077 14.418 0.000 1.090 0.776 ## q25_03 0.945 0.056 16.828 0.000 0.927 0.763 ## q32_01 1.069 0.073 14.565 0.000 1.048 0.827 ## q32_02 0.979 0.078 12.574 0.000 0.960 0.742 ## q32_03 0.987 0.073 13.570 0.000 0.967 0.762 ## ## Regressions: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## AE1B ~ ## SIR 0.630 0.082 7.706 0.000 0.754 0.754 ## DL 0.141 0.055 2.563 0.010 0.202 0.202 ## AE2B ~ ## SIR 0.702 0.104 6.749 0.000 0.580 0.580 ## DL 0.044 0.073 0.597 0.551 0.043 0.043 ## ## Covariances: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## SIR ~~ ## DL 0.455 0.077 5.937 0.000 0.564 0.564 ## AE2B ~~ ## AE1B 0.035 0.028 1.241 0.215 0.136 0.136 ## ## Intercepts: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## q08_01 5.824 0.095 61.481 0.000 5.824 4.484 ## q08_02 5.968 0.087 68.307 0.000 5.968 4.982 ## q08_03 5.867 0.089 66.192 0.000 5.867 4.828 ## q29_01 5.963 0.069 86.297 0.000 5.963 6.294 ## q30_01 5.915 0.074 80.119 0.000 5.915 5.843 ## q33_01 5.787 0.083 69.440 0.000 5.787 5.064 ## q35_01 5.601 0.094 59.895 0.000 5.601 4.368 ## q35_02 5.676 0.087 65.274 0.000 5.676 4.761 ## q35_03 5.787 0.080 72.135 0.000 5.787 5.261 Appendix E Survey Factor Analysis 500 ## q35_04 5.739 0.085 67.146 0.000 5.739 4.897 ## q36_01 5.915 0.072 82.283 0.000 5.915 6.001 ## q37_01 5.878 0.076 76.954 0.000 5.878 5.612 ## q38_01 5.878 0.072 81.421 0.000 5.878 5.938 ## q39_01 5.777 0.074 78.250 0.000 5.777 5.707 ## q40_01 5.862 0.076 77.079 0.000 5.862 5.622 ## q28_01 5.803 0.077 74.980 0.000 5.803 5.468 ## q28_02 5.702 0.085 67.354 0.000 5.702 4.912 ## q28_03 5.798 0.082 70.607 0.000 5.798 5.150 ## q28_04 5.777 0.077 75.189 0.000 5.777 5.484 ## q25_01 5.351 0.092 58.195 0.000 5.351 4.244 ## q25_02 5.271 0.102 51.439 0.000 5.271 3.752 ## q25_03 5.473 0.089 61.841 0.000 5.473 4.510 ## q32_01 5.383 0.093 58.193 0.000 5.383 4.244 ## q32_02 5.447 0.094 57.745 0.000 5.447 4.211 ## q32_03 5.495 0.093 59.345 0.000 5.495 4.328 ## AE2B 0.000 0.000 0.000 ## AE1B 0.000 0.000 0.000 ## SIR 0.000 0.000 0.000 ## DL 0.000 0.000 0.000 ## ## Variances: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## q08_01 0.695 0.096 7.280 0.000 0.695 0.412 ## q08_02 0.553 0.083 6.678 0.000 0.553 0.385 ## q08_03 0.646 0.095 6.768 0.000 0.646 0.437 ## q29_01 0.425 0.045 9.529 0.000 0.425 0.474 ## q30_01 0.531 0.068 7.849 0.000 0.531 0.519 ## q33_01 0.731 0.078 9.331 0.000 0.731 0.560 ## q35_01 0.490 0.049 9.977 0.000 0.490 0.298 ## q35_02 0.509 0.041 12.282 0.000 0.509 0.358 ## q35_03 0.486 0.053 9.140 0.000 0.486 0.402 ## q35_04 0.417 0.038 11.119 0.000 0.417 0.304 ## q36_01 0.452 0.040 11.448 0.000 0.452 0.466 ## q37_01 0.539 0.041 12.989 0.000 0.539 0.491 ## q38_01 0.462 0.051 9.115 0.000 0.462 0.472 ## q39_01 0.410 0.043 9.444 0.000 0.410 0.400 ## q40_01 0.531 0.069 7.656 0.000 0.531 0.489 ## q28_01 0.450 0.051 8.822 0.000 0.450 0.399 ## q28_02 0.262 0.042 6.205 0.000 0.262 0.194 ## q28_03 0.347 0.037 9.469 0.000 0.347 0.273 ## q28_04 0.409 0.039 10.554 0.000 
0.409 0.368 ## q25_01 0.628 0.063 9.924 0.000 0.628 0.395 ## q25_02 0.786 0.097 8.103 0.000 0.786 0.398 ## q25_03 0.614 0.055 11.243 0.000 0.614 0.417 ## q32_01 0.510 0.068 7.539 0.000 0.510 0.317 ## q32_02 0.751 0.083 9.073 0.000 0.751 0.449 Appendix E Survey Factor Analysis 501 ## q32_03 0.676 0.087 7.759 0.000 0.676 0.419 ## AE2B 0.629 0.178 3.538 0.000 0.634 0.634 ## AE1B 0.103 0.026 3.919 0.000 0.219 0.219 ## SIR 0.677 0.101 6.698 0.000 1.000 1.000 ## DL 0.961 0.154 6.232 0.000 1.000 1.000 ## ## R-Square: ## Estimate ## q08_01 0.588 ## q08_02 0.615 ## q08_03 0.563 ## q29_01 0.526 ## q30_01 0.481 ## q33_01 0.440 ## q35_01 0.702 ## q35_02 0.642 ## q35_03 0.598 ## q35_04 0.696 ## q36_01 0.534 ## q37_01 0.509 ## q38_01 0.528 ## q39_01 0.600 ## q40_01 0.511 ## q28_01 0.601 ## q28_02 0.806 ## q28_03 0.727 ## q28_04 0.632 ## q25_01 0.605 ## q25_02 0.602 ## q25_03 0.583 ## q32_01 0.683 ## q32_02 0.551 ## q32_03 0.581 ## AE2B 0.366 ## AE1B 0.781 semPaths(fit,intercepts=FALSE,"std", edge.label.cex = 0.5, exoVar = FALSE, exoCov = FALSE) Appendix E Survey Factor Analysis 502 BFM Model Stratified SEM 3 Mid-Mgmt fit <- sem(model, d0[d0$Man_Level == "Mid management",], mimic="Mplus", test="satorra.bentler", estimator = "MLM") summary(fit, fit.measures=TRUE, standardized = TRUE, rsquare=TRUE) ## lavaan (0.5-20) converged normally after 47 iterations ## ## Number of observations 141 ## ## Estimator ML Robust ## Minimum Function Test Statistic 831.862 638.484 ## Degrees of freedom 269 269 ## P-value (Chi-square) 0.000 0.000 ## Scaling correction factor 1.303 ## for the Satorra-Bentler correction (Mplus variant) Appendix E Survey Factor Analysis 503 ## ## Model test baseline model: ## ## Minimum Function Test Statistic 3175.938 2480.795 ## Degrees of freedom 300 300 ## P-value 0.000 0.000 ## ## User model versus baseline model: ## ## Comparative Fit Index (CFI) 0.804 0.831 ## Tucker-Lewis Index (TLI) 0.782 0.811 ## ## Loglikelihood and Information Criteria: ## ## Loglikelihood user model (H0) -4860.788 -4860.788 ## Loglikelihood unrestricted model (H1) -4444.857 -4444.857 ## ## Number of free parameters 81 81 ## Akaike (AIC) 9883.575 9883.575 ## Bayesian (BIC) 10122.425 10122.425 ## Sample-size adjusted Bayesian (BIC) 9866.143 9866.143 ## ## Root Mean Square Error of Approximation: ## ## RMSEA 0.122 0.099 ## 90 Percent Confidence Interval 0.112 0.131 0.090 0.107 ## P-value RMSEA <= 0.05 0.000 0.000 ## ## Standardized Root Mean Square Residual: ## ## SRMR 0.082 0.082 ## ## Parameter Estimates: ## ## Information Expected ## Standard Errors Robust.sem ## ## Latent Variables: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## AE2B =~ ## q08_01 1.000 1.297 0.874 ## q08_02 0.936 0.071 13.146 0.000 1.213 0.848 ## q08_03 0.651 0.093 6.983 0.000 0.844 0.596 ## AE1B =~ ## q29_01 1.000 0.886 0.755 ## q30_01 0.935 0.078 11.914 0.000 0.828 0.691 ## q33_01 1.137 0.086 13.168 0.000 1.007 0.797 Appendix E Survey Factor Analysis 504 ## q35_01 1.131 0.113 10.007 0.000 1.002 0.755 ## q35_02 1.192 0.101 11.849 0.000 1.056 0.790 ## q35_03 1.247 0.100 12.535 0.000 1.105 0.849 ## q35_04 1.290 0.117 11.035 0.000 1.143 0.827 ## q36_01 1.031 0.067 15.333 0.000 0.913 0.805 ## q37_01 1.098 0.101 10.918 0.000 0.973 0.784 ## q38_01 1.151 0.100 11.463 0.000 1.020 0.785 ## q39_01 1.177 0.108 10.867 0.000 1.043 0.796 ## q40_01 1.200 0.076 15.829 0.000 1.063 0.864 ## SIR =~ ## q28_01 1.000 1.037 0.736 ## q28_02 1.074 0.087 12.300 0.000 1.115 0.845 ## q28_03 1.006 0.080 12.565 0.000 1.044 0.822 ## q28_04 1.017 0.094 10.797 0.000 1.055 
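The Mid management fit above subsets d0 on Man_Level before calling sem(). A minimal sketch of that pattern follows, wrapped in a hypothetical helper (fit_stratum is not part of the original scripts) and followed by fitMeasures() to pull the headline indices reported in the summary.

library(lavaan)

# Hypothetical helper: fit the model to one management-level stratum, using
# the same estimator settings as the calls shown in this appendix.
fit_stratum <- function(level) {
  sem(model, data = d0[d0$Man_Level == level, ],
      mimic = "Mplus", test = "satorra.bentler", estimator = "MLM")
}

fit_mid <- fit_stratum("Mid management")
fitMeasures(fit_mid, c("chisq", "df", "pvalue", "cfi", "tli", "rmsea", "srmr"))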
0.817 ## DL =~ ## q25_01 1.000 0.861 0.607 ## q25_02 0.929 0.126 7.349 0.000 0.800 0.561 ## q25_03 0.897 0.124 7.225 0.000 0.773 0.542 ## q32_01 1.422 0.148 9.632 0.000 1.224 0.838 ## q32_02 1.489 0.176 8.473 0.000 1.282 0.859 ## q32_03 1.551 0.168 9.220 0.000 1.335 0.859 ## ## Regressions: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## AE1B ~ ## SIR 0.428 0.065 6.627 0.000 0.501 0.501 ## DL 0.517 0.096 5.394 0.000 0.502 0.502 ## AE2B ~ ## SIR 0.409 0.119 3.451 0.001 0.327 0.327 ## DL 0.510 0.150 3.397 0.001 0.339 0.339 ## ## Covariances: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## SIR ~~ ## DL 0.377 0.117 3.220 0.001 0.422 0.422 ## AE2B ~~ ## AE1B 0.258 0.064 4.025 0.000 0.510 0.510 ## ## Intercepts: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## q08_01 5.284 0.125 42.269 0.000 5.284 3.560 ## q08_02 5.326 0.121 44.177 0.000 5.326 3.720 ## q08_03 5.206 0.119 43.629 0.000 5.206 3.674 ## q29_01 5.248 0.099 53.086 0.000 5.248 4.471 ## q30_01 5.418 0.101 53.713 0.000 5.418 4.523 ## q33_01 5.142 0.106 48.316 0.000 5.142 4.069 ## q35_01 4.901 0.112 43.825 0.000 4.901 3.691 Appendix E Survey Factor Analysis 505 ## q35_02 4.922 0.113 43.706 0.000 4.922 3.681 ## q35_03 5.099 0.110 46.546 0.000 5.099 3.920 ## q35_04 5.057 0.116 43.428 0.000 5.057 3.657 ## q36_01 5.262 0.095 55.108 0.000 5.262 4.641 ## q37_01 5.291 0.104 50.633 0.000 5.291 4.264 ## q38_01 5.270 0.109 48.186 0.000 5.270 4.058 ## q39_01 5.191 0.110 47.071 0.000 5.191 3.964 ## q40_01 5.383 0.104 51.967 0.000 5.383 4.376 ## q28_01 5.135 0.119 43.233 0.000 5.135 3.641 ## q28_02 5.184 0.111 46.682 0.000 5.184 3.931 ## q28_03 5.383 0.107 50.341 0.000 5.383 4.239 ## q28_04 5.298 0.109 48.678 0.000 5.298 4.099 ## q25_01 4.766 0.119 39.927 0.000 4.766 3.362 ## q25_02 4.745 0.120 39.503 0.000 4.745 3.327 ## q25_03 4.745 0.120 39.503 0.000 4.745 3.327 ## q32_01 4.738 0.123 38.484 0.000 4.738 3.241 ## q32_02 4.759 0.126 37.879 0.000 4.759 3.190 ## q32_03 4.752 0.131 36.319 0.000 4.752 3.059 ## AE2B 0.000 0.000 0.000 ## AE1B 0.000 0.000 0.000 ## SIR 0.000 0.000 0.000 ## DL 0.000 0.000 0.000 ## ## Variances: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## q08_01 0.521 0.107 4.888 0.000 0.521 0.236 ## q08_02 0.577 0.098 5.919 0.000 0.577 0.282 ## q08_03 1.295 0.244 5.312 0.000 1.295 0.645 ## q29_01 0.593 0.044 13.565 0.000 0.593 0.430 ## q30_01 0.749 0.058 12.838 0.000 0.749 0.522 ## q33_01 0.582 0.048 12.251 0.000 0.582 0.365 ## q35_01 0.759 0.054 13.946 0.000 0.759 0.430 ## q35_02 0.673 0.052 12.883 0.000 0.673 0.376 ## q35_03 0.471 0.033 14.137 0.000 0.471 0.278 ## q35_04 0.605 0.065 9.366 0.000 0.605 0.317 ## q36_01 0.452 0.044 10.300 0.000 0.452 0.351 ## q37_01 0.593 0.058 10.234 0.000 0.593 0.385 ## q38_01 0.646 0.053 12.288 0.000 0.646 0.383 ## q39_01 0.627 0.059 10.612 0.000 0.627 0.366 ## q40_01 0.383 0.036 10.711 0.000 0.383 0.253 ## q28_01 0.913 0.114 8.018 0.000 0.913 0.459 ## q28_02 0.497 0.059 8.460 0.000 0.497 0.286 ## q28_03 0.523 0.066 7.929 0.000 0.523 0.324 ## q28_04 0.556 0.068 8.122 0.000 0.556 0.333 ## q25_01 1.268 0.100 12.733 0.000 1.268 0.631 ## q25_02 1.394 0.113 12.387 0.000 1.394 0.686 ## q25_03 1.437 0.079 18.177 0.000 1.437 0.707 Appendix E Survey Factor Analysis 506 ## q32_01 0.638 0.084 7.635 0.000 0.638 0.298 ## q32_02 0.583 0.072 8.129 0.000 0.583 0.262 ## q32_03 0.631 0.085 7.399 0.000 0.631 0.261 ## AE2B 1.152 0.203 5.684 0.000 0.685 0.685 ## AE1B 0.222 0.041 5.413 0.000 0.283 0.283 ## SIR 1.076 0.179 5.996 0.000 1.000 1.000 ## DL 0.741 0.155 4.787 0.000 1.000 1.000 ## ## R-Square: ## 
Estimate ## q08_01 0.764 ## q08_02 0.718 ## q08_03 0.355 ## q29_01 0.570 ## q30_01 0.478 ## q33_01 0.635 ## q35_01 0.570 ## q35_02 0.624 ## q35_03 0.722 ## q35_04 0.683 ## q36_01 0.649 ## q37_01 0.615 ## q38_01 0.617 ## q39_01 0.634 ## q40_01 0.747 ## q28_01 0.541 ## q28_02 0.714 ## q28_03 0.676 ## q28_04 0.667 ## q25_01 0.369 ## q25_02 0.314 ## q25_03 0.293 ## q32_01 0.702 ## q32_02 0.738 ## q32_03 0.739 ## AE2B 0.315 ## AE1B 0.717 semPaths(fit,intercepts=FALSE,"std", edge.label.cex = 0.5, exoVar = FALSE, exoCov = FALSE) Appendix E Survey Factor Analysis 507 BFM Model Stratified SEM 4 Non-Mgmt fit <- sem(model, d0[d0$Man_Level == "Non management",], mimic="Mplus", test="satorra.bentler", estimator = "MLM") summary(fit, fit.measures=TRUE, standardized = TRUE, rsquare=TRUE) ## lavaan (0.5-20) converged normally after 53 iterations ## ## Number of observations 175 ## ## Estimator ML Robust ## Minimum Function Test Statistic 962.810 705.454 ## Degrees of freedom 269 269 ## P-value (Chi-square) 0.000 0.000 ## Scaling correction factor 1.365 ## for the Satorra-Bentler correction (Mplus variant) Appendix E Survey Factor Analysis 508 ## ## Model test baseline model: ## ## Minimum Function Test Statistic 4925.818 3693.940 ## Degrees of freedom 300 300 ## P-value 0.000 0.000 ## ## User model versus baseline model: ## ## Comparative Fit Index (CFI) 0.850 0.871 ## Tucker-Lewis Index (TLI) 0.833 0.857 ## ## Loglikelihood and Information Criteria: ## ## Loglikelihood user model (H0) -6074.127 -6074.127 ## Loglikelihood unrestricted model (H1) -5592.722 -5592.722 ## ## Number of free parameters 81 81 ## Akaike (AIC) 12310.254 12310.254 ## Bayesian (BIC) 12566.601 12566.601 ## Sample-size adjusted Bayesian (BIC) 12310.099 12310.099 ## ## Root Mean Square Error of Approximation: ## ## RMSEA 0.121 0.096 ## 90 Percent Confidence Interval 0.113 0.130 0.089 0.104 ## P-value RMSEA <= 0.05 0.000 0.000 ## ## Standardized Root Mean Square Residual: ## ## SRMR 0.075 0.075 ## ## Parameter Estimates: ## ## Information Expected ## Standard Errors Robust.sem ## ## Latent Variables: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## AE2B =~ ## q08_01 1.000 1.511 0.886 ## q08_02 0.850 0.068 12.494 0.000 1.285 0.750 ## q08_03 0.869 0.067 13.049 0.000 1.312 0.765 ## AE1B =~ ## q29_01 1.000 1.077 0.709 ## q30_01 0.983 0.101 9.718 0.000 1.058 0.732 ## q33_01 1.148 0.103 11.132 0.000 1.236 0.854 Appendix E Survey Factor Analysis 509 ## q35_01 1.168 0.108 10.806 0.000 1.257 0.826 ## q35_02 1.266 0.122 10.395 0.000 1.363 0.885 ## q35_03 1.318 0.110 11.947 0.000 1.419 0.914 ## q35_04 1.294 0.111 11.701 0.000 1.394 0.904 ## q36_01 1.237 0.097 12.775 0.000 1.332 0.889 ## q37_01 1.176 0.104 11.266 0.000 1.266 0.837 ## q38_01 1.262 0.110 11.484 0.000 1.358 0.867 ## q39_01 1.134 0.098 11.591 0.000 1.221 0.845 ## q40_01 1.280 0.105 12.155 0.000 1.379 0.898 ## SIR =~ ## q28_01 1.000 1.122 0.769 ## q28_02 1.112 0.072 15.356 0.000 1.248 0.841 ## q28_03 1.192 0.093 12.814 0.000 1.337 0.927 ## q28_04 1.151 0.086 13.435 0.000 1.292 0.905 ## DL =~ ## q25_01 1.000 0.920 0.623 ## q25_02 0.991 0.115 8.613 0.000 0.912 0.640 ## q25_03 0.965 0.105 9.221 0.000 0.888 0.599 ## q32_01 1.498 0.170 8.784 0.000 1.378 0.899 ## q32_02 1.573 0.179 8.795 0.000 1.447 0.930 ## q32_03 1.642 0.187 8.756 0.000 1.510 0.917 ## ## Regressions: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## AE1B ~ ## SIR 0.319 0.068 4.722 0.000 0.332 0.332 ## DL 0.732 0.090 8.138 0.000 0.626 0.626 ## AE2B ~ ## SIR 0.432 0.098 4.390 0.000 0.321 0.321 ## DL 0.712 0.133 5.370 
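Because the same settings are reused for each subgroup, the stratified fits can be collected and their fit indices tabulated side by side. A sketch, assuming fit_stratum() from the earlier sketch and the Man_Level labels used in the subset expressions in this appendix:

# Stratum labels as used in the subset expressions in this appendix
levels_used <- c("High level management", "Mid management", "Non management")

fits <- lapply(levels_used, fit_stratum)
names(fits) <- levels_used

# One column of fit indices per management level
sapply(fits, fitMeasures, fit.measures = c("cfi", "tli", "rmsea", "srmr"))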
0.000 0.434 0.434 ## ## Covariances: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## SIR ~~ ## DL 0.501 0.140 3.566 0.000 0.485 0.485 ## AE2B ~~ ## AE1B 0.320 0.078 4.110 0.000 0.476 0.476 ## ## Intercepts: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## q08_01 4.440 0.129 34.436 0.000 4.440 2.603 ## q08_02 4.806 0.129 37.117 0.000 4.806 2.806 ## q08_03 4.457 0.130 34.357 0.000 4.457 2.597 ## q29_01 4.663 0.115 40.642 0.000 4.663 3.072 ## q30_01 4.869 0.109 44.536 0.000 4.869 3.367 ## q33_01 4.777 0.109 43.680 0.000 4.777 3.302 ## q35_01 4.349 0.115 37.780 0.000 4.349 2.856 Appendix E Survey Factor Analysis 510 ## q35_02 4.526 0.116 38.848 0.000 4.526 2.937 ## q35_03 4.646 0.117 39.575 0.000 4.646 2.992 ## q35_04 4.531 0.116 38.900 0.000 4.531 2.941 ## q36_01 4.754 0.113 41.999 0.000 4.754 3.175 ## q37_01 4.777 0.114 41.784 0.000 4.777 3.159 ## q38_01 4.623 0.118 39.054 0.000 4.623 2.952 ## q39_01 4.474 0.109 40.949 0.000 4.474 3.095 ## q40_01 4.754 0.116 40.968 0.000 4.754 3.097 ## q28_01 4.794 0.110 43.465 0.000 4.794 3.286 ## q28_02 4.651 0.112 41.446 0.000 4.651 3.133 ## q28_03 4.909 0.109 44.991 0.000 4.909 3.401 ## q28_04 4.920 0.108 45.577 0.000 4.920 3.445 ## q25_01 4.457 0.112 39.938 0.000 4.457 3.019 ## q25_02 4.440 0.108 41.232 0.000 4.440 3.117 ## q25_03 4.571 0.112 40.779 0.000 4.571 3.083 ## q32_01 4.280 0.116 36.933 0.000 4.280 2.792 ## q32_02 4.286 0.118 36.435 0.000 4.286 2.754 ## q32_03 4.337 0.125 34.821 0.000 4.337 2.632 ## AE2B 0.000 0.000 0.000 ## AE1B 0.000 0.000 0.000 ## SIR 0.000 0.000 0.000 ## DL 0.000 0.000 0.000 ## ## Variances: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## q08_01 0.627 0.130 4.822 0.000 0.627 0.216 ## q08_02 1.283 0.166 7.753 0.000 1.283 0.437 ## q08_03 1.223 0.187 6.554 0.000 1.223 0.415 ## q29_01 1.144 0.116 9.887 0.000 1.144 0.497 ## q30_01 0.972 0.095 10.239 0.000 0.972 0.465 ## q33_01 0.566 0.044 12.754 0.000 0.566 0.271 ## q35_01 0.737 0.061 12.004 0.000 0.737 0.318 ## q35_02 0.517 0.042 12.275 0.000 0.517 0.217 ## q35_03 0.399 0.052 7.679 0.000 0.399 0.165 ## q35_04 0.433 0.042 10.393 0.000 0.433 0.182 ## q36_01 0.469 0.032 14.616 0.000 0.469 0.209 ## q37_01 0.684 0.052 13.035 0.000 0.684 0.299 ## q38_01 0.607 0.065 9.289 0.000 0.607 0.248 ## q39_01 0.598 0.054 11.175 0.000 0.598 0.286 ## q40_01 0.456 0.043 10.632 0.000 0.456 0.194 ## q28_01 0.870 0.138 6.310 0.000 0.870 0.409 ## q28_02 0.646 0.060 10.809 0.000 0.646 0.293 ## q28_03 0.295 0.085 3.472 0.001 0.295 0.142 ## q28_04 0.371 0.071 5.196 0.000 0.371 0.182 ## q25_01 1.333 0.115 11.607 0.000 1.333 0.612 ## q25_02 1.197 0.070 17.134 0.000 1.197 0.590 ## q25_03 1.410 0.079 17.913 0.000 1.410 0.641 Appendix E Survey Factor Analysis 511 ## q32_01 0.452 0.051 8.881 0.000 0.452 0.192 ## q32_02 0.327 0.057 5.726 0.000 0.327 0.135 ## q32_03 0.434 0.063 6.866 0.000 0.434 0.160 ## AE2B 1.311 0.178 7.355 0.000 0.574 0.574 ## AE1B 0.343 0.070 4.894 0.000 0.296 0.296 ## SIR 1.259 0.209 6.017 0.000 1.000 1.000 ## DL 0.846 0.207 4.093 0.000 1.000 1.000 ## ## R-Square: ## Estimate ## q08_01 0.784 ## q08_02 0.563 ## q08_03 0.585 ## q29_01 0.503 ## q30_01 0.535 ## q33_01 0.729 ## q35_01 0.682 ## q35_02 0.783 ## q35_03 0.835 ## q35_04 0.818 ## q36_01 0.791 ## q37_01 0.701 ## q38_01 0.752 ## q39_01 0.714 ## q40_01 0.806 ## q28_01 0.591 ## q28_02 0.707 ## q28_03 0.858 ## q28_04 0.818 ## q25_01 0.388 ## q25_02 0.410 ## q25_03 0.359 ## q32_01 0.808 ## q32_02 0.865 ## q32_03 0.840 ## AE2B 0.426 ## AE1B 0.704 semPaths(fit,intercepts=FALSE,"std", edge.label.cex = 0.5, exoVar = FALSE, 
exoCov = FALSE) Appendix E Survey Factor Analysis 512 BFM Model Stratified SEM 5 High-Mgmt/Mid-Mgmt fit <- sem(model, d0[d0$Man_Level == "High level management" | d0$Man_Level =="Mid management",], mimic="Mplus", test="satorra.bentler", estimator = "MLM") summary(fit, fit.measures=TRUE, standardized = TRUE, rsquare=TRUE) ## lavaan (0.5-20) converged normally after 41 iterations ## ## Number of observations 329 ## ## Estimator ML Robust ## Minimum Function Test Statistic 920.050 574.535 ## Degrees of freedom 269 269 Appendix E Survey Factor Analysis 513 ## P-value (Chi-square) 0.000 0.000 ## Scaling correction factor 1.601 ## for the Satorra-Bentler correction (Mplus variant) ## ## Model test baseline model: ## ## Minimum Function Test Statistic 6571.836 4202.232 ## Degrees of freedom 300 300 ## P-value 0.000 0.000 ## ## User model versus baseline model: ## ## Comparative Fit Index (CFI) 0.896 0.922 ## Tucker-Lewis Index (TLI) 0.884 0.913 ## ## Loglikelihood and Information Criteria: ## ## Loglikelihood user model (H0) -10816.877 -10816.877 ## Loglikelihood unrestricted model (H1) -10356.852 -10356.852 ## ## Number of free parameters 81 81 ## Akaike (AIC) 21795.754 21795.754 ## Bayesian (BIC) 22103.234 22103.234 ## Sample-size adjusted Bayesian (BIC) 21846.303 21846.303 ## ## Root Mean Square Error of Approximation: ## ## RMSEA 0.086 0.059 ## 90 Percent Confidence Interval 0.080 0.092 0.054 0.064 ## P-value RMSEA <= 0.05 0.000 0.003 ## ## Standardized Root Mean Square Residual: ## ## SRMR 0.049 0.049 ## ## Parameter Estimates: ## ## Information Expected ## Standard Errors Robust.sem ## ## Latent Variables: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## AE2B =~ ## q08_01 1.000 1.156 0.821 ## q08_02 0.967 0.069 13.977 0.000 1.117 0.833 ## q08_03 0.806 0.066 12.148 0.000 0.931 0.692 ## AE1B =~ Appendix E Survey Factor Analysis 514 ## q29_01 1.000 0.846 0.763 ## q30_01 0.940 0.054 17.423 0.000 0.795 0.708 ## q33_01 1.099 0.061 18.086 0.000 0.929 0.751 ## q35_01 1.293 0.082 15.760 0.000 1.094 0.812 ## q35_02 1.262 0.070 18.043 0.000 1.068 0.815 ## q35_03 1.219 0.072 16.884 0.000 1.032 0.833 ## q35_04 1.309 0.083 15.674 0.000 1.107 0.845 ## q36_01 1.030 0.049 21.201 0.000 0.871 0.792 ## q37_01 1.067 0.068 15.794 0.000 0.902 0.771 ## q38_01 1.064 0.063 16.939 0.000 0.900 0.768 ## q39_01 1.115 0.073 15.174 0.000 0.943 0.796 ## q40_01 1.088 0.053 20.456 0.000 0.920 0.799 ## SIR =~ ## q28_01 1.000 0.973 0.768 ## q28_02 1.130 0.064 17.750 0.000 1.099 0.874 ## q28_03 1.046 0.061 17.100 0.000 1.018 0.843 ## q28_04 0.993 0.066 15.049 0.000 0.966 0.815 ## DL =~ ## q25_01 1.000 0.999 0.734 ## q25_02 1.031 0.064 16.118 0.000 1.029 0.716 ## q25_03 0.954 0.060 15.829 0.000 0.953 0.702 ## q32_01 1.147 0.073 15.816 0.000 1.146 0.823 ## q32_02 1.145 0.086 13.317 0.000 1.144 0.804 ## q32_03 1.180 0.084 14.039 0.000 1.179 0.815 ## ## Regressions: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## AE1B ~ ## SIR 0.511 0.060 8.475 0.000 0.588 0.588 ## DL 0.324 0.070 4.660 0.000 0.383 0.383 ## AE2B ~ ## SIR 0.541 0.085 6.358 0.000 0.455 0.455 ## DL 0.239 0.084 2.837 0.005 0.207 0.207 ## ## Covariances: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## SIR ~~ ## DL 0.543 0.087 6.235 0.000 0.559 0.559 ## AE2B ~~ ## AE1B 0.160 0.037 4.311 0.000 0.404 0.404 ## ## Intercepts: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## q08_01 5.593 0.078 72.092 0.000 5.593 3.975 ## q08_02 5.693 0.074 76.980 0.000 5.693 4.244 ## q08_03 5.584 0.074 75.250 0.000 5.584 4.149 ## q29_01 5.657 0.061 92.567 0.000 5.657 
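The Std.all and R-Square values reported in these summaries can also be extracted programmatically, which is convenient when comparing strata. A sketch using standard lavaan accessors; the final call notes lavaan's multi-group interface as an alternative to fitting each subset separately, which is an assumed variant rather than something shown in this appendix.

# Standardized estimates (the Std.all column) and R-square values
std_est <- standardizedSolution(fit)
subset(std_est, op == "~")      # standardized structural paths only
lavInspect(fit, "rsquare")      # R-square for indicators and latent outcomes

# Alternative (assumed, not shown in the appendix): estimate all strata in a
# single multi-group model instead of separate subset fits.
fit_mg <- sem(model, data = d0, group = "Man_Level",
              mimic = "Mplus", test = "satorra.bentler", estimator = "MLM")
summary(fit_mg, fit.measures = TRUE, standardized = TRUE)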
5.103 Appendix E Survey Factor Analysis 515 ## q30_01 5.702 0.062 92.110 0.000 5.702 5.078 ## q33_01 5.511 0.068 80.740 0.000 5.511 4.451 ## q35_01 5.301 0.074 71.365 0.000 5.301 3.934 ## q35_02 5.353 0.072 74.079 0.000 5.353 4.084 ## q35_03 5.492 0.068 80.472 0.000 5.492 4.437 ## q35_04 5.447 0.072 75.369 0.000 5.447 4.155 ## q36_01 5.635 0.061 92.906 0.000 5.635 5.122 ## q37_01 5.626 0.065 87.159 0.000 5.626 4.805 ## q38_01 5.617 0.065 86.947 0.000 5.617 4.794 ## q39_01 5.526 0.065 84.578 0.000 5.526 4.663 ## q40_01 5.657 0.063 89.106 0.000 5.657 4.913 ## q28_01 5.517 0.070 78.976 0.000 5.517 4.354 ## q28_02 5.480 0.069 79.058 0.000 5.480 4.359 ## q28_03 5.620 0.067 84.438 0.000 5.620 4.655 ## q28_04 5.571 0.065 85.226 0.000 5.571 4.699 ## q25_01 5.100 0.075 67.958 0.000 5.100 3.747 ## q25_02 5.046 0.079 63.643 0.000 5.046 3.509 ## q25_03 5.161 0.075 68.950 0.000 5.161 3.801 ## q32_01 5.106 0.077 66.550 0.000 5.106 3.669 ## q32_02 5.152 0.078 65.660 0.000 5.152 3.620 ## q32_03 5.176 0.080 64.937 0.000 5.176 3.580 ## AE2B 0.000 0.000 0.000 ## AE1B 0.000 0.000 0.000 ## SIR 0.000 0.000 0.000 ## DL 0.000 0.000 0.000 ## ## Variances: ## Estimate Std.Err Z-value P(>|z|) Std.lv Std.all ## q08_01 0.644 0.083 7.786 0.000 0.644 0.325 ## q08_02 0.551 0.079 7.004 0.000 0.551 0.306 ## q08_03 0.944 0.141 6.716 0.000 0.944 0.521 ## q29_01 0.513 0.050 10.184 0.000 0.513 0.417 ## q30_01 0.629 0.049 12.707 0.000 0.629 0.499 ## q33_01 0.669 0.066 10.198 0.000 0.669 0.436 ## q35_01 0.619 0.049 12.596 0.000 0.619 0.341 ## q35_02 0.577 0.042 13.642 0.000 0.577 0.336 ## q35_03 0.468 0.038 12.191 0.000 0.468 0.305 ## q35_04 0.492 0.041 11.885 0.000 0.492 0.287 ## q36_01 0.451 0.033 13.678 0.000 0.451 0.373 ## q37_01 0.557 0.037 14.934 0.000 0.557 0.406 ## q38_01 0.563 0.050 11.158 0.000 0.563 0.410 ## q39_01 0.515 0.042 12.160 0.000 0.515 0.367 ## q40_01 0.479 0.071 6.712 0.000 0.479 0.361 ## q28_01 0.658 0.081 8.098 0.000 0.658 0.410 ## q28_02 0.372 0.039 9.661 0.000 0.372 0.235 ## q28_03 0.421 0.043 9.734 0.000 0.421 0.289 ## q28_04 0.472 0.049 9.657 0.000 0.472 0.336 Appendix E Survey Factor Analysis 516 ## q25_01 0.855 0.064 13.451 0.000 0.855 0.462 ## q25_02 1.008 0.098 10.283 0.000 1.008 0.488 ## q25_03 0.935 0.053 17.621 0.000 0.935 0.507 ## q32_01 0.625 0.063 9.860 0.000 0.625 0.323 ## q32_02 0.717 0.086 8.308 0.000 0.717 0.354 ## q32_03 0.701 0.069 10.202 0.000 0.701 0.336 ## AE2B 0.861 0.147 5.842 0.000 0.645 0.645 ## AE1B 0.183 0.030 6.058 0.000 0.256 0.256 ## SIR 0.947 0.117 8.123 0.000 1.000 1.000 ## DL 0.998 0.122 8.172 0.000 1.000 1.000 ## ## R-Square: ## Estimate ## q08_01 0.675 ## q08_02 0.694 ## q08_03 0.479 ## q29_01 0.583 ## q30_01 0.501 ## q33_01 0.564 ## q35_01 0.659 ## q35_02 0.664 ## q35_03 0.695 ## q35_04 0.713 ## q36_01 0.627 ## q37_01 0.594 ## q38_01 0.590 ## q39_01 0.633 ## q40_01 0.639 ## q28_01 0.590 ## q28_02 0.765 ## q28_03 0.711 ## q28_04 0.664 ## q25_01 0.538 ## q25_02 0.512 ## q25_03 0.493 ## q32_01 0.677 ## q32_02 0.646 ## q32_03 0.664 ## AE2B 0.355 ## AE1B 0.744 semPaths(fit,intercepts=FALSE,"std", edge.label.cex = 0.5, exoVar = FALSE, exoCov = FALSE) Appendix E Survey Factor Analysis 517 References 518 References Abdi, H. (2003). Factor rotations in factor analyses. Encyclopedia for Research Methods for the Social Sciences. Sage, Thousand Oaks, CA, 792-795. Aboelela, S. W., Larson, E., Bakken, S., Carrasquillo, O., Formicola, A., Glied, S. A., & Gebbie, K. M. (2007). Defining interdisciplinary research: Conclusions from a critical review of the literature. 
Health services research, 42(1), 329-346. Adams, L. A., & Courtney, J. F. (2004). Achieving relevance in IS research via the DAGS framework. Paper presented at the System Sciences. Proceedings of the 37th Annual Hawaii International Conference on Systems Sciences. Retrieved http://ieeexplore.ieee.org/abstract/document/1265615. Ahmed, M. D., & Sundaram, D. (2012). Sustainability modelling and reporting: From roadmap to implementation. Decision Support Systems, 53(3), 611-624. Arsanjani, A., Zhang, L-J., Ellis, M., Allam, A., & Channabasavaiah, K. (2014). Design an SOA solution using a reference architecture. IBM developerWorks. Aliaga, M., & Gunderson, B. (2000). Introduction to Quantitative research. Sage, London. Allen, I. E., & Seaman, C. A. (2007). Likert scales and data analyses. Quality progress, 40(7), 64. Al-Mashari, M., & Al-Mudimigh, A. (2003). ERP implementation: lessons from a case study. Information Technology & People, 16(1), 21-33. Andersen, T. J., & Nielsen, B. B. (2009). Adaptive strategy making: The effects of emergent and intended strategy modes. European Management Review, 6(2), 94-106. Andrews, K. R. (1971). The Concept of Corporate Strategy. Dow Jones-Irwin, New York. Anthony, R. N. (1965). Planning and Control Systems: A Framework for Analysis. Cambridge MA: Harvard University Press. Anthony, R. N. (1988). The management control function. Harvard Business School Press. Argyris, C., & Schon, D. (1978). Organizational learning: A theory of action approach. Reading, MA: Addision Wesley. Asparouhov, T., & Muthén, B. (2009). Exploratory structural equation modeling. Structural equation modeling: a multidisciplinary journal, 16(3), 397-438. Austrom, D., de Guerre, D., Maupin, H., McGee, C., Mohr, B., Norton, J., & Ordowich, C. (2012). Sociotechnical systems Design for adaptive enterprises. Paper presented at the Organizational Development Forum, April. Baker, J., Lovell, K., & Harris, N. (2006). How expert are the experts? An exploration of the concept of ‘expert’ within Delphi panel techniques. Nurse researcher, 14(1), 59-70. Baltzan, P., & Phillips, A. (2008). Business driven information systems. (2nd ed.) McGrawHill/Irwin, Illinois, USA. References 519 Baskerville, R. L., Kaul, M., & Storey, V. C. (2015). Genres of inquiry in design-science research: Justification and evaluation of knowledge production. MIS quarterly, 39(3), 541-564. Benner, J., & Tushman, L. (2003). Exploitation, exploration, and process management: The productivity dilemma revisited. Academy of Management. The Academy of Management Review, 28(2), 238. Benner, M. (2009). Dynamic or Static Capabilities? Process Management Practices and Response to Technological Change. The Journal of Product Innovation Management, 26(5), 473. Billington, C., & Davidson, R. (2008). Want to improve your below-average business processes? - innovate don't invent. Perspectives for Managers (159), 1. Birkinshaw, J. (2006). An even-handed response to an uncertain context. FT Partnership Publications. Bischoff, T. (2008). Which Types of Tools you really need for Business Process Management (BPM). Retrieved from https://blogs.sap.com/2008/06/17/which-types-of-tools-youreally-need-for-business-process-management-bpm. Bolger, F., & Wright, G. (2011). Improving the Delphi process: lessons from social psychological research. Technological Forecasting and Social Change, 78(9), 1500-1513. Bollen, K., & Lennox, R. (1991). Conventional wisdom on measurement: A structural equation perspective. Psychological bulletin, 110(2), 305. 
Bonnet, D., & Yip, G. (2009). Strategy convergence. Business Strategy Review, 20(2), 50. Bontis, N., Crossan, M. M., & Hulland, J. (2002). Managing an organizational learning system by aligning stocks and flows. Journal of management studies, 39(4), 437-469. Boulkedid, R., Abdoul, H., Loustau, M., Sibony, O., & Alberti, C. (2011). Using and reporting the Delphi method for selecting healthcare quality indicators: a systematic review. PLoS one, 6(6). Brace, I. (2008). Questionnaire design: How to plan, structure and write survey material for effective market research. Hoboken, New Jersey, Kogan Page Publishers. Bradburn, N. M., Sudman, S., Blair, E., Locander, W., Miles, C., Singer, E., & Stocking, C. (1992). Improving interview method and questionnaire design: Response effects to threatening questions in survey research. University Microfilms. Bradburn, N. M., Sudman, S., & Wansink, B. (2004). Asking questions: the definitive guide to questionnaire design--for market research, political polls, and social and health questionnaires. John Wiley & Sons. Brown, S. L., & Eisenhardt, K. M. (1997). The art of continuous change: Linking complexity theory and time-paced evolution in relentlessly shifting organizations. Administrative science quarterly, 42(1), 1-34. Brown, S. L., & Eisenhardt, K. M. (1998). Competing on the edge: Strategy as structured chaos. Harvard Business Press. References 520 Brüggen, E., & Willems, P. (2009). A critical comparison of offline focus groups, online focus groups and e-Delphi. International Journal of Market Research, 51(3), 363-381. Bryan, L. L., & Joyce, C. I. (2007). Better strategy through organizational design. McKinsey Quarterly, 2(07), 21-29. Bryman, A. (2015). Social research methods. Oxford university press. Buck-Emden, R., & Buck-Emden, D. R. (2000). The SAP R/3 system: an introduction to ERP and business software technology. Addison-Wesley. Burnard, K., & Bhamra, R. (2011). Organisational resilience: development of a conceptual framework for organisational responses. International Journal of Production Research, 49(18), 5581-5599. Byrne, B. M. (2013). Structural equation modeling with AMOS: Basic concepts, applications, and programming. Routledge, Taylor & Francis Group, New York. Chan, W., & Mauborgne, R. (2009). How Strategy Shapes Structure. Harvard business review, 87(9), 72. Chasan, M. (2014). Situationally Adaptive Organizations for 21st Century Transformation and Thriving. Retrieved from Retrieved from http://www.huffingtonpost.com/markchasan/situationally-adaptive-organizations-for-21st-century-thriving_b_5424763.html. Childe, S. J., Maull, R. S., & Bennett, J. (1994). Frameworks for understanding business process re-engineering. International Journal of Operations & Production Management, 14(12), 22-34. Christensen, C. R., Andrews, K. R., & Bower, J. L. (1978). Business policy: text and cases, fourth edition. Irwin, Homewood, IL. Clark, L. A., & Watson, D. (1995). Constructing validity: Basic issues in objective scale development. Psychological assessment, 7(3), 309. Cohen, J., Cohen, P., West, S. G., & Aiken, L. S. (2013). Applied multiple regression/correlation analysis for the behavioral sciences, (3rd ed.). Mahwah, NJ, Erlbaum. Collis, J., & Hussey, R. (2013). Business research: A practical guide for undergraduate and postgraduate students. Palgrave Macmillan, New York. Coltman, T., Devinney, T. M., Midgley, D. F., & Venaik, S. (2008). Formative versus reflective measurement models: Two applications of formative measurement. 
Journal of Business Research, 61(12), 1250-1262. Converse, J. M., & Presser, S. (1986). Survey questions: Handcrafting the standardized questionnaire. Sage Publications, Newbury Park, London. Couper, M. P., Traugott, M. W., & Lamias, M. J. (2001). Web survey design and administration. Public Opinion Quarterly, 65(2), 230-253. Cousineau, D., & Chartier, S. (2015). Outliers detection and treatment: a review. International Journal of Psychological Research, 3(1), 58-67. References 521 Cravens, D., Piercy, N., & Baldauf, A. (2009). Management framework guiding strategic thinking in rapidly changing markets. Journal of Marketing Management, 25(1/2), 31. Creswell, J. W., & Clark, V. L. (2011). Designing and conducting mixed methods research (2nd ed.). Los Angeles, SAGE Publications. Curran, P. J., West, S. G., & Finch, J. F. (1996). The robustness of test statistics to nonnormality and specification error in confirmatory factor analysis. Psychological methods, 1(1), 16. Custer, R. L., Scarcella, J. A., & Stewart, B. R. (1999). The Modified Delphi Technique--A Rotational Modification. Journal of vocational and technical education, 15(2), 50-58. Cyphert, F. R., & Gant, W. L. (1971). The delphi technique: A case study. Phi Delta Kappan, 52(5), 272-273. Daft, R. L., & Weick, K. E. (1984). Toward a model of organizations as interpretation systems. Academy of management review, 9(2), 284-295. Dale, S. (2007). Holistic BPM: From Theory to Reality, 5th International Conference, BPM 2007, Brisbane, Australia, September 24–28. Dalkey, N., Brown, B., & Cochran, S. (1970). The delphi method, IV: Effect of percentile feedback and feed-in of relevant facts, Defense Technical Information Center. The Rand Corportation, Santa Monica, CA. Dalkey, N. C., Brown, B. B., & Cochran, S. (1969). The Delphi method: An experimental study of group opinion (Vol. 3). Rand Corporation, Santa Monica, CA. Day, J., & Bobeva, M. (2005). A generic toolkit for the successful management of Delphi studies. The Electronic Journal of Business Research Methodology, 3(2), 103-116. Delbecq, A. L., Van de Ven, A. H., & Gustafson, D. H. (1975). Group techniques for program planning: A guide to nominal group and Delphi processes, Scott, Foresman Glenview, IL. Diamond, I. R., Grant, R. C., Feldman, B. M., Pencharz, P. B., Ling, S. C., Moore, A. M., & Wales, P. W. (2014). Defining consensus: A systematic review recommends methodologic criteria for reporting of Delphi studies. Journal of Clinical Epidemiology, 67(4), 401-409. Dillman, D. A. (2007). Mail and Internet surveys: The tailored design method (2nd ed). John Wiley & Sons, Hoboken, New Jersey. Dillman, D. A., Smyth, J. D., & Christian, L. M. (2014). Internet, phone, mail, and mixedmode surveys: the tailored design method. John Wiley & Sons, Hoboken, NJ Dong, C. S., Peko, G. M., & Sundaram, D. (2013). An Implementation of Agent-enabled Distributed Decision Support Systems for Adaptive Business Networks. In Jain, R.K; Metri, B.A; Gupta, J.N.D. (Eds.), Operational Excellence: A Key for Performance Excellence, New Delhi, Excel Books. References 522 Dong, J., Peko, G., & Sundaram, D. (2012). Supply Chain-oriented Decision Support Systems: An Agent-Enabled Approach. In Badri, T.N; Gupta, J.N. (Eds.), Enhancing Competitiveness using Decision Sciences. India, Macmillan Publishers India Ltd, p304- 311. Dooley, K. J. (1997). A complex adaptive systems model of organization change. Nonlinear dynamics, psychology, and life sciences, 1(1), 69-97. Dziuban, C. D., & Shirkey, E. C. (1974). 
When is a correlation matrix appropriate for factor analysis? Some decision rules. Psychological bulletin, 81(6), 358. Edwards, J. R. (2001). Multidimensional constructs in organizational behavior research: An integrative analytical framework. Organizational research methods, 4(2), 144-192. Eisenhardt, K. M., & Brown Shona, L. (1998). Competing on the edge: Strategy as structured chaos. Long Range Planning, 31(5), 786. Eisenhardt, K. M., & Tabrizi, B. N. (1995). Accelerating adaptive processes: Product innovation in the global computer industry. Administrative science quarterly, 84-110. Fackrell, K., Hall, D. A., Barry, J. G., & Hoare, D. J. (2016). Psychometric properties of the Tinnitus Functional Index (TFI): Assessment in a UK research volunteer population. Hearing research, 335, 220–235. Farrugia, P., Petrisor, B. A., Farrokhyar, F., & Bhandari, M. (2010). Research questions, hypotheses and objectives. Canadian Journal of Surgery, 53(4), 278–281. Field, A. (2013). Discovering statistics using IBM SPSS statistics. Thousand Oaks, California Sage publications. Fink, A. (2015). How to conduct surveys: A step-by-step guide. Thousand Oaks, California Sage publications. Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation models with unobservable variables and measurement error. Journal of marketing research, 39-50. Förster, B., & von der Gracht, H. (2014). Assessing Delphi panel composition for strategic foresight - A comparison of panels based on company-internal and external participants. Technological Forecasting and Social Change, 84, 215-229. Forster, F. (2006). The Idea behind Business Process Improvement: Toward a Business Process Improvement Pattern Framework. Business Process Trend Advisor (April). Forza, C. (2002). Survey research in operations management: a process-based perspective. International Journal of Operations and Production Management, 22(2), 152-194. Fowler, F. J. (1995). Improving survey questions: Design and evaluation. Applied Social Research Methods Series (Vol. 38): Sage Publications, Thousand Oaks, London. Fowler Jr, F. J. (2014). Survey research methods (5th ed.). Thousand Oaks, California Sage publications. References 523 Fowler Jr, F. J., & Cosenza, C. (2009). Design and evaluation of survey questions. In L. Bickman, & D. J. Rog (Eds.), The SAGE handbook of applied social research methods, 375-412. Thousand Oaks, CA, Sage. Fowler, H. W., Fowler, F. G., & Crystal, D. (2011). The Concise Oxford Dictionary: The Classic First Edition. Oxford University Press. Friedman, H. H., & Amoo, T. (1999). Rating the rating scales. Journal of Marketing Management, Winter, 114-123. Fritz, R. (1996). Corporate tides: the inescapable laws of organizational structure. BerrettKoehler Publishers, San Francisco. Furr, M. (2011). Scale construction and psychometrics for social and personality psychology. SAGE Publications Ltd., London. Giachetti, R. E. (2016). Design of enterprise systems: Theory, architecture, and methods. CRC Press, Taylor & Francis Group, New York. Giannarou, L., & Zervas, E. (2014). Using Delphi technique to build consensus in practice. International Journal of Business Science and Applied Management, 9(2). Gibson, C. B., & Birkinshaw, J. (2004). The antecedents, consequences, and mediating role of organizational ambidexterity. Academy of Management Journal, 47(2), 209-226. Gillham, B. (2008). Developing A Questionnaire, (2nd ed.). Continuum International Publishing Group, London. Gliem, R. R., & Gliem, J. A. (2003). 
Calculating, interpreting, and reporting Cronbach’s alpha reliability coefficient for Likert-type scales. Midwest Research to Practice Conference in Adult, Continuing, and Community Education. Columbus, OH. Gollenia, L. A. (2016). Business Transformation Management Methodology. Routledge, Taylor & Francis Group, New York. Gordon, T. J. (2003). The Delphi method. In. Glenn J.C., Gordon, T.J. (Eds.), Futures Research Methodology Version 2.0, American Council for the United Nations University, Washington. Gorry, G. A., & Scott Morton, M. S. (1989). A framework for management information systems (Vol. 13). Massachusetts Institute of Technology. Gothelf, J., & Seiden, J. (2017). Sense and Respond: How Successful Organizations Listen to Customers and Create New Products Continuously. Boston, Massachusetts Harvard Business School Publishing. Graetz, F. (2002). Strategic thinking versus strategic planning: towards understanding the complementarities. Management Decision, 40(5), 456-462. Guba, E. G., & Lincoln, Y. S. (1994). Competing paradigms in qualitative research, Handbook of qualitative research, 2, 163-194. Hackathorn, R. (2004). Real-time to real-value. Information Management, 14(1), 24. Haeckel, S. H. (1995). Adaptive enterprise design: the sense-and-respond model. Planning Review, 23(3), 6-42. References 524 Haeckel, S. H. (2004). Peripheral vision: Sensing and acting on weak signals: Making meaning out of apparent noise: The need for a new managerial framework. Long Range Planning, 37(2), 181-189. Haeckel, S. H. (2016). Adaptive enterprise: Creating and leading sense-and-respond organizations (Second Publication edition). CreateSpace Independent Publishing Platform. Hair, J. F., Black, W. C., Babin, B. J., & Anderson, R. E. (2014). Multivariate data analysis (Vol. 7). Harlow, Essex, Pearson Eduacation Ltd. Hamel, G., & Prahalad, C. K. (1990). Corporate imagination and expeditionary marketing. Harvard business review, 69(4), 81-92. Hamel, G., & Prahalad, C. K. (2005). Strategic Intent. Harvard business review, 83(7,8), 148. Hasson, F., & Keeney, S. (2011). Enhancing rigour in the Delphi technique research. Technological Forecasting and Social Change, 78(9), 1695-1704. Hasson, F., Keeney, S., & McKenna, H. (2000). Research guidelines for the Delphi survey technique. Journal of advanced nursing, 32(4), 1008-1015. Heinrich, C. E., & Betts, B. (2003). Adapt or die: transforming your supply chain into an adaptive business network, John Wiley & Sons Inc., Hoboken, New Jersey. Henseler, J., Ringle, C. M., & Sarstedt, M. (2015). A new criterion for assessing discriminant validity in variance-based structural equation modeling. Journal of the Academy of Marketing Science, 43(1), 115-135. Henseler, J., Ringle, C. M., & Sinkovics, R. R. (2009). The use of partial least squares path modeling in international marketing. Advances in international marketing, 20(1), 277- 319. Herrmann, M. R. G. (2006). Why SOA lacks reuse. JBoss World. Retrieved from http://jbossworld.com/jbwv_2006/soa_for_ the_-real_world/ HERRMANN_COLayer_final.pdf. Hevner, A. R., March, S. T., Park, J., & Ram, S. (2004). Design science in information systems research. Management Information Systems Quarterly, 28(1), 75-105. Hickson, D. J. (1987). Decision-making at the top of organizations. Annual review of sociology, 13(1), 165-192. Hill, K. Q., & Fowles, J. (1975). The methodological worth of the Delphi forecasting technique. Technological Forecasting and Social Change, 7(2), 179-192. Hinkin, T. R., Tracey, J. B., & Enz, C. A. 
(1997). Scale construction: Developing reliable and valid measurement instruments. Journal of Hospitality & Tourism Research, 21(1), 100- 120. Hsu, C. C., & Sandford, B. A. (2007). The Delphi technique: making sense of consensus. Practical Assessment, Research & Evaluation, 12(10), 1-8. References 525 Hu, L. & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural equation modeling: a multidisciplinary journal, 6(1), 1-55. Hung, H. L., Altschuld, J. W., & Lee, Y. F. (2008). Methodological and conceptual issues confronting a cross-country Delphi study of educational program evaluation. Evaluation and Program Planning, 31(2), 191-198. Iarossi, G. (2006). The power of survey design: A user's guide for managing surveys, interpreting results, and influencing respondents. World Bank Publications, Washington, D.C. IDS Sheer Software AG, (2009). Enterprise SOA with SAP NetWeaver Business Process Management Roadmap. Keen, M., Ackerman, G., Azaz, I., Haas, M., Johnson, R., Kim, J.W., & Robertson, P. (2006). Patterns: SOA Foundation - Business Process Management Scenario. IBM Redbooks. Jackson, D. (2016). Dynamic organisations: the challenge of change. New York, United States, Springer. Jackson, D. L., Gillaspy Jr, J. A., & Purc-Stephenson, R. (2009). Reporting practices in confirmatory factor analysis: an overview and some recommendations. Psychological methods, 14(1), 6. Jeston, J. N., & Nelis, J. (2009). Business process management: practical guidelines to successful implementations, (2nd ed.). Oxford, Elsevier/Butterworth-Heinemann. Jones, G. R. (2013). Organizational theory, design, and change (7th ed.). Pearson Education, Essex, England. Joseph, L. B., & Clark, G. G. (2007). How Managers' Everyday Decisions Create or Destroy Your Company's Strategy. Harvard business review, 85(2), 72. Kalakota, R., & Robinson, M. (2003). Services Blueprint: Roadmap for Execution, AddisonWesley Professional, Boston, USA. Kaplan, B., & Duchon, D. (1988). Combining qualitative and quantitative methods in information systems research: a case study. MIS quarterly, 571-586. Kaplan, B., & Maxwell, J. A. (2005). Qualitative research methods for evaluating computer information systems. In Anderson, J. G., Aydin, C. E. (Eds.), Evaluating the organizational impact of healthcare information systems (2nd ed.). Health Informatics Series (pp. 30-55). Springer, USA. Keen, P. G. W. (1997). The process edge: creating value where it counts, Harvard Business School Press. Keeney, S., Hasson, F., & McKenna, H. (2006). Consulting the oracle: ten lessons from using the Delphi technique in nursing research. Journal of advanced nursing, 53(2), 205-212. Kettinger, W. J., Teng, J. T. C., & Guha, S. (1997). Business process change: a study of methodologies, techniques, and tools. Management Information Systems Quarterly, 21(1), 55-80. References 526 Klein, H. K., & Myers, M. D. (1999). A set of principles for conducting and evaluating interpretive field studies in information systems. MIS quarterly, 67-93. Kline, R. B. (2015). Principles and practice of structural equation modeling (4th ed.). Guilford publications, New York. Klueckmann, J. (2008). SOA Management with ARIS: From Service Architecture Management to Service Execution. ARIS UserDay08. Koetter, F., & Kochanowski, M. (2015). A model-driven approach for event-based business process monitoring. Information Systems and e-Business Management, 13(1), 5-36. Korhonen, J. 
J., Lapalme, J., McDavid, D., & Gill, A. Q. (2016). Adaptive Enterprise Architecture for the Future: Towards a Reconceptualization of EA. Paper presented at the Business Informatics (CBI), IEEE 18th Conference on Business Informatics. Kotter, J. (2012). The big Idea Accelerate. Harvard business review, 90(11), 43-58. Kroenke, D. M., & Boyle, R. J. (2015). Using MIS. Prentice Hall Press, Upper Saddle River, NJ, USA. Krosnick, J. A., & Fabrigar, L. R. (1997). Designing rating scales for effective measurement in surveys. Survey measurement and process quality, 141-164. Krosnick, J. A., Holbrook, A. L., Berent, M. K., Carson, R. T., Hanemann, W. M., Kopp, R. J., & Smith, V. K. (2002). The impact of "no opinion" response options on data quality: Non-attitude reduction or an invitation to satisfice? Public Opinion Quarterly, 66(3), 371-403. Krosnick, J. A., & Presser, S. (2010). Question and questionnaire design. In Marsden, P.V., Wright, J. D. (Eds.), Handbook of survey research (2nd ed.), 263-314. Kumar, R. (2014). Research Methodology: a step-by-step guide for beginners (4th ed.). South Melbourne, Addison Wesley Longman Australia Pty Limited. Kumar, R. (1996). Research Methodology: a step-by-step guide for beginners. Addison Wesley Longman Australia Pty Limited. Kumaran, S., Bishop, P., Chao, T., Dhoolia, P., Jain, P., Jaluka, R., & Nigam, A. (2007). Using a model-driven transformational approach and service-oriented architecture for service delivery management. IBM Systems Journal, 46(3), 513. Labovitz, G., & Rosansky, V. (1997). The power of alignment: How great companies stay centered and accomplish extraordinary things. Wiley & Sons, Inc. New York, USA. Landeta, J. (2006). Current validity of the Delphi method in social sciences. Technological Forecasting and Social Change, 73(5), 467-482. Langley, A., Mintzberg, H., Pitcher, P., Posada, E., & Saint-Macary, J. (1995). Opening up decision making: The view from the black stool. Organization science, 260-279. Lasher, W. R. (2005). Process to profits: strategic planning for a growing business. SouthWestern Educational Pub. Lei, M., & Lomax, R. G. (2005). The effect of varying degrees of nonnormality in structural equation modeling. Structural equation modeling, 12(1), 1-27. References 527 Levchuk, G. M., Levchuk, Y. N., Luo, J., Pattipati, K. R., & Kleinman, D. L. (2002). Normative design of organizations. I. Mission planning. IEEE Transactions on Systems, Man, and Cybernetics-Part A. Systems and Humans, 32(3), 346-359. Levitt, B., & March, J. G. (1988). Organizational learning. Annual review of sociology, 319- 340. Lewin, K. (1958). Group Decision and Social Change in Readings in Social Psychology. In Maccoby, E. E., Newcomb, T. H., Hartley, H. (Eds.), Readings in Social Psychology. Holt, Rinehart & Winston, New York. Lewis, A., & Loebbaka, J. (2008). Managing future and emergent strategy decay in the commercial aerospace industry. Business Strategy Series, Vol. 9 Issue 4, pp.147-156. Leybourn, E. (2013). Directing the Agile organisation: A lean approach to business management. IT Governance Publishing, UK. Liedtka, J. (1998). Linking strategic thinking with strategic planning. Strategy & Leadership, 26(4), 30. Liedtka, J. (2006). Is your strategy a duck? The Journal of Business Strategy, 27(5), 32. Linstone, H. A., & Turoff, M. (1975). The Delphi method: Techniques and applications. Boston, United States of America: Addison-Wesley Publishing. Linstone, H. A., & Turoff, M. (2002). The Delphi Method. Techniques and applications. 
Newark, NJ, New Jersey Institute of Technology. Linstone, H. A., & Turoff, M. (2011). Delphi: A brief look backward and forward. Technological Forecasting and Social Change, 78(9), 1712-1719. Lowry, P. B., D’Arcy, J., Hammer, B., & Moody, G. D. (2016). “Cargo Cult” science in traditional organization and information systems survey research: A case for using nontraditional methods of data collection, including Mechanical Turk and online panels. The Journal of Strategic Information Systems, 25(3), 232-240. Ludwig, B. (1997). Predicting the future: Have you considered using the Delphi methodology. Journal of extension, 35(5), 1-4. Mackenzie, N., Knipe, S. (2006). Research dilemmas: Paradigms, methods and methodology. Issues in educational research, 16(2), 193-205. Majchrzak, A., Logan, D., McCurdy, R., & Kirchmer, M. (2006). What Business Leaders Can Learn from Jazz Musicians About Emergent Processes. In Scheer, A.W., Kruppke, H., Jost, W., Kindermann, H. (Eds.). Agility by ARIS Business Process Management. Berlin, New York. Malhotra, M. K., & Grover, V. (1998). An assessment of survey research in POM: from constructs to theory. Journal of operations management, 16(4), 407-425. March, J. G. (1991). Exploration and exploitation in organizational learning. Organization science, 2(1), 71-87. Marjanovic, O. (2005). Towards IS supported coordination in emergent business processes. Business Process Management Journal, 11(5), 476-487. References 528 Markus, M. L., Majchrzak, A., & Gasser, L. (2002). A design theory for systems that support emergent knowledge processes. MIS quarterly, 179-212. Mathiesen, P., Watson, J., Bandara, W., & Rosemann, M. (2012). Applying social technology to business process lifecycle management. Paper presented at the Business Process Management Workshops. Springer, Berlin, 2012, pp. 231–241. Meijering, J. V., & Tobi, H. (2016). The effect of controlled opinion feedback on Delphi features: Mixed messages from a real-world Delphi experiment. Technological Forecasting and Social Change, 103, 166-173. Meredith, J. R., Raturi, A., Amoako-Gyampah, K., & Kaplan, B. (1989). Alternative research paradigms in operations. Journal of operations management, 8(4), 297-326. Miles, M. B., Huberman, A. M., & Saldaña, J. (2014). Qualitative Data Analysis. A Methods Sourcebook (3rd ed.). Thousand Oaks, California Sage Publications Millar, K., Tomkins, S., Thorstensen, E., Mepham, B. and Kaiser, M. 2006, Ethical Delphi Manual. Agricultural Economics Research Institute (LEI), The Hague. Mingers, J. (2001). Combining IS research methods: towards a pluralist methodology. Information systems research, 12(3), 240-259. Mingers, J., & Brocklesby, J. (1997). Multimethodology: towards a framework for mixing methodologies. Omega, 25(5), 489-509. Mintzberg, H. (1983). Power in and around organizations, Prentice-Hall, Englewood Cliffs, N J. Mintzberg, H. (1987). The strategy concept I: five Ps for strategy. California Management Review, 30(1), 11-24. Mintzberg, H. (1994). The fall and rise of strategic planning. Harvard business review, 72, 107- 107. Mintzberg, H., Ahlstrand, B., & Lampel, J. (2005). Strategy bites back, Strategy Bites Back: It Is a Lot More, and Less, Than You Ever Imagined. Pearson Prentice-Hall, Englewood Cliffs, NJ. Mintzberg, H., Quinn, J. B., & Ghoshal, S. (1998). The Strategy Process (Revised European Edition), Hertfordshire, Prentice Hall Europe. Mintzberg, H., & Waters, J. A. (1985). Of strategies, deliberate and emergent. Strategic management journal, 257-272. 
Mintzberg, M., & Quinn, J. B. (1996). The strategy process: concepts, contexts, cases, Prentice Hall, NJ. Mitchell, V. W. (1991). The Delphi technique: An exposition and application. Technology Analysis & Strategic Management, 3(4), 333-358. Moitra, D., & Ganesh, J. (2005). Web services and flexible business processes: towards the adaptive enterprise. Information & Management, 42(7), 921-933. References 529 Moore, K. (2011). Porter or Mintzberg: whose view of strategy is the most relevant today. Forbes. Retrieved https://www.forbes.com/sites/karlmoore/2011/03/28/porter-ormintzberg-whose-view-of-strategy-is-the-most-relevant-today. Moskowitz, H., & Wright, G. P. (1985). Statistics for management and economics, Merrill Publishing Co., Columbus, Ohio, USA. Myers, M. D. (1997). Qualitative research in information systems. Management Information Systems Quarterly, 21(2), 241-242. Myers, M. D. (2013). Qualitative research in business and management: Sage, London. Myers, M. D., & Klein, H. K. (2011). A Set of Principles for Conducting Critical Research in Information Systems. MIS quarterly, 35(1), 17-36. Novakowski, N., & Wellar, B. (2008). Using the Delphi technique in normative planning research: methodological design considerations. Environment and Planning A, 40(6), 1485-1500. Nulty, D. D. (2008). The adequacy of response rates to online and paper surveys: what can be done? Assessment & evaluation in higher education, 33(3), 301-314. Nunamaker Jr, J., Chen, M., & Purdin, T. (1990). Systems development in information systems research. Journal of management information systems, 7(3), 89-106. Ocasio, W., & Joseph, J. (2008). Rise and Fall - or Transformation? The Evolution of Strategic Planning at the General Electric Company, 1940-2006. Long Range Planning, 41(3), 248. Okoli, C., & Pawlowski, S. D. (2004). The Delphi method as a research tool: an example, design considerations and applications. Information & Management, 42(1), 15-29. Oracle (2006). Oracle SOA Suite Developer's Guide, Oracle. Retrieved http://sqltech.cl/doc/oas10gR31/core.1013/b28764/intro003.htm. Osborne, J. W., & Costello, A. B. (2009). Best practices in exploratory factor analysis: Four recommendations for getting the most from your analysis. Pan-Pacific Management Review, 12(2), 131-146. Osborne, J. W., & Overbay, A. (2004). The power of outliers (and why researchers should always check for them). Practical Assessment, Research & Evaluation, 9(6), 1-12. Pannirselvam, G. P., Ferguson, L. A., Ash, R. C., & Siferd, S. P. (1999). Operations management research: an update for the 1990s. Journal of operations management, 18(1), 95-112. Pava, C. H. (1983). Managing new office technology: An organizational strategy. Free Press, Division of Macmillan, New York. Peko, G., Dong, C. S., Rohde, M., & Sundaram, D. (2012). Cloud-Based Contextually Aware Adaptive Systems for Enterprise Transformation Virtual and Networked Organizations, Emergent Technologies and Tools (pp. 1-11), Springer. Peko, G., Dong, C. S., & Sundaram, D. (2014). Adaptive Sustainable Enterprises. Mobile Networks and Applications, 19(5), 608-617. References 530 Peterson, R. A., & Kim, Y. (2013). On the relationship between coefficient alpha and composite reliability. Journal of Applied Psychology, 98(1), 194. Peyret, H. (2007). The Forrester Wave: Enterprise Architecture Tools, Q2 2007. Retrieved https://www.forrester.com/report/The+Forrester+Wave+Enterprise+Architecture+T ools+Q2+2007/-/E-RES41915. Peyret, H., Cullen, A., & Roeleveld-Hoekendijk, C. (2007). 
Assess Your Enterprise Agility. Retrieved from Peyret, H. The Forrester Wave: Enterprise Architecture Tools, Q2. https://www.forrester.com/report/The+Forrester+Wave+Enterprise+Architecture+T ools+Q2+2007/-/E-RES41915. Pinsonneault, A., & Kraemer, K. (1993). Survey Research Methodology in Management Information Systems: an Assessment. Journal of management information systems, 10(2), 75-105. Porter, M. E. (1980). Competitive strategy: Techniques for analyzing industries and competitors. The Free Press, Division of Simon & Schuster Inc., New York. Porter, M. E. (1998). The competitive advantage of nations: with a new introduction. Free Press, Division of Macmillan, Basingstoke, London. Porter, M. E. (2002). How competitive forces shape strategy. Strategy, Critical Perspectives on Business and Management, 137, 3. Porter, M. E. (2008). The five competitive forces that shape strategy. Harvard business review, 86(1), 78. Portougal, V., & Sundaram, D. (2006). Business processes: operational solutions for SAP implementation. IRM Press, London. Powell, C. (2003). The Delphi technique: myths and realities. Journal of advanced nursing, 41(4), 376-382. Preacher, K. J., & Hayes, A. F. (2004). SPSS and SAS procedures for estimating indirect effects in simple mediation models. Behavior research methods, instruments, & computers, 36(4), 717-731. Preston, C. C., & Colman, A. M. (2000). Optimal number of response categories in rating scales: reliability, validity, discriminating power, and respondent preferences. Acta psychologica, 104(1), 1-15. Presser, S., Couper, M. P., Lessler, J. T., Martin, E., Martin, J., Rothgeb, J. M., & Singer, E. (2004). Methods for testing and evaluating survey questions. Public opinion quarterly, 68(1), 109-130. Quinn, J. B. (1980). Managing strategic change. Sloan Management Review, 21(4), 3-20. Quinn, J. B. (1985). Innovation and corporate strategy: Managed chaos. Technology in society, 7(2-3), 263-279. Quinn, J. B. (1989). Managing strategic change. In Asch, D. & Bowman, C. (Eds.), Readings in Strategic Management. Macmillan, London, 20-36. References 531 Rea, L. M., & Parker, R. A. (2014). Designing and conducting survey research: A comprehensive guide: Jossey-Boss, John Wiley & Sons, San Francisco. Reefke, H. (2012). Sustainable Supply Chains Management: Concepts, models and roadmap. (Unpublished doctoral thesis). University of Auckland, Auckland, New Zealand. Reefke, H., & Sundaram, D. (2017). Key themes and research opportunities in sustainable supply chain management–identification and evaluation. Omega, 66, 195-211. Reeves, M., & Deimler, M., (2013). Adaptability: The new competitive advantage. Own the Future: 50 Ways to Win, from the Boston Consulting Group, 19-26. Reeves, M., Zeng, M., & Venjara, A. (2015). The Self-Tuning Enterprise. Harvard business review, 93(6), 77-83. Reja, U., Manfreda, K. L., Hlebec, V., & Vehovar, V. (2003). Open-ended vs. close-ended questions in web questionnaires. Developments in applied statistics, 19, 159-177. Rosemann, M. (2001). Business Process Lifecycle Management. Queensland University of Technology, March 2001, 1-29. Rowe, G., Wright, G., & Bolger, F. (1991). Delphi: a reevaluation of research and theory. Technological Forecasting and Social Change, 39(3), 235-251. Rowe, G., & Wright, G. P. (1999). The Delphi technique as a forecasting tool: issues and analysis. International journal of forecasting, 15(4), 353-375. Rowe, G., & Wright, G. P. (2001). Expert opinions in forecasting: the role of the Delphi technique. In Armstrong, J.S. 
(Ed.), Principles of Forecasting: A Handbook for Researchers and Practitioners. Kluwer Academic Publishers, Boston, pp. 125–144. Rowe, G., & Wright, G. P. (2011). The Delphi technique: Past, present, and future prospects— Introduction to the special issue. Technological Forecasting and Social Change, 78(9), 1487-1490. Rowe, G., Wright, G. P., & McColl, A. (2005). Judgment change during Delphi-like procedures: The role of majority influence, expertise, and confidence. Technological Forecasting and Social Change, 72(4), 377-399. Sadagopan, S. (2014). Management Information Systems (2nd ed.). PHI Learning Pvt. Ltd., New Delhi, 23-210. Sanders, N. R., Fugate, B. S., & Zacharia, Z. G. (2016). Interdisciplinary Research in SCM. Through the Lens of the Behavioral Theory of the Firm. Journal of Business Logistics SAP (2000a). AcceleratedSAP, SAP. SAP (2000b). Strategic Enterprise Management, SAP. Satorra, A., & Bentler, P. M. (2001). A scaled difference chi-square test statistic for moment structure analysis. Psychometrika, 66(4), 507-514. Schafer, J. L., & Graham, J. W. (2002). Missing data: our view of the state of the art. Psychological methods, 7(2), 147. References 532 Scheer, A.W. (2003). Jazz Improvisation and Management. In Scheer, A. W., Abolhassan, F. Jost, W. & Kirchmer, M. (Eds.), Business Process Change Management. ARIS in Practice, Springer, New York. pp.270-286. Scheer, A. W. (2007). Jazz Improvisation and Management, IDS Scheer AG, Retrieved http://whitepaper.talentum.com/whitepaper/view.do?id=21050. Scheer, A.W., Kruppke, H., Jost, W., & Kindermann, H. (2006). Agility by ARIS business process management: yearbook business process excellence 2006/2007 (Vol. 243): Springer Science & Business Media. Schmidt, R. C. (1997). Managing delphi surveys using nonparametric statistical techniques. decision Sciences, 28(3), 763-774. Schoenherr, T., Ellram, L. M., & Tate, W. L. (2015). A note on the use of survey research firms to enable empirical data collection. Journal of Business Logistics, 36(3), 288-300. Schulz, M. (2001). The uncertain relevance of newness: Organizational learning and knowledge flows. Academy of Management Journal, 44(4), 661-681. Schumacker, R. E., & Lomax, R. G. (2004). A beginner's guide to structural equation modeling (2nd ed.). Lawrence Erlbaum Associates Inc., Publishers New Jersey. Scott Morton, M. S. (1991). The Corporation of the 1990s. Oxford University Press. Scott, W. R., & Davis, G. F. (2015). Organizations and organizing: Rational, natural and open systems perspectives. Routledge, Taylor & Francis Group, New York. Sedera, D., Lokuge, S., Grover, V., Sarker, S., & Sarker, S. (2016). Innovating with enterprise systems and digital platforms: A contingent resource-based theory view. Information & Management, 53(3), 366-379. Segars, A. H., & Grover, V. (1993). Re-examining perceived ease of use and usefulness: A confirmatory factor analysis. MIS quarterly, 517-525. Senge, P. M. (2006). The fifth discipline: The art and practice of the learning organization. Crown Publishing, New York. United States. Seuring, S., & Muller, M. (2008). Core issues in sustainable supply chain management-a Delphi study. Business Strategy and the Environment, 17(8) Shapiro, B. P., Rangan, V. K., & Sviokla, J. J. (1992). Staple yourself to an order. Harvard business review, 70(4), 113-122. Sharp, A., & McDermott, P. (2009). Workflow modeling: tools for process improvement and application development, Artech House Publishers, Norwood MA. Siemens, G. (2005). 
Simon, H. A. (1977). The new science of management decision (Rev. ed.). Prentice-Hall, Englewood Cliffs, N.J.
Skulmoski, G., Hartman, F., & Krahn, J. (2007). The Delphi method for graduate research. Journal of Information Technology Education: Research, 6(1), 1-21.
Stanford, N. (2007). Guide to organisation design: Creating high-performing and adaptable enterprises (Vol. 10). Profile Books Ltd., London.
Steiger, J. H. (1990). Structural model evaluation and modification: An interval estimation approach. Multivariate Behavioral Research, 25(2), 173-180.
Steiger, J. (2009). Exploratory Factor Analysis with R. Retrieved from http://www.statpower.net/Content/312/R%20Stuff/Exploratory%20Factor%20Analysis%20with%20R.pdf
Steiger, J. H. (2013). Validity. Retrieved from http://www.statpower.net/Content/312/Handout/Confirmatory%20Factor%20Analysis%20with%20R.pdf
Straub, D., Boudreau, M. C., & Gefen, D. (2004). Validation guidelines for IS positivist research. The Communications of the Association for Information Systems, 13(1), 63.
Tabachnick, B. G., & Fidell, L. S. (2007). Experimental designs using ANOVA. Thomson/Brooks/Cole, Belmont, CA.
Tabachnick, B. G., & Fidell, L. S. (2013). Using multivariate statistics (6th international ed.). Pearson, London, UK.
Tallon, P. P. (2008). Inside the adaptive enterprise: an information technology capabilities perspective on business process agility. Information Technology and Management, 9(1), 21-36.
Taylor-Powell, E. (2002). Collecting Group Data: Delphi Technique. Retrieved from https://fyi.uwex.edu/programdevelopment/files/2016/04/Tipsheet4.pdf
Tenhiälä, A., & Helkiö, P. (2015). Performance effects of using an ERP system for manufacturing planning and control under dynamic market requirements. Journal of Operations Management, 36, 147-164.
Thompson, B. (1992). Two and one-half decades of leadership in measurement and evaluation. Journal of Counseling & Development, 70(3), 434-438.
Tourangeau, R., Conrad, F. G., & Couper, M. P. (2013). The science of web surveys. Oxford University Press, New York, USA.
Tucker, L. R., & Lewis, C. (1973). A reliability coefficient for maximum likelihood factor analysis. Psychometrika, 38(1), 1-10.
Tull, D. S., & Hawkins, D. I. (1990). Qualitative Research. In Marketing Research, Measurement, and Method. MacMillan Publishing Co., New York, pp. 391-414.
Turoff, M. (1970). The design of a policy Delphi. Technological Forecasting and Social Change, 2(2), 149-171.
Uhl, A., & Gollenia, L. A. (2016). Business Transformation Essentials: Case Studies and Articles. Routledge, Taylor & Francis Group, New York.
Ullah, A., & Lai, R. (2013). A systematic review of business and information technology alignment. ACM Transactions on Management Information Systems (TMIS), 4(1), 4.
Ulschak, F. L. (1983). Human resource development: The theory and practice of need assessment. Reston Publishing Company, Reston, VA.
Upton, D. M., & McAfee, A. P. (2000). A path-based approach to information technology in manufacturing. International Journal of Manufacturing Technology and Management, 1(1), 59-78.
van Praag, F. (2007). Service Orientated Architecture. Retrieved from https://www.scribd.com/presentation/12330174/14.
Von Bertalanffy, L. (1968). General Systems Theory: Foundations, Development, Applications. George Braziller, New York.
Van Zolingen, S. J., & Klaassen, C. A. (2003). Selection processes in a Delphi study about key qualifications in senior secondary vocational education. Technological Forecasting and Social Change, 70(4), 317-340.
Walsham, G. (1993). Interpreting information systems in organizations. John Wiley & Sons, Inc., New York, USA.
Weichhart, G., Molina, A., Chen, D., Whitman, L. E., & Vernadat, F. (2016). Challenges and current developments for Sensing, Smart and Sustainable Enterprise Systems. Computers in Industry, 79.
Whittle, R., & Myrick, C. B. (2016). Enterprise business architecture: The formal link between strategy and results. Auerbach Publications, CRC Press, Washington, D.C.
Wiener, N. (1948). Cybernetics: Control and communication in the animal and the machine. Wiley, New York.
Williams, P. L., & Webb, C. (1994). The Delphi technique: a methodological discussion. Journal of Advanced Nursing, 19(1), 180-186.
Witkin, B. R., & Altschuld, J. W. (1995). Planning and conducting needs assessment: A practical guide. Sage Publications, Inc., Thousand Oaks, CA.
Woudenberg, F. (1991). An evaluation of Delphi. Technological Forecasting and Social Change, 40(2), 131-150.
Wright, R. T., Campbell, D. E., Thatcher, J. B., & Roberts, N. (2012). Operationalizing multidimensional constructs in structural equation modeling: Recommendations for IS research. Communications of the Association for Information Systems, 30(1), 367-412.
Yanine, F., Valenzuela, L., Tapia, J., & Cea, J. (2016). Rethinking enterprise flexibility: A new approach based on management control theory. Journal of Enterprise Information Management, 29(6), 860-886.
Yip, G., & Johnson, G. (2007). Transforming Strategy. Business Strategy Review, 18(1), 11.
Yong, A. G., & Pearce, S. (2013). A beginner's guide to factor analysis: Focusing on exploratory factor analysis. Tutorials in Quantitative Methods for Psychology, 9(2), 79-94.
Yousuf, M. I. (2007). Using experts' opinions through Delphi technique. Practical Assessment, Research & Evaluation, 12(4), 1-8.
Zimmermann, A., Jugel, D., Schmidt, R., Schweda, C. M., & Möhring, M. (2015). Collaborative Decision Support for Adaptive Digital Enterprise Architecture. Paper presented at the BIR Workshops. Retrieved from http://ceur-ws.org/Vol-1420/ilogpaper2.pdf.