Measurement Science for Complex Information Systems
Description:
What are complex systems? Large collections of interconnected components whose interactions lead to macroscopic behaviors in:
- Biological systems (e.g., slime molds, ant colonies, embryos)
- Physical systems (e.g., earthquakes, avalanches, forest fires)
- Social systems (e.g., transportation networks, cities, economies)
- Information systems (e.g., Internet and compute clouds)

What are the technical objectives? Establish models and analysis methods that (1) are computationally tractable, (2) reveal macroscopic behavior, and (3) establish causality. Characterize distributed control techniques, including (1) economic mechanisms to elicit desired behaviors and (2) biological mechanisms to organize components.
“[Despite] society’s profound dependence on networks, fundamental knowledge about them is primitive. [G]lobal communication … networks have quite advanced technological implementations but their behavior under stress still cannot be predicted reliably.… There is no science today that offers the fundamental knowledge necessary to design large complex networks [so] that their behaviors can be predicted prior to building them.” — Network Science, National Research Council, 2006
What is the new idea? Leverage models and mathematics from the physical sciences to define a systematic method to measure, understand, predict, and control macroscopic behavior in the Internet and in distributed software systems built on the Internet.
Why is this hard? Valid computationally tractable models that exhibit macroscopic behavior and reveal causality are difficult to devise. Phase-transitions are difficult to predict and control.
Who would care? All designers and users of networks and distributed systems, which have a 25-year history of unexpected failures:
- ARPAnet congestion collapse of 1980
- Internet congestion collapse of Oct 1986
- Cascading failure of AT&T long-distance network in Jan 1990
- Collapse of AT&T frame-relay network in April 1998 …
- “Cost of eBay's 22-Hour Outage Put At $2 Million”, Ecommerce, Jun 1999
- “Last Week’s Internet Outages Cost $1.2 Billion”, Dave Murphy, Yankee Group, Feb 2000
- “…the Internet "basically collapsed" Monday”, Samuel Kessler, Symantec, Oct 2003
- “Network crashes…cost medium-sized businesses a full 1% of annual revenues”, Technology News, Mar 2006
- “costs to the U.S. economy…range…from $65.6 M for a 10-day [Internet] outage at an automobile parts plant to $404.76 M for … failure …at an oil refinery”, Dartmouth study, Jun 2006
- DoD to spend $13 B over the next 5 yrs on Net-Centric Enterprise Services initiative, Government Computer News, 2005
- Market derived from Web services to reach $34 billion by 2010, IDC
- Grid computing market to exceed $12 billion in revenue by 2007, IDC
- Market for wireless sensor networks to reach $5.3 billion in 2010, ONWorld
- Revenue in mobile networks market will grow to $28 billion in 2011, Global Information, Inc.
- Market for service robots to reach $24 billion by 2010, International Federation of Robotics
Hard Issues & Plausible Approaches
| Hard Issues | Plausible Approaches |
| --- | --- |
| H1. Model scale | A1. Scale-reduction techniques |
| H2. Model validation | A2. Sensitivity analysis & key comparisons |
| H3. Tractable analysis | A3. Cluster analysis and statistical analyses |
| H4. Causal analysis | A4. Evaluate analysis techniques |
| H5. Controlling behavior | A5. Evaluate distributed control regimes |
Model scale – Systems of interest (e.g., the Internet and compute grids) extend over large spatiotemporal extents, have global reach, consist of millions of components, and interact through many adaptive mechanisms over various timescales. Scale-reduction techniques must be employed. Which computational models can achieve sufficient spatiotemporal scaling properties? Micro-scale models are not computable at large spatiotemporal scale. Macro-scale models are computable and might exhibit global behavior, but can they reveal causality? Meso-scale models might exhibit global behavior and reveal causality, but are they computable? One plausible approach is to investigate abstract models from the physical sciences, e.g., fluid flows (from hydrodynamics), lattice automata (from gas chemistry), Boolean networks (from biology), and agent automata (from geography). We can apply parallel computing to scale to millions of components and days of simulated time. Scale reduction may also be achieved by adopting n-level experiments coupled with orthogonal fractional factorial (OFF) experiment designs.
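To illustrate how fractional-factorial designs achieve scale reduction, here is a minimal two-level sketch (the function and the choice of generators are invented for illustration, not the project's actual design): seven factors are covered in 16 runs instead of the 128 a full factorial would need.

```python
from itertools import product

def fractional_factorial(n_base, generators):
    """Build a 2-level fractional factorial design with coded levels -1/+1.

    n_base     -- number of base factors, varied in a full 2**n_base factorial
    generators -- each extra factor is the product of a subset of base
                  factors, given as index tuples; e.g. (0, 1, 2) confounds
                  the new factor with that 3-way interaction
    """
    runs = []
    for base in product((-1, +1), repeat=n_base):
        row = list(base)
        for g in generators:
            level = 1
            for i in g:
                level *= base[i]   # generator column = product of base columns
            row.append(level)
        runs.append(row)
    return runs

# 7 factors in 16 runs instead of 2**7 = 128: a 2^(7-3) design
design = fractional_factorial(4, [(0, 1, 2), (1, 2, 3), (0, 2, 3)])
print(len(design), "runs x", len(design[0]), "factors")  # 16 runs x 7 factors
```

Each column is balanced (equal counts of -1 and +1), so main effects can still be estimated from the reduced run set.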
Model validation – Scalable models from the physical sciences (e.g., differential equations, cellular automata, NK Boolean networks) tend to be highly abstract. Can sufficient fidelity be obtained to convince domain experts of the value of insights gained from such abstract models? We can conduct sensitivity analyses to ensure a model exhibits relationships that match known relationships from other accepted models and from empirical measurements. Sensitivity analysis also enables us to understand relationships between model parameters and responses. We can also conduct key comparisons along three complementary paths: (1) comparing model data against existing traffic measurements and analyses, (2) comparing results from subsets of macro/meso-scale models against micro-scale models, and (3) comparing simulations of distributed control regimes against results from implementations in test facilities, such as the Global Environment for Network Innovations.
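The main-effects step of such a sensitivity analysis can be sketched as follows, run against a toy stand-in for an expensive simulation (the model form and factor names are invented for illustration):

```python
import random
from statistics import mean

# Toy stand-in for an expensive simulation run: the response depends
# strongly on factor x0, weakly on x1, and not at all on x2.
def toy_model(x0, x1, x2):
    return 5.0 * x0 + 0.5 * x1 + random.gauss(0.0, 0.1)

random.seed(1)
levels = (-1, +1)
runs = [(a, b, c) for a in levels for b in levels for c in levels]
responses = [toy_model(*r) for r in runs]

# Main effect of factor i: mean response at +1 minus mean response at -1.
effects = []
for i in range(3):
    hi = mean(y for r, y in zip(runs, responses) if r[i] == +1)
    lo = mean(y for r, y in zip(runs, responses) if r[i] == -1)
    effects.append(hi - lo)
    print(f"factor {i}: main effect = {effects[-1]:+.2f}")
```

A large estimated effect for x0 and a near-zero effect for x2 matches the known structure of the toy model; the same check against accepted models or measurements is what builds confidence in an abstract model.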
Tractable analysis – The scale of potential measurement data is expected to be very large – O(10^15) values – with millions of elements, tens of variables, and millions of seconds of simulated time. How can measurement data be analyzed tractably? We could use homogeneous models, which allow one (or a few) elements to be sampled as representative of all. This reduces data volume to 10^6–10^7 values, which is amenable to statistical analyses (e.g., power-spectral density, wavelets, entropy, Kolmogorov complexity) and to visualization. Where homogeneous models are inappropriate, we can use cluster analysis to view relationships among groups of responses. We can also exploit correlation analysis and principal components analysis to identify and exclude redundant responses from collected data. Finally, we can construct combinations of statistical tests and multidimensional data visualization techniques tailored to specific experiments and data of interest.
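One plausible form of the correlation-based redundancy screen (the response matrix is synthetic and the 0.95 threshold is an arbitrary illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
# Toy response matrix: columns are measured responses; column 2 is nearly
# a copy of column 0, so it carries little independent information.
r0 = rng.normal(size=n)
r1 = rng.normal(size=n)
r2 = r0 + rng.normal(scale=0.01, size=n)
responses = np.column_stack([r0, r1, r2])

corr = np.corrcoef(responses, rowvar=False)
keep = []
for j in range(corr.shape[0]):
    # Drop a response if it correlates strongly with one already kept.
    if all(abs(corr[j, k]) < 0.95 for k in keep):
        keep.append(j)
print("kept responses:", keep)  # kept responses: [0, 1]
```

Excluding the redundant column up front shrinks every downstream analysis and visualization by a third in this toy case; at 10^15-value scale, the same screen applies per response variable rather than per sample.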
Causal analysis – Tractable analysis strategies yield coarse data with limited granularity of timescales, variables, and spatial extents. Coarseness may reveal macroscopic behavior that is not explainable from the data alone. For example, an unexpected collapse in the probability density function of job-completion times in a computing grid was unexplainable without more detailed data and analysis. Multidimensional analysis can represent system state as a multidimensional space and depict system dynamics through various projections (e.g., slicing, aggregation, scaling). State-space analysis can segment system dynamics into an attractor-basin field and then monitor trajectories. Markov models, which provide compact, computationally efficient representations of system behavior, can be subjected to perturbation analyses to identify potential failure modes and their causes.
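Markov-chain perturbation analysis in miniature: a toy three-state availability model whose long-run failure probability is recomputed after perturbing one row of the transition matrix (the states and probabilities are invented; the project's actual models are far larger).

```python
import numpy as np

def stationary(P):
    """Stationary distribution of a row-stochastic matrix P (Perron vector)."""
    vals, vecs = np.linalg.eig(P.T)
    v = np.real(vecs[:, np.argmax(np.real(vals))])
    return v / v.sum()

# Toy model. States: 0 = healthy, 1 = degraded, 2 = failed.
P = np.array([[0.95, 0.04, 0.01],
              [0.50, 0.45, 0.05],
              [0.20, 0.00, 0.80]])

# Perturbation: shift probability mass in the degraded row toward failure.
Q = P.copy()
Q[1] = [0.30, 0.40, 0.30]

base, pert = stationary(P), stationary(Q)
print("long-run P(failed): before =", round(base[2], 3),
      "after =", round(pert[2], 3))
```

Sweeping such perturbations over many transitions, and ranking them by the change they induce in the stationary distribution, flags the transitions whose degradation most endangers the system.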
Controlling behavior – Large distributed systems and networks cannot be subjected to centralized control regimes because such systems consist of too many elements, too many parameters, too much change, and too many policies. Can models and analysis methods be used to determine how well decentralized control regimes stimulate desirable system-wide behaviors? We can use price feedback (e.g., auctions, present-value analysis, or commodity markets) to modulate supply and demand for resources or services, and biological processes to differentiate function based on environmental feedback (e.g., morphogen gradients, chemotaxis, local and lateral inhibition, polarity inversion, quorum sensing, energy exchange, and reinforcement).
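The price-feedback idea in miniature: a price-adjustment loop that nudges a resource price until supply meets demand, with no central allocator (the linear demand/supply curves and step size are illustrative assumptions):

```python
# Minimal sketch of price feedback for decentralized resource control.

def demand(price):   # clients request less as the price rises
    return max(0.0, 100.0 - 8.0 * price)

def supply(price):   # providers offer more as the price rises
    return 12.0 * price

price, step = 1.0, 0.01
for _ in range(200):
    excess = demand(price) - supply(price)
    price += step * excess   # raise price when demand exceeds supply
print(f"clearing price ~ {price:.2f}, demand ~ {demand(price):.1f}")
# clearing price ~ 5.00, demand ~ 60.0
```

Each participant reacts only to the posted price, yet the system settles at the market-clearing point (here 100 - 8p = 12p, so p = 5); the research question is whether such regimes remain stable at Internet scale.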
Additional Technical Details:
Related Presentations
- K. Mills, "The Influence of Realism on Congestion in Network Simulations", C4I Seminar, George Mason University, Fairfax, VA, February 1, 2016.
- K. Mills, "The Influence of Realism on Congestion in Network Simulations", Applied & Computational Mathematics Seminar Series, NIST, Gaithersburg, MD, December 1, 2015.
- K. Mills, "Validating Simulations of Large-scale Computer Networks", ITL Science Day, Gaithersburg, MD, October 27, 2015.
- C. Dabrowski and K. Mills, "The influence of realism on congestion in network simulations", poster presented at ITL Science Day, Gaithersburg, MD, October 27, 2015.
- K. Mills, "Predicting Global Failure Regimes in Complex Information Systems", NIST Cloud Computing Forum and Workshop 8, Gaithersburg, MD, July 9, 2015.
- J. Xie, Y. Wan, Y. Zhou, K. Mills, J. Filliben, and Y. Lei, "Effective and Scalable Uncertainty Evaluation for Large-Scale Complex System Applications", Winter Simulation Conference, Savannah, GA, December 10, 2014.
- K. Mills, C. Dabrowski, J. Filliben, F. Hunt, and B. Rust, "Early Warning of Network Catastrophes", FY2015 IMS Oral Presentation, Gaithersburg, MD, August 28, 2014.
- K. Mills, M. Mijic and S. Morgan, "Cloud Reliability Breakout", NIST Workshop and Forum on Cloud Computing and Mobility, Gaithersburg, MD, March 25-27, 2014.
- K. Mills, "Combining Genetic Algorithms & Simulation to Search for Failure Scenarios in System Models", Computer Science Interdisciplinary Seminar, George Mason University, February 19, 2014.
- K. Mills, "Predicting the Unpredictable in Complex Information Systems", Joint University of Maryland and NIST Network Science Symposium, College Park, Maryland, January 24, 2014.
- K. Mills, "Combining Genetic Algorithms & Simulation to Search for Failure Scenarios in System Models", SIMUL 2013 paper presentation, Venice, Italy, October 29, 2013.
- K. Mills, "Predicting the Unpredictable in Complex Information Systems", SIMUL 2013 Keynote, Venice, Italy, October 28, 2013.
- K. Mills, "Combining Genetic Algorithms & Simulation to Search for Failure Scenarios in System Models", Mitre Cyber Security Technical Center Distinguished Lecture, McLean, Virginia, October 16, 2013.
- K. Mills, "Combining Genetic Algorithms & Simulation to Search for Failure Scenarios in System Models", presentation for the NIST Cloud Computing Security Working Group, July 17, 2013.
- K. Mills, "Understanding Behavior and Improving Reliability in Complex Information Systems", keynote presentation at the 4th PI meeting for the DARPA Mission-Resilient Cloud program, Park Ridge, NJ, May 8-10, 2013.
- K. Mills, J. Filliben and C. Dabrowski, "Using Genetic Algorithms to Search for Failure Scenarios", poster presentation at Cloud Computing & Big Data Forum & Workshop, NIST, January 15-17, 2013.
- K. Mills, "Predicting the Unpredictable in Complex Information Systems", keynote presentation at the IEEE/ACM 5th International Conference on Utility & Cloud Computing, Chicago, Illinois, November 5-8, 2012.
- K. Mills, J. Filliben and C. Dabrowski, "Predicting Global Failure Regimes in Complex Information Systems", presentation at the DoE COMBINE Workshop, Washington, DC, September 11-12, 2012.
- C. Dabrowski, J. Filliben, K. Mills, S. Ressler and B. Rust, "Mitigating Global Failure Regimes in Large Distributed Systems", poster presented at the Lawrence Livermore Workshop on Current Challenges in Computing 2012: Network Science, Napa, CA, August 28-29, 2012.
- A. Haines, "Determining Important Control Parameters of a Genetic Algorithm", Summer Undergraduate Research Fellowship Plenary Presentation, NIST, Gaithersburg, MD, August 7, 2012.
- C. Dabrowski, J. Filliben and K. Mills, "Predicting Global Failure Regimes in Complex Information Networks", Santa Fe Institute Workshop on Measurement of Complex Information Networks, Mitre, McLean, Virginia, July 12, 2012.
- C. Dabrowski, J. Filliben and K. Mills, "Predicting Global Failure Regimes in Complex Information Systems", NetONets 2012, Systemic Risk and Infrastructural Interdependencies, Northwestern University, June 19, 2012.
- K. Mills, J. Filliben and C. Dabrowski, "Improving Cloud Reliability", NIST Cloud Computing Forum & Workshop V, Department of Commerce, Washington, D.C., June 5-7, 2012.
- C. Dabrowski, J. Filliben, K. Mills, S. Ressler and B. Rust, "Poster on Mitigating Global Failure Regimes in Large Distributed Systems", presented at the NIST Cloud Computing Forum & Workshop V, Department of Commerce, Washington, D.C., June 5-7, 2012.
- K. Mills, J. Filliben and C. Dabrowski, "Comparing VM-Placement Algorithms for On-Demand Clouds", Large-Scale Networking Working Group, Arlington, VA, Feb. 14, 2012.
- C. Dabrowski and K. Mills, "VM Leakage & Orphan Control in Open-Source Clouds", IEEE CloudCom 2011, Athens, Dec. 1, 2011.
- K. Mills, J. Filliben and C. Dabrowski, "Comparing VM-Placement Algorithms for On-Demand Clouds", IEEE CloudCom 2011, Athens, Nov. 30, 2011.
- K. Mills, C. Dabrowski, J. Filliben and F. Hunt, "Posters Presented at NIST Cloud Computing Forum & Workshop IV", Gaithersburg, MD, Nov. 3-4, 2011.
- J. Filliben and K. Mills, "Comparison of Two Dimension-Reduction Methods for Network Simulation Models", Statistical Engineering Division Seminar, NIST, Gaithersburg, MD, Sept. 22, 2011.
- C. Dabrowski and F. Hunt, "Using Markov Chain and Graph Theory Concepts to Analyze Behavior in Complex Distributed Systems", 23rd European Modeling and Simulation Symposium, Rome, Sept. 13, 2011.
- K. Mills, J. Filliben, D.-Y. Cho and E. Schwartz, "Predicting Macroscopic Dynamics in Large Distributed Systems", American Society of Mechanical Engineers 2011 Conference on Pressure Vessels & Piping, Baltimore, MD, July 21, 2011.
- C Dabrowski and F. Hunt, "Identifying Failure Scenarios in Complex Systems by Perturbing Markov Chain Models", American Society of Mechanical Engineers 2011 Conference on Pressure Vessels & Piping, Baltimore, MD, July 21, 2011.
- K. Mills, J. Filliben and C. Dabrowski, "An Efficient Sensitivity Analysis Method for Large Cloud Simulations", IEEE Cloud 2011, Washington, D.C., July 8, 2011.
- K. Mills, J. Filliben, D.-Y. Cho and E. Schwartz, "Predicting Macroscopic Dynamics in Large Distributed Systems", LSN Seminar on Complex Networks and Information Systems, Gaithersburg, Maryland, June 30, 2011.
- K. Mills, J. Filliben, C. Dabrowski and S. Ressler, "Posters Presenting NIST Work on Measurement Science for Complex Systems, as Applied to Cloud Computing Systems", NIST Cloud Computing Forum & Workshop III, Gaithersburg, Maryland, April 7-8, 2011.
- K. Mills, E. Schwartz and J. Yuan, "How to Model a TCP/IP Network using only 20 Parameters", Winter Simulation Conference (WSC 2010), Baltimore, Maryland, Dec. 8, 2010.
- K. Mills and J. Filliben, "Using Sensitivity Analysis to Identify Significant Parameters in a Network Simulation", Winter Simulation Conference (WSC 2010), Baltimore, Maryland, Dec. 6, 2010.
- K. Mills and J. Filliben, "Comparing Two Dimension-Reduction Methods for Network Simulation Models", Winter Simulation Conference (WSC 2010), Baltimore, Maryland, Dec. 6, 2010.
- K. Mills, "Study of Proposed Internet Congestion Control Algorithms", Internet Congestion Control Research Group (ICCRG) of the Internet Research Task Force (IRTF) at the 77th Internet Engineering Task Force (IETF) meeting at Anaheim, California, March 24, 2010.
- K. Mills and J. Filliben, "An Efficient Sensitivity Analysis Method for Mesoscopic Network Models", Complex Systems Study Group, NIST, February 2, 2010
- K. Mills, "Study of Proposed Internet Congestion Control Algorithms", seminar sponsored by the Computer Science Department and the C4I Center at George Mason University, Fairfax, Virginia, January 29, 2010.
- K. Mills, "How to Model a TCP/IP Network using only 20 Parameters", Complex Systems Study Group, NIST, November 17, 2009.
- K. Mills, "Measurement Science for Complex Information Systems", invited presentation to the Internet Congestion-Control Research Group (ICC-RG) of the Internet Research Task Force (IRTF) at Tokyo, Japan, May 20, 2009.
- K. Mills, "Measurement Science for Complex Information Systems", seminar sponsored by the Computer Science Department and the C4I Center at George Mason University, Fairfax, Virginia, March 27, 2009.
- K. Mills, "Measurement Science for Complex Information Systems", AOL Network Architecture Group, Dulles, Virginia, March 18, 2009.
- K. Mills, "Measurement Science for Complex Information Systems", NITRD Large-Scale Networking Working Group, Ballston, Virginia, March 10, 2009.
- K. Mills, "Progress Report on Measurement Science for Complex Information Systems", Complex Systems Lecture Series, NIST Information Technology Laboratory, Gaithersburg, Maryland, January 27, 2009.
- J. Filliben, "Sensitivity Analysis Methodology for a Complex System Computational Model", 39th Symposium on the Interface: Computing Science and Statistics, Philadelphia, PA, May 26, 2007.
- C. Dabrowski and K. Mills, "A Program of Work for Understanding Emergent Behavior in Global Grid Systems", Global Grid Forum 16, Athens, Greece, February 13, 2006.
Major Accomplishments:
Mar 2015 The project demonstrated that results from a previous study of virtual-machine placement algorithms in computational clouds would not be changed by the injection of asymmetries, dynamics, and failures. This demonstration increased confidence in findings from the previous study.
Dec 2014 The project delivered an effective and scalable method for uncertainty estimation in large-scale simulation models. The method, described in a paper in the proceedings of the 2014 Winter Simulation Conference, can be applied to provide accurate estimation of the value of model responses. The estimation algorithm requires a minimum of computation.
Sep 2014 The project delivered an experiment design and analysis method to determine effective settings for control parameters in evolutionary computation algorithms. The method was documented in a journal article accepted for publication by Evolutionary Computation, MIT Press, which is the leading journal in the field.
Aug 2014 The project delivered a proposal and oral presentation outlining research into methods to provide early warning of network catastrophes. The proposal and oral presentation were part of the FY 2015 NIST competition seeking innovations in measurement science.
Oct 2013 The project delivered an evaluation of a method combining genetic algorithms and simulation to search for failure scenarios in system models. The method was applied to a case study of the Koala cloud computing model. The method was able to discover a known failure cause, but in a novel setting, and was also able to discover several unknown failure scenarios. Subsequently, the method and evaluation were presented at an international workshop on simulation methods, and in two invited lectures, one at Mitre and one at George Mason University.
Dec 2012 In the fall of 2012, Dr. Mills contributed methods from this project to a DoE Office of Science Workshop on Computational Modeling of Big Networks (COMBINE). Dr. Mills also coauthored the report, which was published in December of 2012. The main NIST contributions are documented in Chapter 5 of the report, which outlines effective methods and best practices for experiment design and validation & verification of simulation models.
Nov 2011 In the fall of 2009, this project started investigating large scale behavior in Infrastructure Clouds. The project produced three related papers during 2011, and all three papers were accepted at the two major IEEE cloud computing conferences held during the year. The rapid success of the project in this new domain illustrates the general applicability of the methods we developed, as well as the ease with which those methods can be applied.
Nov 2010 Developed and demonstrated Koala, a discrete-event simulator for Infrastructure Clouds. Completed a sensitivity analysis of Koala to identify unique response dimensions and significant factors driving model behavior. Created multidimensional animations to visualize spatiotemporal variation in resource usage and load for cores, disks, memory and network interfaces in clouds with up to O(10^5) nodes.
May 2010 NIST Special Publication 500-282: Study of Proposed Internet Congestion Control Mechanisms
Sep 2009 Draft NIST Special Publication: Study of Proposed Internet Congestion-Control Mechanisms
Apr 2009 Demonstrated applicability of Markov model perturbation analysis to communication networks.
Sep 2008 Developed a Markov model for a global computational grid and demonstrated the feasibility of applying perturbation analysis to predict conditions that could lead to performance degradation. Currently, perturbation analysis is largely a theoretical topic; this work shows applications to large distributed systems.
Aug 2008 Developed and demonstrated multidimensional visualization software to explore relationships among complex data sets derived from simulations of large distributed systems. Currently, there are no widely used visualization techniques to explore multidimensional data from simulations of large distributed systems.
Jun 2008 Developed and demonstrated an analytical framework to understand relationships among pricing, admission control and scheduling for resource allocation in computing clusters. Currently, resource-allocation mechanisms for computing clusters rely on heuristics.
Apr 2008 Developed and validated MesoNetHS, which adds six proposed replacement congestion-control algorithms to MesoNet and allows the behavior of the algorithms to be investigated in a large topology. Currently, these congestion-control algorithms are explored in simulated and empirical topologies of small size.
Sep 2007 Developed and demonstrated a methodology for sensitivity analysis of models of large distributed systems. Currently, sensitivity analysis of models for large distributed systems is considered infeasible.
Apr 2007 Developed and verified MesoNet, a mesoscopic scale network simulation model that can be specified with about 20 parameters. Currently, specifying most network simulations requires hundreds to thousands of parameters.
Last Updated Date: 02/23/2016
