Monte Carlo methods are frequently employed to evaluate the overall characteristics of performance functions that are non-monotonic, non-linear and non-superpositional. However, the multi-parameter, multi-objective spacecraft separation dynamics model cannot be decoupled to yield such a result directly. This paper presents a parametric objective function that can be sampled. The reliability analysis of the complex non-linear spacecraft separation model is combined with Automated Dynamic Analysis of Mechanical Systems (ADAMS), and the Monte Carlo method is used to obtain the reliability profile of the separation system, that is, the distribution of its separation performance. The performance distribution of the spacecraft separation system was determined, and AdaBoost machine-learning regression showed that parameters such as spring separation force, spring line of action, module mass and module centre-of-mass position have a significant effect on the spacecraft separation dynamics.
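As a rough illustration of this workflow (Monte Carlo sampling of the parameter space followed by a boosted-regression sensitivity ranking), the sketch below substitutes a toy analytic performance function for the ADAMS separation model; all parameter names, ranges and the response function are illustrative assumptions, not values from the paper.

```python
import numpy as np
from sklearn.ensemble import AdaBoostRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(42)
n = 2000

# Monte Carlo sampling of the separation-system parameters (toy ranges).
spring_force   = rng.normal(400.0, 20.0, n)    # N
line_of_action = rng.normal(0.0, 0.5, n)       # deg offset
module_mass    = rng.normal(150.0, 5.0, n)     # kg
com_offset     = rng.normal(0.0, 2.0, n)       # mm

X = np.column_stack([spring_force, line_of_action, module_mass, com_offset])

# Stand-in performance function; in the paper each sample would instead be
# evaluated by the ADAMS separation-dynamics model.
separation_velocity = (spring_force / module_mass
                       - 0.05 * np.abs(line_of_action)
                       - 0.01 * np.abs(com_offset)
                       + rng.normal(0.0, 0.02, n))

# Empirical distribution of the performance metric (the "reliability profile").
print("P5 / P50 / P95:", np.percentile(separation_velocity, [5, 50, 95]))

# AdaBoost regression to rank parameter influence on separation performance.
model = AdaBoostRegressor(DecisionTreeRegressor(max_depth=3),
                          n_estimators=200, random_state=0)
model.fit(X, separation_velocity)
print(dict(zip(["spring_force", "line_of_action", "module_mass", "com_offset"],
               model.feature_importances_.round(3))))
```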
Based on the long-running Probability Theory course at the Sapienza University of Rome, this book offers a fresh and in-depth approach to probability and statistics, while remaining intuitive and accessible in style. The fundamentals of probability theory are elegantly presented, supported by numerous examples and illustrations, and modern applications are later introduced, giving readers an appreciation of current research topics. The text covers distribution functions, statistical inference and data analysis, and more advanced methods, including Markov chains and Poisson processes, widely used in dynamical systems and data science research. The concluding section, 'Entropy, Probability and Statistical Mechanics', unites key concepts from the text with the authors' impressive research experience to provide a clear illustration of these powerful statistical tools in action. Ideal for students and researchers in the quantitative sciences, this book provides an authoritative account of probability theory, written by leading researchers in the field.
This chapter covers the quantum algorithmic primitive called Gibbs sampling. Gibbs sampling accomplishes the task of preparing a digital representation of the thermal state, also known as the Gibbs state, of a quantum system in thermal equilibrium. Gibbs sampling is an important ingredient in quantum algorithms to simulate physical systems. We cover multiple approaches to Gibbs sampling, including algorithms that are analogues of classical Markov chain Monte Carlo algorithms.
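The chapter's quantum algorithms build on classical intuition; as a purely illustrative point of reference (not taken from the chapter), here is a minimal classical heat-bath Gibbs sampler for a one-dimensional Ising chain, the kind of Markov chain Monte Carlo routine whose quantum analogues are discussed:

```python
import numpy as np

rng = np.random.default_rng(1)

def gibbs_ising_chain(n_spins=50, beta=0.5, sweeps=1000):
    """Classical heat-bath (Gibbs) sampler for a 1D Ising chain with
    periodic boundaries; a thermal-state analogue of Gibbs-state preparation."""
    s = rng.choice([-1, 1], size=n_spins)
    for _ in range(sweeps):
        for i in range(n_spins):
            # Local field from the two neighbours.
            h = s[(i - 1) % n_spins] + s[(i + 1) % n_spins]
            p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * h))
            s[i] = 1 if rng.random() < p_up else -1
    return s

sample = gibbs_ising_chain()
print("magnetisation per spin:", sample.mean())
```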
This chapter surveys some of the many types of models used in science, and some of the many ways scientists use models. Of particular interest for our purposes are the relationships between models and other aspects of scientific inquiry, such as data, experiments, and theories. Our discussion shows important ways in which modeling can be thought of as a distinct and autonomous scientific activity, but also how models can be crucial for making use of data and theories and for performing experiments. The growing reliance on simulation models has raised new and important questions about the kind of knowledge gained by simulations and the relationship between simulation and experimentation. Is it important to distinguish between simulation and experimentation, and if so, why?
The Vale–Maurelli (VM) approach to generating non-normal multivariate data involves the use of Fleishman polynomials applied to an underlying Gaussian random vector. This method has been extensively used in Monte Carlo studies during the last three decades to investigate the finite-sample performance of estimators under non-Gaussian conditions. The validity of conclusions drawn from these studies clearly depends on the range of distributions obtainable with the VM method. We deduce the distribution and the copula for a vector generated by a generalized VM transformation, and show that it is fundamentally linked to the underlying Gaussian distribution and copula. In the process we derive the distribution of the Fleishman polynomial in full generality. While data generated with the VM approach appears to be highly non-normal, its truly multivariate properties are close to the Gaussian case. A Monte Carlo study illustrates that generating data with a different copula than that implied by the VM approach severely weakens the performance of normal-theory based ML estimates.
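To make the construction concrete, here is a minimal sketch of a VM-style generator in Python: correlated Gaussian draws are pushed through third-order Fleishman polynomials. The coefficients below are illustrative placeholders rather than values fitted to target skewness and kurtosis, but they show why the margins can look strongly non-normal while the dependence structure (copula) remains essentially Gaussian.

```python
import numpy as np

rng = np.random.default_rng(0)

# Intermediate Gaussian correlation and illustrative Fleishman coefficients
# (a, b, c, d); in practice these are solved to match the desired skewness
# and kurtosis of each margin.
rho = np.array([[1.0, 0.6],
                [0.6, 1.0]])
coefs = [(-0.2, 0.9, 0.2, 0.05),
         (-0.1, 1.0, 0.1, 0.02)]

z = rng.multivariate_normal(np.zeros(2), rho, size=10_000)

# Vale-Maurelli: apply a Fleishman polynomial to each Gaussian component.
x = np.empty_like(z)
for j, (a, b, c, d) in enumerate(coefs):
    x[:, j] = a + b * z[:, j] + c * z[:, j] ** 2 + d * z[:, j] ** 3

# Margins are visibly non-normal, but the copula stays tied to the Gaussian one.
print("sample skewness:", (((x - x.mean(0)) ** 3).mean(0) / x.std(0) ** 3).round(3))
print("sample correlation:", np.corrcoef(x.T)[0, 1].round(3))
```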
Unless data are missing completely at random (MCAR), proper methodology is crucial for the analysis of incomplete data. Consequently, methods for effectively testing the MCAR mechanism become important, and procedures were developed via testing the homogeneity of means and variances–covariances across the observed patterns (e.g., Kim & Bentler in Psychometrika 67:609–624, 2002; Little in J Am Stat Assoc 83:1198–1202, 1988). The current article shows that the population counterparts of the sample means and covariances of a given pattern of the observed data depend on the underlying structure that generates the data, and the normal-distribution-based maximum likelihood estimates for different patterns of the observed sample can converge to the same values even when data are missing at random or missing not at random, although the values may not equal those of the underlying population distribution. The results imply that statistics developed for testing the homogeneity of means and covariances cannot be safely used for testing the MCAR mechanism even when the population distribution is multivariate normal.
The paper clarifies the relationship among several information matrices for the maximum likelihood estimates (MLEs) of item parameters. It shows that the process of calculating the observed information matrix also generates a related matrix that is the middle piece of a sandwich-type covariance matrix. Monte Carlo results indicate that standard errors (SEs) based on the observed information matrix are robust to many, but not all, conditions of model/distribution misspecification. SEs based on the sandwich-type covariance matrix perform most consistently across conditions. Results also suggest that SEs based on other matrices are either inconsistent or do not perform as robustly as those based on the sandwich-type covariance matrix or the observed information matrix.
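The contrast between the two covariance estimates can be sketched on a generic maximum likelihood problem. The snippet below uses logistic regression rather than the item-response models studied in the paper, purely to show how the per-observation scores form the "middle piece" B and how the sandwich H⁻¹BH⁻¹ differs from the inverse observed information:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 500, 3
X = np.column_stack([np.ones(n), rng.standard_normal((n, p - 1))])
beta_true = np.array([0.3, -0.8, 0.5])
y = rng.binomial(1, 1 / (1 + np.exp(-X @ beta_true)))

# Newton-Raphson MLE for logistic regression.
beta = np.zeros(p)
for _ in range(25):
    mu = 1 / (1 + np.exp(-X @ beta))
    H = X.T @ ((mu * (1 - mu))[:, None] * X)    # observed information
    beta += np.linalg.solve(H, X.T @ (y - mu))

mu = 1 / (1 + np.exp(-X @ beta))
H = X.T @ ((mu * (1 - mu))[:, None] * X)
scores = (y - mu)[:, None] * X                  # per-observation score vectors
B = scores.T @ scores                           # middle piece of the sandwich

se_info     = np.sqrt(np.diag(np.linalg.inv(H)))
se_sandwich = np.sqrt(np.diag(np.linalg.inv(H) @ B @ np.linalg.inv(H)))
print("information-based SEs:", se_info.round(3))
print("sandwich SEs:         ", se_sandwich.round(3))
```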
Efron's Monte Carlo bootstrap algorithm is shown to cause degeneracies in Pearson's r for sufficiently small samples. Two ways of preventing this problem when programming the bootstrap of r are considered.
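One common guard when coding such a bootstrap, shown below as a hedged sketch (it is not necessarily either of the paper's two remedies), is to redraw any resample in which either variable is constant, since Pearson's r is undefined there:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_r(x, y, n_boot=2000, max_tries=100):
    """Bootstrap Pearson's r, redrawing any resample in which either
    variable is constant (where r would be undefined/degenerate)."""
    n = len(x)
    out = np.empty(n_boot)
    for b in range(n_boot):
        for _ in range(max_tries):
            idx = rng.integers(0, n, size=n)
            xs, ys = x[idx], y[idx]
            if xs.std() > 0 and ys.std() > 0:
                out[b] = np.corrcoef(xs, ys)[0, 1]
                break
        else:
            out[b] = np.nan   # give up on a pathological resample
    return out

x = np.array([1.0, 2.0, 2.0, 3.0, 5.0])
y = np.array([2.1, 1.9, 3.2, 3.0, 4.8])
r_boot = bootstrap_r(x, y)
print("bootstrap SE of r:", np.nanstd(r_boot).round(3))
```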
We describe methods for assessing all possible criteria (i.e., dependent variables) and subsets of criteria for regression models with a fixed set of predictors, x (where x is an n×1 vector of independent variables). Our methods build upon the geometry of regression coefficients (hereafter called regression weights) in n-dimensional space. For a full-rank predictor correlation matrix, Rxx, of order n, and for regression models with constant R2 (coefficient of determination), the OLS weight vectors for all possible criteria terminate on the surface of an n-dimensional ellipsoid. The population performance of alternate regression weights—such as equal weights, correlation weights, or rounded weights—can be modeled as a function of the Cartesian coordinates of the ellipsoid. These geometrical notions can be easily extended to assess the sampling performance of alternate regression weights in models with either fixed or random predictors and for models with any value of R2. To illustrate these ideas, we describe algorithms and R (R Development Core Team, 2009) code for: (1) generating points that are uniformly distributed on the surface of an n-dimensional ellipsoid, (2) populating the set of regression (weight) vectors that define an elliptical arc in ℝn, and (3) populating the set of regression vectors that have constant cosine with a target vector in ℝn. Each algorithm is illustrated with real data. The examples demonstrate the usefulness of studying all possible criteria when evaluating alternate regression weights in regression models with a fixed set of predictors.
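As a simplified stand-in for algorithm (1) (the paper supplies R code; this sketch is in Python and maps uniform points on the unit sphere onto the ellipsoid, which is not exactly uniform in surface area), the key fact it illustrates is that every generated weight vector b satisfies b′Rxx b = R²:

```python
import numpy as np

rng = np.random.default_rng(0)

def points_on_weight_ellipsoid(Rxx, r2, n_points=1000):
    """Map points on the unit sphere onto the ellipsoid b' Rxx b = R^2,
    the surface on which OLS weight vectors with constant R^2 terminate.
    (This simple map is uniform on the sphere, not in surface area.)"""
    n = Rxx.shape[0]
    z = rng.standard_normal((n_points, n))
    z /= np.linalg.norm(z, axis=1, keepdims=True)     # uniform on unit sphere
    L = np.linalg.cholesky(Rxx)                       # Rxx = L L'
    b = np.sqrt(r2) * np.linalg.solve(L.T, z.T).T     # b = sqrt(R^2) (L')^{-1} z
    return b

Rxx = np.array([[1.0, 0.3, 0.2],
                [0.3, 1.0, 0.4],
                [0.2, 0.4, 1.0]])
b = points_on_weight_ellipsoid(Rxx, r2=0.5)
print(np.allclose(np.einsum('ij,jk,ik->i', b, Rxx, b), 0.5))
```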
In the framework of a robustness study on maximum likelihood estimation with LISREL, three types of problems are dealt with: nonconvergence, improper solutions, and the choice of starting values. The purpose of the paper is to illustrate why and to what extent these problems are of importance for users of LISREL. The ways in which these issues may affect the design and conclusions of robustness research are also discussed.
A Monte Carlo study assessed the effect of sampling error and model characteristics on the occurrence of nonconvergent solutions, improper solutions and the distribution of goodness-of-fit indices in maximum likelihood confirmatory factor analysis. Nonconvergent and improper solutions occurred more frequently for smaller sample sizes and for models with fewer indicators of each factor. Effects of practical significance due to sample size, the number of indicators per factor and the number of factors were found for GFI, AGFI, and RMR, whereas no practical effects were found for the probability values associated with the chi-square likelihood ratio test.
An examination is made concerning the utility and design of studies comparing nonmetric scaling algorithms and their initial configurations, as well as the agreement between the results of such studies. Various practical details of nonmetric scaling are also considered.
The paper obtains consistent standard errors (SE) and biases of order O(1/n) for the sample standardized regression coefficients with both random and given predictors. Analytical results indicate that the formulas for SEs given in popular textbooks are consistent only when the population value of the regression coefficient is zero. The sample standardized regression coefficients are also biased in general, although this should not be a concern in practice when the sample size is not too small. Monte Carlo results imply that, for both standardized and unstandardized sample regression coefficients, SE estimates based on asymptotics tend to under-predict the empirical ones at smaller sample sizes.
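A quick Monte Carlo in the spirit of this comparison, with illustrative settings of my own choosing rather than the paper's: the naive "textbook" SE obtained by rescaling the unstandardized SE by s_x/s_y is compared against the empirical standard deviation of the standardized coefficient across replications.

```python
import numpy as np

rng = np.random.default_rng(0)

def one_rep(n=30, beta=0.5):
    """Simple regression; return the standardized slope and its naive SE."""
    x = rng.standard_normal(n)
    y = beta * x + rng.standard_normal(n)
    b = np.cov(x, y)[0, 1] / x.var(ddof=1)
    resid = y - y.mean() - b * (x - x.mean())
    se_b = np.sqrt(resid.var(ddof=2) / ((n - 1) * x.var(ddof=1)))
    b_std = b * x.std(ddof=1) / y.std(ddof=1)           # standardized coefficient
    se_naive = se_b * x.std(ddof=1) / y.std(ddof=1)     # textbook-style rescaling
    return b_std, se_naive

reps = np.array([one_rep() for _ in range(5000)])
print("empirical SD of standardized b:", reps[:, 0].std().round(4))
print("mean naive SE estimate:        ", reps[:, 1].mean().round(4))
```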
Multidimensional scaling has recently been enhanced so that data defined at only the nominal level of measurement can be analyzed. The efficacy of ALSCAL, an individual differences multidimensional scaling program which can analyze data defined at the nominal, ordinal, interval and ratio levels of measurement, is the subject of this paper. A Monte Carlo study is presented which indicates that (a) if we know the correct level of measurement then ALSCAL can be used to recover the metric information presumed to underlie the data; and that (b) if we do not know the correct level of measurement then ALSCAL can be used to determine the correct level and to recover the underlying metric structure. This study also indicates, however, that with nominal data ALSCAL is quite likely to obtain solutions which are not globally optimal, and that in these cases the recovery of metric structure is quite poor. A second study is presented which isolates the potential cause of these problems and forms the basis for a suggested modification of the ALSCAL algorithm which should reduce the frequency of locally optimal solutions.
Asymptotic distributions of the estimators of communalities are derived for the maximum likelihood method in factor analysis. It is shown that the common practice of equating the asymptotic standard error of the communality estimate to that of the unique variance estimate is correct for the standardized communality but not for the unstandardized communality. In a Monte Carlo simulation, the accuracy of the normal approximation to the distributions of the estimators is assessed when the sample size is 150 or 300.
We consider the problem of obtaining effective representations for the solutions of linear, vector-valued stochastic differential equations (SDEs) driven by non-Gaussian pure-jump Lévy processes, and we show how such representations lead to efficient simulation methods. The processes considered constitute a broad class of models that find application across the physical and biological sciences, mathematics, finance, and engineering. Motivated by important relevant problems in statistical inference, we derive new, generalised shot-noise simulation methods whenever a normal variance-mean (NVM) mixture representation exists for the driving Lévy process, including the generalised hyperbolic, normal-gamma, and normal tempered stable cases. Simple, explicit conditions are identified for the convergence of the residual of a truncated shot-noise representation to a Brownian motion in the case of the pure Lévy process, and to a Brownian-driven SDE in the case of the Lévy-driven SDE. These results provide Gaussian approximations to the small jumps of the process under the NVM representation. The resulting representations are of particular importance in state inference and parameter estimation for Lévy-driven SDE models, since the resulting conditionally Gaussian structures can be readily incorporated into latent variable inference methods such as Markov chain Monte Carlo, expectation-maximisation, and sequential Monte Carlo.
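As a minimal, illustrative sketch of this conditionally Gaussian shot-noise structure (one member of the NVM class, a normal-gamma/variance-gamma process, with a truncated Bondesson series for the gamma subordinator; the parameter names are mine, and the Gaussian correction for the truncated small jumps described in the paper is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

def normal_gamma_shot_noise(T=1.0, n_jumps=2000, gamma_=2.0, lam=1.0,
                            theta=0.1, sigma=0.5):
    """Truncated shot-noise sketch of a normal-gamma (variance-gamma)
    Levy process on [0, T]: jumps of a gamma subordinator (Bondesson's
    series) each carry a conditionally Gaussian value (NVM structure)."""
    # Gamma subordinator jumps: v_i = exp(-Gamma_i / (gamma*T)) * E_i / lam,
    # where Gamma_i are unit-rate Poisson epochs and E_i ~ Exp(1).
    epochs = np.cumsum(rng.exponential(size=n_jumps))
    v = np.exp(-epochs / (gamma_ * T)) * rng.exponential(size=n_jumps) / lam
    t = rng.uniform(0.0, T, size=n_jumps)            # jump times on [0, T]
    # Normal variance-mean mixture: drift theta and scale sigma per jump.
    x = theta * v + sigma * np.sqrt(v) * rng.standard_normal(n_jumps)
    order = np.argsort(t)
    return t[order], np.cumsum(x[order])

times, path = normal_gamma_shot_noise()
print("value at T:", path[-1].round(3))
```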
The miniaturized conical cones used for stereotactic radiosurgery (SRS) make it challenging to measure the dosimetric data needed for commissioning a treatment planning system. This study aims to validate the dosimetric characteristics of the conical cone collimators manufactured by Varian using the Monte Carlo (MC) simulation technique.
Methods & Material:
Percentage depth dose (PDD), tissue maximum ratio (TMR), lateral dose profiles (LDP) and output factors (OF) were measured for cones with diameters of 5 mm, 7·5 mm, 10 mm, 12·5 mm, 15 mm and 17·5 mm using an EDGE detector for a 6 MV flattening filter-free (FFF) beam from a Truebeam linac. Similarly, MC modelling of the linac for the 6 MV FFF beam and simulation of the conical cones were performed in PRIMO. Subsequently, the measured beam data were validated by comparing them with the results obtained from the MC simulation.
Results:
The measured and MC-simulated PDDs and TMRs agreed within 3% except for the 5 mm diameter cone, for which the deviations were substantially higher. With the 5 mm cone, the maximum deviations at depths of 10 cm and 20 cm and at the range of the 50% dose were 4·05%, 7·52% and 5·52% for PDD, and 4·04%, 7·03% and 5·23% for TMR, respectively. The measured LDPs acquired for all the cones showed close agreement with the MC LDPs except in the penumbra region around the 80% and 20% dose levels. Measured and MC full-widths at half maximum of the dose profiles agreed with the nominal cone size to within ±0·2 mm. Measured and MC OFs showed excellent agreement for cone sizes ≥10 mm; however, the deviation increased consistently as the cone size decreased.
Findings:
An MC model of the conical cones for SRS has been presented and validated. Very good agreement was found between the experimentally measured and MC-simulated data. The dosimetry dataset obtained in this study and validated against the MC model may be used to benchmark beam data measured for commissioning of SRS cone planning.
This paper presents a set of theoretical models that links a two-phase sequence of cooperative political integration and conflict to explore the reciprocal relationship between war and state formation. It compares equilibrium rates of state formation and conflict using a Monte Carlo simulation that generates comparative statics by altering the systemic distribution of ideology, population, tax rates, and war costs across polities. This approach supports three core findings. First, war-induced political integration is at least 2.5 times as likely to occur as integration to realize economic gains. Second, we identify mechanisms linking endogenous organizations to the likelihood of conflict in the system. For example, a greater domestic willingness to support public goods production facilitates the creation of buffer states that reduce the likelihood of a unique class of trilateral wars. These results suggest that the development of the modern administrative state has helped to foster peace. Third, we explore how modelling assumptions setting the number of actors in a strategic context can shape conclusions about war and state formation. We find that dyadic modelling restrictions tend to underestimate the likelihood of cooperative political integration and overestimate the likelihood of war relative to a triadic modelling context.
This chapter elaborates on the calibration and validation procedures for the model. First, we describe our calibration strategy in which a customised optimisation algorithm makes use of a multi-objective function, preventing the loss of indicator-specific error information. Second, we externally validate our model by replicating two well-known statistical patterns: (1) the skewed distribution of budgetary changes and (2) the negative relationship between development and corruption. Third, we internally validate the model by showing that public servants who receive more positive spillovers tend to be less efficient. Fourth, we analyse the statistical behaviour of the model through different tests: validity of synthetic counterfactuals, parameter recovery, overfitting, and time equivalence. Finally, we make a brief reference to the literature on estimating SDG networks.
We report a combined experimental and theoretical study of uranyl complexes that form on the interlayer siloxane surfaces of montmorillonite. We also consider the effect of isomorphic substitution on surface complexation, since our montmorillonite sample contains charge sites in both the octahedral and tetrahedral sheets. Results are given for the two-layer hydrate with a layer spacing of 14.58 Å. Polarization-dependent X-ray absorption fine structure spectra are nearly invariant with the incident angle, indicating that the uranyl ions are oriented neither perpendicular nor parallel to the basal plane of montmorillonite. The equilibrated geometry from Monte Carlo simulations suggests that uranyl ions form outer-sphere surface complexes with the [O=U=O]2+ axis tilted at an angle of ~45° to the surface normal.