This chapter draws on the original cross-national dataset of counterrevolutions to examine global patterns and historical trends in counterrevolutionary emergence and success. It begins with a series of statistical analyses that support core elements of the theory. Counterrevolutions are much less likely to topple radical-violent revolutions than moderate-unarmed ones – a finding that holds across two different measures of these types. Subsequent analyses shed light on the mechanisms behind this relationship: loyal armies and powerful foreign sponsors are key to defeating counterrevolution, whereas robust parties matter less. Next, the chapter shows that counterrevolutions are most likely to emerge following revolutions with medium levels of violence, which leave the old regime with both the capacity and interest to launch a challenge. Further, there is little support for four alternative explanations, particularly when it comes to counterrevolutionary success. The chapter then evaluates how key events during the post-revolutionary transition (like land reforms and elections) affect the likelihood of counterrevolution. It concludes with an exploration of the decline in counterrevolution since 1900 (followed by an uptick in the last decade), which it traces to a combination of the changing nature of revolution and shifts in the distribution of global power.
This chapter analyzes the popular dimensions of Egypt’s 2013 counterrevolution, using an original dataset of protests during the post-revolutionary transition. It shows that Egypt’s revolutionaries were unable to consolidate the social support of the revolution, and that this failure allowed counterrevolutionaries to channel broad disaffection with revolutionary rule into a popular movement for restoration. The dataset covers the final eighteen months of the transition and includes approximately 7,500 contentious events sourced from the major Arabic-language newspaper Al-Masry Al-Youm. These data reveal, first, the extent to which social mobilization persisted after the end of the eighteen-day uprising. The transition period was awash with discontent and unrest, much of it over nonpolitical issues like the deterioration of the economy, infrastructure problems, and unmet labor demands. Second, statistical analyses show that this discontent came to be directed against Mohamed Morsi’s government. The earliest and most persistent anti-Morsi protests emerged in places where the population had long been highly mobilized over socio-economic grievances. Later, they also began to emerge in places with large numbers of old regime supporters. Ultimately, these two groups – discontented Egyptians and committed counterrevolutionaries – came together to provide the social base for the movement that swept the military back to power.
This appendix provides a comprehensive overview of statistical network models, building from fundamental concepts to advanced frameworks. It begins with essential mathematical background and probability theory, then introduces the foundations of random network models. The appendix covers a range of models, including Erdős–Rényi, stochastic block models (both a priori and a posteriori), random dot product graphs, and their generalizations. Each model is presented with its parameters, generative process, probability calculations, and equivalence classes. The appendix also explores degree-corrected variants and the Inhomogeneous Erdős–Rényi model. Throughout, we emphasize the relationships between models and their increasing complexity, providing a solid theoretical foundation for understanding network structures and dynamics.
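As a minimal illustrative sketch (not code from the appendix itself; the function name and interface are assumptions), the simplest model covered, the Erdős–Rényi G(n, p) model, can be sampled by flipping an independent coin for each possible edge:

```python
import random

def sample_er_graph(n, p, seed=None):
    """Sample an Erdős–Rényi G(n, p) graph: each of the n*(n-1)/2
    possible undirected edges is included independently with probability p."""
    rng = random.Random(seed)
    return [(i, j)
            for i in range(n)
            for j in range(i + 1, n)
            if rng.random() < p]

edges = sample_er_graph(10, 0.3, seed=42)
```

The generative process above makes the model's single parameter `p` explicit: every edge probability is homogeneous, which is exactly what the stochastic block and inhomogeneous variants later relax.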
This chapter lays the foundation of probability theory, which has a central role in statistical mechanics. It starts the exposition with Kolmogorov’s axioms of probability theory and develops the vocabulary through example cases. Some time is spent on sigma algebras and the role they play in probability theory and, more specifically, in properly defining random variables on the reals. In particular, the popular notion of ‘the probability for a real variable to take on a single value’ is critically analysed and contextualised. Indeed, there are situations in statistical mechanics where some mechanical variables on the reals do get a non-zero probability to take on a single value. Moments and cumulants are introduced, as well as the method of generating functions, which prepares the ground for their use as efficacious tools in statistical mechanics. Finally, Jaynes’s least-biased distribution principle is introduced in order to obtain a priori probabilities given some constraints imposed on the system.
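For concreteness, the moment and cumulant generating functions the chapter refers to can be written as follows (these are the standard definitions; the notation is an assumption, not taken from the chapter):

```latex
M_X(t) = \bigl\langle e^{tX} \bigr\rangle = \sum_{n=0}^{\infty} \frac{t^n}{n!}\,\langle X^n \rangle,
\qquad
K_X(t) = \ln M_X(t) = \sum_{n=1}^{\infty} \frac{t^n}{n!}\,\kappa_n ,
```

so the $n$th moment $\langle X^n \rangle$ and the $n$th cumulant $\kappa_n$ are recovered by differentiating $M_X$ and $K_X$, respectively, $n$ times at $t = 0$.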
Edited by
Rebecca Leslie, Royal United Hospitals NHS Foundation Trust, Bath; Emily Johnson, Worcester Acute Hospitals NHS Trust, Worcester; Alex Goodwin, Royal United Hospitals NHS Foundation Trust, Bath; Samuel Nava, Severn Deanery, Bristol
This chapter presents material on statistics, from the basic principles of the classification of data to the normal distribution. We go on to discuss the null hypothesis, types of error and the different methods of statistical analysis appropriate to the type of data set presented.
Now in its second edition, this is an invaluable manual for teaching and learning variation analysis, the quantitative study of linguistic variation and change. Written by a leading scholar in the field with over thirty years of experience, it provides an insider's view of the methodology through practical, 'hands-on' advice, including straightforward instructions for conducting analyses using the R programming language, the new gold standard for analysis. It leads readers through each phase of a research study based on data gathered in sociocultural contexts, beginning with the selection and sampling of a data source, to hints on successful project design, interview techniques, data management, analysis and interpretation, with systematic procedures provided at each step of the process. This edition has been fully updated, with new insights and explanations in line with recent discoveries in the field, making it essential reading for anyone embarking on their own sociolinguistic research project.
Starting with Georg Lukács’s complaint that naturalist works are incoherent because of their superfluous detail, this chapter argues that incoherence was in fact an effective literary aesthetic for seeing and managing details at a large scale. The large scale that was particularly salient to American naturalism was the global one of American imperial expansion during the Progressive Era, and the chapter argues that American empire is a useful framework for understanding naturalism as a literary movement because it brought together an investment in incoherent form with statistical and biopolitical technologies that, like naturalist works, proliferated details. Naturalism in this American context isn’t a failed version of realism. Nor does its incoherence register an inability to represent empire as an emergent global order. Instead, naturalism developed the realist project with a set of conventions that powerfully (and problematically) expressed the form of an emerging and efficacious large-scale global order.
Highly original and insightful, Billig and Marinho's book investigates how politicians misuse official statistics. Setting this problem in its historical context – and offering vivid case studies of Donald Trump, Boris Johnson and Gérald Darmanin – the authors demonstrate that the manipulation of statistics involves the misuse of words as well as the misuse of numbers. Most importantly, the authors show that politicians will manipulate official statisticians to produce politically convenient, but statistically inappropriate, numbers. Another unique part of the book is that the authors are not content with analysing how statistics are manipulated, but they also rigorously analyse the efforts of statistical agencies in France and Britain to combat such manipulation. The chapters herald unsung heroes who operate largely 'behind the scenes' to expose and oppose the corruption of statistics. An indispensable read for anyone concerned with the intersection of power and data.
The discovery of anaesthesia transformed the human condition, and unplanned awareness returns a patient to the nightmare that was surgery before anaesthesia and effective analgesia. Significant advances in the pharmacology and technology of anaesthesia have still not brought reliable means of monitoring its depth much closer, although because awareness is such a serious complication, considerable research effort has been dedicated to the search for methods of detection. Some of these remain research tools or are not yet in widespread use, but you should have some idea about which of them may in due course find their way into clinical practice. Most current interest centres around bispectral index (BIS) monitoring, with recommendations both from the Association of Anaesthetists and from NICE, which are summarised in this section, and it is likely that the oral will focus more on BIS than on the other technologies.
When using machine learning to model environmental systems, it is often a model’s ability to predict extreme behaviors that yields the highest practical value to policy makers. However, most existing error metrics used to evaluate the performance of environmental machine learning models weight error equally across test data. Thus, routine performance is prioritized over a model’s ability to robustly quantify extreme behaviors. In this work, we present a new error metric, termed Reflective Error, which quantifies the degree to which model error is distributed around the extremes, in contrast to existing model evaluation methods that aggregate error over all events. The suitability of our proposed metric is demonstrated on a real-world hydrological modeling problem, where extreme values are of particular concern.
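The abstract does not give the formula for Reflective Error, so the sketch below is a hypothetical extreme-weighted error metric in the same spirit, not the authors' definition: it down-weights squared errors on routine observations relative to those in the upper tail of the observed values. The function name, the tail quantile `q`, and the weighting scheme are all assumptions.

```python
import numpy as np

def extreme_weighted_error(y_true, y_pred, q=0.95):
    """Hypothetical extreme-focused error metric (NOT the paper's
    Reflective Error): weight squared errors so that observations in
    the upper tail of y_true dominate the average."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    threshold = np.quantile(y_true, q)                  # tail cutoff
    weights = np.where(y_true >= threshold, 1.0, 0.1)   # emphasize extremes
    return float(np.average((y_true - y_pred) ** 2, weights=weights))
```

Under this weighting, a model that is accurate on routine events but misses the extremes scores far worse than an ordinary mean squared error would suggest, which is the behavior the abstract motivates.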
This chapter provides an introduction to the main themes of the book and explains why this is a book about the misuse of language just as much as the misuse of numbers. Statistics are never just numbers, for the numbers have to be labelled. Because politicians are so distrusted at present, people expect politicians to manipulate statistics. This opening chapter introduces readers to a number of excellent recent books about statistics, most of which have been addressed to non-specialist readers. The topic of statistics is a broad one and can sustain a variety of books with different slants. Unlike other books on statistics, this one looks directly at manipulation and how it occurs. A recurring theme of the book is that the political manipulation of statistics is not typically a single act; rather, politicians will often manipulate their statisticians to manipulate the official statistics on their behalf. This opening chapter also comments on the book’s style of writing. The authors aim to write in a clear and non-technical way, and to give special emphasis to the ways that politicians manipulate language when manipulating numbers.
A common and unfortunate error in statistical analysis is the failure to account for dependencies in the data. In many studies, there is a set of individual participants or experimental objects where two observations are made on each individual or object. This leads to a natural pairing of data. This editorial discusses common situations where paired data arises and gives guidance on selecting the correct analysis plan to avoid statistical errors.
The Japanese mass media has been reporting a rising number of foreign workers in Japan based on data published by the Ministry of Health, Labour and Welfare. The Ministry's data, however, is neither derived from a comprehensive database of foreign workers, nor is it a credible source of information about the foreign workforce. This paper explains how the ministry arrives at its figures, why the reported rapid increase over the past decade is incorrect, and pinpoints flaws in its data collection process. Finally, we suggest a new approach for estimating the total number of foreign workers in Japan at a time when the Japanese government has proposed a significant increase in the number of foreign workers.
Stefanie Markovits’ chapter thinks about counting and accountability, and how they inform literary representations of the military man, one of the most visible of war’s outcomes in mid-Victorian Britain. Markovits reflects on this period as one which saw ‘the rise of statistics as a discipline of social science and a method of statecraft in Britain’, and with it the growing need for accountability in public affairs. The figure of the soldier is both hero and statistic, individual and number, in a period where fiction, philosophy, and popular commentary were preoccupied with how individuals realised their fully individualised potential. The soldier’s cultural and political potency is enabled because his being is aligned with the numbers that account for him. In the work of Tennyson, Harriet Martineau, and Dickens we see how ‘the mid-century soldier becomes such a potent figure precisely because his “type” aligns so closely with numbers’.
This enthusiastic introduction to the fundamentals of information theory builds from classical Shannon theory through to modern applications in statistical learning, equipping students with a uniquely well-rounded and rigorous foundation for further study. It introduces core topics such as data compression, channel coding, and rate-distortion theory using a unique finite block-length approach. Over 210 end-of-part exercises and numerous examples introduce students to contemporary applications in statistics, machine learning and modern communication theory. The textbook presents information-theoretic methods with applications in statistical learning and computer science, such as f-divergences, PAC-Bayes and the variational principle, Kolmogorov's metric entropy, strong data processing inequalities, and entropic upper bounds for statistical estimation. Accompanied by a solutions manual for instructors, and additional standalone chapters on more specialized topics in information theory, this is the ideal introductory textbook for senior undergraduate and graduate students in electrical engineering, statistics, and computer science.
Three Stanford-educated Chicanos took the stand in support of MAS, and these witnesses were central in Judge Tashima’s final ruling. Specifically, they detailed in a scholarly way the academic integrity of the department and the efficacy of taking the classes, and also demonstrated how state representatives used racist “code words” in cementing their opposition to the program. We detail their testimony and how the state desperately tried to trip them up.
This chapter links the creation of MAS to the historical creation of Ethnic Studies – setting the record straight on the nature of this type of education amidst massive amounts of local and national misinformation. It details what MAS was and the effects of the program on student academic success, while examining how critically engaged, educated Mexican American students came to be seen as such a “threat” to the state.
In this article, we present the findings of an oral history project on the past, present, and future of psychometrics, as obtained through structured interviews with twenty past Psychometric Society presidents. Perspectives on how psychometrics should be practiced vary strongly. Some presidents are psychology-oriented, whereas others have a more mathematical or statistical approach. The originally strong relationship between psychometrics and psychology has weakened, and contemporary psychometrics has become a diverse and multifaceted discipline. The presidents are confident psychometrics will continue to be relevant but believe it needs to become better at selling its strong points to relevant research areas. We recommend that psychometrics cherish its plurality and make its goals and priorities explicit.
Focusing on methods for data that are ordered in time, this textbook provides a comprehensive guide to analyzing time series data using modern techniques from data science. It is specifically tailored to economics and finance applications, aiming to provide students with rigorous training. Chapters cover Bayesian approaches, nonparametric smoothing methods, machine learning, and continuous time econometrics. Theoretical and empirical exercises, concise summaries, bolded key terms, and illustrative examples are included throughout to reinforce key concepts and bolster understanding. Ancillary materials include an instructor's manual with solutions and additional exercises, PowerPoint lecture slides, and datasets. With its clear and accessible style, this textbook is an essential tool for advanced undergraduate and graduate students in economics, finance, and statistics.