Some interesting papers, from all over the map....
Simplicity and reality in computational modeling of politics
Computational & Mathematical Organization Theory, March 2009, Pages 26-46
Modeling a polity based on viable scientific concepts and theoretical understanding has been a challenge in computational social science and social simulation in general, and in political science in particular. This paper presents a computational model of a polity (political system) in progressive versions, from simple to more realistic. The model, called SimPol to highlight the fundamental structures and processes of politics in a generic society, is developed by combining object-oriented modeling (OOM), the Unified Modeling Language (UML), and Lakatos' methodology of scientific research programs. SimPol demonstrates that computational models of entire political systems are methodologically feasible and scientifically viable; they can also build on and progress beyond previous theory and research to advance our understanding of how polities operate across a variety of domains (simple vs. complex) and levels of analysis (local, national, international). Both simple and realistic models are necessary, for theoretical and empirical purposes, respectively.
Presidential and Congressional Vote-Share Equations
American Journal of Political Science, January 2009, Pages 55-72
Three vote-share equations are estimated and analyzed in this article: one for presidential elections, one for on-term House elections, and one for midterm House elections. The sample period is 1916-2006. Considering the three equations together allows one to test whether the same economic variables affect each and to examine various serial correlation and coattail possibilities. The main conclusions are (1) there is strong evidence that the economy affects all three vote shares, and in remarkably similar ways; (2) there is no evidence of any presidential coattail effects on the on-term House elections; (3) there is positive serial correlation in the House vote, which likely reflects a positive incumbency effect for elected representatives; and (4) the presidential vote share has a negative effect on the next midterm House vote share, which is likely explained by a balance argument.
Dynamics of the presidential veto: A computational analysis
John Duggan, Tasos Kalandrakis & Vikram Manjunath
Mathematical and Computer Modelling, November 2008, Pages 1570-1589
We specify and compute equilibria of a dynamic policy-making game between a president and a legislature under institutional rules that emulate those of the US Constitution. Policies are assumed to lie in a two-dimensional space in which one issue dimension captures systemic differences in partisan preferences, while the other summarizes non-partisan attributes of policy. In any period, the policy choices of politicians are influenced by the position of the status quo policy in this space, with the current policy outcome determining the location of the status quo in the next period. Partisan control of the legislature and presidency changes probabilistically over time. We find that politicians strategically compromise their ideal policy in equilibrium, and that the degree of compromise increases when the opposition party is more likely to take control of the legislature in the next period, while politicians become relatively more extreme when the opposition party is more likely to control the presidency. We measure gridlock by (the inverse of) the expected distance of enacted policies from the status quo in the long run, and we show that both gridlock and the long-run welfare of a representative voter are maximized when government is divided without a supermajority in the legislature. Under unified government, we find that the endogeneity of the status quo leads to a non-monotonic effect of the size of the legislative majority on gridlock; surprisingly, under unified government, gridlock is higher when the party in control of the legislature has a supermajority than when it has a bare majority. Furthermore, a relatively larger component of policy change occurs in the non-partisan policy dimension when a supermajority controls the legislature. We conduct constitutional experiments, and we find that voter welfare is minimized when the veto override provision is abolished and maximized when the presidential veto is abolished.
A Computational Model of the Citizen as Motivated Reasoner: Modeling the Dynamics of the 2000 Presidential Election
Sung-youn Kim, Charles Taber & Milton Lodge
Stony Brook University Working Paper, October 2008
We develop a computational model of political attitudes and beliefs that incorporates contemporary theories of social and cognitive psychology with well-documented findings from electoral behavior. We compare this model, John Q. Public (JQP), to a Bayesian learning model via computer simulations of empirically observed changes in candidate evaluations over the course of the 2000 presidential election. In these simulations, JQP clearly outperforms the Bayesian learning model. In particular, JQP reproduces responsiveness, persistence, and polarization of political attitudes, while the Bayesian model has difficulty accounting for persistence and polarization. We demonstrate that motivated reasoning - the discounting of information that challenges prior attitudes coupled with the uncritical acceptance of attitude-consistent information - is the reason our model can better account for persistence in candidate evaluations over the course of the campaign. Two implications follow from the comparison of models: (1) motivated reasoning explains the responsiveness, persistence, and polarization of political attitudes, and (2) any learning model that does not incorporate motivated reasoning will have difficulty accounting for the persistence and polarization of political attitudes.
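The persistence-versus-updating contrast the authors describe can be illustrated with a toy simulation (my own stylized sketch, not the JQP model or its parameters): both agents see the same mixed stream of campaign signals, but the motivated reasoner heavily discounts attitude-inconsistent evidence.

```python
# Toy contrast between a simple belief-averaging updater (a stand-in for
# Bayesian learning, not a real posterior) and a "motivated reasoner" who
# heavily discounts evidence that contradicts the prior attitude.

def averaging_update(belief, signal, strength=0.2):
    # Move the evaluation toward each signal in proportion to the surprise.
    return belief + strength * (signal - belief)

def motivated_update(belief, signal, strength=0.2, discount=0.1):
    # Attitude-inconsistent signals (opposite sign to the current belief)
    # get only a fraction of their normal weight.
    weight = strength if signal * belief >= 0 else strength * discount
    return belief + weight * (signal - belief)

# A voter with a positive prior (+0.5) sees a perfectly mixed stream of
# favorable (+1) and unfavorable (-1) campaign signals.
signals = [1, -1] * 20
b = m = 0.5
for s in signals:
    b = averaging_update(b, s)
    m = motivated_update(m, s)

print(f"averaging updater:  {b:+.2f}")  # pulled near zero by mixed evidence
print(f"motivated reasoner: {m:+.2f}")  # persists, and even strengthens
```

Even though the evidence is perfectly balanced, the motivated reasoner's evaluation ends up more positive than its prior, which is the persistence-and-polarization pattern the abstract attributes to motivated reasoning.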
Modeling a Presidential Prediction Market
Keith Chen, Jonathan Ingersoll & Edward Kaplan
Management Science, August 2008, Pages 1381-1394
Prediction markets now cover many important political events. The 2004 presidential election featured an active online prediction market at Intrade.com, where securities addressing many different election-related outcomes were traded. Using the 2004 data from this market, we examined three alternative models for these security prices, with special focus on the electoral college rules that govern U.S. presidential elections, to see which models are more (or less) consistent with the data. The data reveal dependencies in the evolution of the security prices across states over time. We show that a simple diffusion model provides a good description of the overall probability distribution of electoral college votes, and an even simpler ranking model provides excellent predictions of the probability of winning the presidency. Ignoring dependencies in the evolution of security prices across states leads to considerable underestimation of the variance of the number of electoral college votes received by a candidate, which in turn leads to overconfidence in predicting whether that candidate will win the election. Overall, the security prices in the Intrade presidential election prediction market appear jointly consistent with probability models that satisfy the rules of the electoral college.
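The variance-underestimation point is easy to see in a small Monte Carlo (a hypothetical toy map of my own, not the paper's model or the Intrade data): letting state outcomes share a common national shock fattens the distribution of electoral-vote totals relative to treating states as independent.

```python
import random

# Hypothetical toy map (not the paper's model or Intrade data): 10 states
# worth 10 electoral votes each, each leaning 55% toward candidate A.
# Compare the spread of A's electoral-vote total when state outcomes are
# independent versus when they share a common national shock.

random.seed(0)
N_STATES, EV_PER_STATE, LEAN = 10, 10, 0.55

def simulate(correlated, trials=20_000):
    totals = []
    for _ in range(trials):
        shock = random.gauss(0, 0.1) if correlated else 0.0
        p = min(max(LEAN + shock, 0.0), 1.0)  # shared national swing
        ev = sum(EV_PER_STATE for _ in range(N_STATES) if random.random() < p)
        totals.append(ev)
    mean = sum(totals) / trials
    var = sum((t - mean) ** 2 for t in totals) / trials
    return mean, var

m_ind, v_ind = simulate(correlated=False)
m_cor, v_cor = simulate(correlated=True)
print(f"independent: mean {m_ind:.1f} EV, variance {v_ind:.1f}")
print(f"correlated:  mean {m_cor:.1f} EV, variance {v_cor:.1f}")
# The means are nearly identical, but the correlated variance is much
# larger, so an independence assumption overstates confidence in the winner.
```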
Optimal Gerrymandering in a Competitive Environment
MIT Working Paper, December 2008
We analyze a model of optimal gerrymandering in which two parties receive a noisy signal about voter preferences from a continuous distribution and simultaneously design districts in different states, with the median voter in each district determining the winner. The optimal gerrymander pairs "slices" of extreme right-wing voters with "slices" of left-wing voters, as in Friedman and Holden (2008). We also show that, as one party controls the redistricting process in more states, that party designs districts so as to spread out the distribution of district median voters from a given state.
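The "matching slices" construction can be sketched as follows (a stylized illustration under toy assumptions of my own, a uniform electorate and a fixed 55% designer share per district, not the paper's actual optimization):

```python
# Stylized "matching slices" sketch: a right-wing designer pairs slices
# from the right tail (its most extreme supporters) with slices from the
# left tail (the most extreme opposition voters), one pair per district.

def matching_slices(voters, n_districts, designer_share=0.55):
    ordered = sorted(voters)              # most left-wing ... most right-wing
    size = len(ordered) // n_districts
    n_right = int(size * designer_share)  # designer's slice per district
    n_left = size - n_right              # opposition slice per district
    districts, lo, hi = [], 0, len(ordered)
    for _ in range(n_districts):
        right_slice = ordered[hi - n_right:hi]
        left_slice = ordered[lo:lo + n_left]
        lo, hi = lo + n_left, hi - n_right
        districts.append(sorted(left_slice + right_slice))
    return districts

def median(xs):
    return xs[len(xs) // 2]  # upper median of a sorted list

# 1,000 voters with ideal points uniform on [-1, 1); positive = right-wing.
voters = [i / 500 - 1 for i in range(1000)]
districts = matching_slices(voters, n_districts=10)
medians = [median(d) for d in districts]
print(sum(m > 0 for m in medians), "of 10 district medians favor the designer")
# → 9 of 10: the designer's 50% vote share is spread efficiently, while
# the opposition's most extreme voters are diluted across districts.
```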
A mathematical model of Athenian democracy
Social Choice and Welfare, December 2008, Pages 537-572
It is shown that the representative capacity of democratic institutions selected by lot (lottery), as practiced in Athens in 594-322 BC, is quite high. For this purpose, the People's Assembly, the Council of 500, the Committee of 50 with its President, juries, and magistrates are evaluated with indicators of popularity, universality, and goodness. Popularity is a spatial characteristic of representativeness: the average percentage of the population whose opinion is represented on a number of questions. Universality is a temporal aspect of representativeness: the frequency of cases (percentage of questions) in which the opinion of a majority is represented. Goodness is the specific representativeness, that is, the average group-represented-to-majority ratio. In particular, it is shown that the size of Athenian representative bodies selected by lot was adequate to guarantee their high representativeness. The background idea is the same as in Gallup polls of public opinion and in quality control based on limited random samples.
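The Gallup-poll analogy is easy to check numerically (my own back-of-the-envelope illustration with hypothetical numbers, not the paper's popularity, universality, or goodness indicators):

```python
import random

# If 55% of roughly 30,000 eligible citizens hold the majority view on a
# question, how often does a Council of 500 chosen by lot also show a
# majority for that view?

random.seed(1)
POPULATION, COUNCIL, MAJORITY_SHARE = 30_000, 500, 0.55
citizens = [1] * int(POPULATION * MAJORITY_SHARE)
citizens += [0] * (POPULATION - len(citizens))

def council_matches_majority():
    sample = random.sample(citizens, COUNCIL)  # selection by lot
    return sum(sample) > COUNCIL / 2

trials = 2_000
rate = sum(council_matches_majority() for _ in range(trials)) / trials
print(f"council majority matches the popular majority in {rate:.0%} of draws")
# A 500-member random sample is large enough that mismatches are rare,
# which is the sense in which selection by lot is highly representative.
```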
(Nod to KL)