Trends We Are Watching in 2026

The start of a new year is always a good time to assess how trends from the past may affect the future. Here are a few threads we are tracking and believe could shape survey research in 2026.

Polling accuracy will likely improve in the 2026 midterm elections relative to recent presidential election years, but polls risk a bias in favor of Republicans.

In the era of partisan polarization by educational attainment, the combination of the increased correlation between education and Republican vote preference on the one hand, and the longstanding correlations among education, turnout, and survey response on the other, has been one of several major factors driving underestimates of Republican vote share in high-turnout elections. While the magnitude of the polling error in 2024 was smaller than in 2020 or 2016, a third consecutive underestimate of the Trump vote has given many in the survey research community substantial concern.

However, over that same period, polling in midterms has (on average) performed much better. Polling for House, Senate, and Governor races in recent midterm cycles was substantially more accurate than for the same kinds of contests in presidential years. One likely reason for this difference is that the low-turnout-propensity, low-response-propensity Trump supporters who failed to show up in pre-election polls in numbers proportional to their actual turnout are even less likely to vote in midterm elections, a pattern we anticipate will hold in 2026 based on special election results to date.

But to the extent that there is bias, which way will it lean? Based on the trends in the survey industry, we anticipate a small pro-Republican bias in the polling, for two reasons. 

First, retrospective vote weighting, in which respondents are weighted on the basis of their self-reported vote choice in the previous election so that the sample matches the results of that election, has become more common in US election polling. Both how to do such weighting and the wisdom of the approach remain matters of some debate within survey methodology circles, with some pollsters showing data that it improved their estimates and others showing that it would have made theirs worse.

But while there is debate about the effect on accuracy, there is less debate about the effect on bias. We predict that the spread of this practice will tend to make estimates more Republican, as was true in both examples above. The reason is the difficulty of knowing what reference levels to weight the data against. At first glance, it may seem straightforward: just use the prior election results. But the 2026 electorate is unlikely to match the 2024 electorate exactly, both because of the turnout patterns noted above and because voters age into and out of the electorate. Weighting a 2026 sample so that recalled 2024 vote matches the actual 2024 result therefore tends to rebuild the 2024 electorate, including marginal Trump voters who are comparatively unlikely to turn out in a midterm. Some pollsters have adopted a more sophisticated approach to retrospective vote weighting, building predictive models and weighting on those predictions rather than on raw self-reports, but this appears to be a less common version of the practice. Absent such modeling, retrospective vote weighting will tend to make estimates more Republican. The practice may make some individual results more accurate, but we believe that, in the aggregate, the use of recalled vote weighting after a Republican win will tend to generate a slight Republican lean.
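To make those mechanics concrete, here is a minimal sketch, in Python, of the simple version of the practice. All of the sample sizes, category labels, and target shares are illustrative numbers invented for this example, not estimates from any real survey.

```python
# Minimal sketch of simple retrospective (recalled) vote weighting.
# Every number and label below is illustrative, not real survey data.
import pandas as pd

# Hypothetical sample of 1,000 respondents: recalled 2024 vote and 2026 intention.
sample = pd.DataFrame({
    "recall_2024": ["Trump"] * 430 + ["Harris"] * 500 + ["Other/DNV"] * 70,
    "intent_2026": ["R"] * 400 + ["D"] * 30      # Trump recallers
                 + ["D"] * 470 + ["R"] * 30      # Harris recallers
                 + ["D"] * 40 + ["R"] * 30,      # Other / did not vote
})

# Reference shares for recalled 2024 vote, e.g. taken from the certified result
# (again, illustrative values).
targets = {"Trump": 0.48, "Harris": 0.47, "Other/DNV": 0.05}

# Weight = target share / observed share within each recall category.
observed = sample["recall_2024"].value_counts(normalize=True)
sample["weight"] = sample["recall_2024"].map(lambda g: targets[g] / observed[g])

# Compare unweighted vs. weighted 2026 vote intention.
print(sample["intent_2026"].value_counts(normalize=True))
print(sample.groupby("intent_2026")["weight"].sum() / sample["weight"].sum())
```

Because this hypothetical sample "under-recalls" the Trump vote relative to the reference result, the weighting pulls the 2026 estimate toward Republicans; whether that is a correction or an over-correction depends entirely on how closely the 2026 electorate resembles the 2024 one.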

An additional reason to expect a slight Republican lean is less methodological than sociological. Pollsters of all types, partisan and non-partisan alike, tend to want to be correct. Take, as an example, this polling post-mortem from the Democratic-aligned pollster Data for Progress. There is a lot of flexibility in how to weight survey data, in ways that can meaningfully shift estimates, as a 2016 experiment from the New York Times showed. We anticipate that, in the aggregate, pollsters will tend to over-correct for their (on average) slight Democratic bias in 2024 and produce a slight Republican bias as a result.

The 2025 off-cycle polling already showed some of these patterns in effect. While a handful of polls were relatively close in the Virginia Governor’s race, polling averages dramatically underestimated support for the Democrat in other Virginia statewide races, as well as in the New Jersey gubernatorial contest and (to a lesser extent) the two-way vote share for Mamdani over Cuomo in the New York City mayoral election. And some of the pollsters with a pro-Republican house effect that produced lower error in 2024, like AtlasIntel, which uses an online river sampling method, produced some of the largest errors in the 2025 elections. We anticipate that misses in the somewhat higher-turnout contests of 2026 will be smaller in magnitude, on average, than in 2025, but in the same direction.

Data source: UVa Center for Politics

That said, we also anticipate that the Trump White House and its allies in the media will systematically attack polling and research as “fake news” as their polling numbers get even worse and predictions of losses mount, all while claiming that they are more popular than ever.

Synthetic respondents will have a major impact on the survey industry in several ways, mostly for the worse but possibly with positive secondary effects.

We have written before about synthetic samples – that is, simulated respondent data produced by LLMs answering questions after being fed personas of particular respondents – and why we view such response predictions as poor substitutes for real research, particularly when posing questions that fall outside the model’s training data. However, those limitations have not stopped new firms from offering such simulated responses for market research or even political research. Use of these tools has just become even easier, with Qualtrics now offering synthetic panels easily accessible through its survey software. And as Sean Westwood’s recent article has pointed out, LLM-powered bots are increasingly capable of evading some standard practices for identifying them, raising the concern that online opt-in panels will soon be filled with bots as well.

We see three ways in which such LLM-powered artificial respondents will change (and are already changing) the survey industry. First, we anticipate that the widespread availability of LLM-powered respondent bots will create a bifurcation in the online panel industry. On one hand, more firms will offer (and are offering now) synthetic respondents as a service, because it is so cheap and because some customers either do not understand or do not care about the difference. At the same time, those who do understand and care about the difference will put increased pressure on panel providers to ensure that their respondents are not LLMs, through strategies like IP-address verification, identity verification (such as matching to voter files), open-ended responses, probability recruitment using postal mail or text messages, and even “proof of life” verification by live video. Which is to say that online panels will not die in 2026, but they will become more expensive as the incentives offered to respondents increase in proportion to the work they are asked to perform.

Second, it strikes us as very plausible that there will be a major scandal this coming year in which someone (probably a relatively new entrant into the marketplace) gets caught passing off synthetic samples generated by LLMs as actual survey responses. Such scandals existed even before the advent of large language models, and with LLMs making the process of faking data easier, we are afraid they will be used to commit intentional fraud. To be clear, we do think this kind of fraud can be caught. LLMs have a number of tells, such as an inability to generate plausible data for out-of-sample questions (topics that haven’t been polled before and do not exist in the training data), linguistic tells in open-ended responses, and an unwillingness to answer “don’t know.”
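As a purely illustrative sketch of how such tells might be screened for, and not a description of any production fraud-detection system, one could compute simple batch-level statistics like these (the column names and thresholds below are assumptions made up for this example):

```python
# Toy screening heuristics for two of the tells described above:
# (1) an implausibly low rate of "don't know" answers, and
# (2) open-ended answers that are suspiciously uniform.
# Column names and thresholds are illustrative assumptions, not a real detector.
import pandas as pd

def flag_suspicious_batches(df: pd.DataFrame,
                            min_dk_rate: float = 0.02,
                            max_dupe_rate: float = 0.30) -> pd.DataFrame:
    """Expects one row per respondent with columns: 'batch', 'dont_know_count',
    'items_answered', and 'open_end' (a free-text answer)."""
    by_batch = df.groupby("batch").apply(lambda g: pd.Series({
        # Share of closed-ended answers in the batch that were "don't know"
        "dk_rate": g["dont_know_count"].sum() / g["items_answered"].sum(),
        # Share of open-ended answers that repeat another answer verbatim
        # after trivial normalization (lowercase, stripped whitespace)
        "dupe_rate": g["open_end"].str.lower().str.strip().duplicated().mean(),
    }))
    by_batch["flagged"] = (by_batch["dk_rate"] < min_dk_rate) | \
                          (by_batch["dupe_rate"] > max_dupe_rate)
    return by_batch
```

Real verification would of course combine many more signals, including the out-of-sample questions and identity checks discussed above.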

Finally, there may be secondary effects from this development that are beneficial, namely that firms not using LLMs will take more steps to demonstrate that responses come from human survey participants, in order to avoid falling into the trap described by Akerlof’s Market for Lemons or Gresham’s Law. That is, if consumers cannot distinguish good products from bad, sellers of good products will be driven out of the marketplace because sellers of bad products are willing to take a lower price. But the condition for this scenario is asymmetric information, where sellers know the true quality of the product and buyers do not. This dynamic will make it more important than ever for firms to take steps to prove the validity of their survey responses. And there are ways to do that, from voter-file-based or voter-file-matched samples, to more original questions outside the LLMs’ training data, to increased use of open-ended questions (and the sharing of those data), among other options. Ironically, LLMs may be helpful in this work, as there is evidence that, in some circumstances, they can streamline the process of categorizing open-text responses. That would be a particularly welcome development.
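For instance, a minimal sketch of that kind of open-end coding might look like the following, where `call_llm` is a placeholder for whatever model or API a team actually uses, and the categories and prompt wording are invented for this example:

```python
# Minimal sketch of LLM-assisted categorization of open-ended responses.
# `call_llm` is a placeholder callable (prompt -> text); categories are illustrative.
from typing import Callable, List

CATEGORIES = ["economy", "health care", "immigration", "other"]

def build_prompt(response: str) -> str:
    return (
        "Assign the survey answer below to exactly one of these categories: "
        + ", ".join(CATEGORIES)
        + ". Reply with the category name only.\n\nAnswer: " + response
    )

def categorize(responses: List[str], call_llm: Callable[[str], str]) -> List[str]:
    """Label each open-ended response, falling back to 'other' on unexpected output."""
    labels = []
    for r in responses:
        raw = call_llm(build_prompt(r)).strip().lower()
        labels.append(raw if raw in CATEGORIES else "other")
    return labels
```

Any such coding would still need human spot-checks, but even a rough first pass can make open-ended data far less labor-intensive to analyze and share.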

The use of multimodal fielding methods will continue to grow, particularly for sub-national surveys.

The last several cycles have seen growth in mixed-mode fielding. For instance, Pew Research has documented growth in mixed methods at the firm level, with 39% of firms using multiple fielding methods in 2022 (the most recent year in their analysis), compared to just 16% a decade earlier. We have also seen this pattern in our own analysis of pre-election polling in 2020 and 2024. Notably, as a proportion of all election surveys conducted in the month before the election, the use of opt-in panels as a standalone mode grew in nationwide polling from 2020 to 2024 but dropped for state-level polling, as did live-caller polling.

We anticipate that this trend will continue in 2026 for several reasons. First, while opt-in online panels may have enough respondents in some large states, like Texas, many of the most competitive statewide elections that generate the most polling interest are likely to be in medium-to-small states, such as North Carolina, Iowa, and Maine. 

Second, the factor driving pollsters away from live calls as a standalone method is not going away. The fundamental problem is that falling response rates mean more interviewer hours (and therefore more cost) are required for each completed interview. This reality makes productivity-multiplying tools, like text messaging, which produces order(s) of magnitude more responses per interviewer hour, attractive as alternatives or complements.
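To see why, here is a back-of-the-envelope illustration of that cost math; every number below is a made-up assumption for the sake of the example, not a measured rate from our fielding:

```python
# Back-of-the-envelope math: completes per staff hour, live calls vs. text-to-web.
# Every rate below is an illustrative assumption, not a measured value.

# Live calls: an interviewer places dials, a fraction connect, a fraction cooperate.
dials_per_hour = 60
contact_rate = 0.10           # share of dials answered by a person
cooperation_rate = 0.50       # share of contacts that finish the interview
call_completes_per_hour = dials_per_hour * contact_rate * cooperation_rate   # 3.0

# Text-to-web: one staffer can launch thousands of invitations per hour,
# and a small fraction click through and complete online.
texts_per_hour = 5_000
text_completion_rate = 0.005  # share of texts that yield a finished interview
text_completes_per_hour = texts_per_hour * text_completion_rate              # 25.0

print(call_completes_per_hour, text_completes_per_hour)
# If phone response rates keep falling, the interviewer hours (and cost) per
# completed call rise roughly in inverse proportion, which is what pushes
# firms toward text-first or mixed-mode designs.
```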

Finally, as our own research has shown, multimodal research can produce superior outcomes to monomodal research. In a study of the 2023 Kentucky Governor’s race, we found that while text-to-web on its own outperformed live calls, accuracy increased further when phones were used as a supplement to texting to backfill hard-to-interview strata. And in our three-state experiment in 2024, we found that while texting produced more accurate and less biased estimates than samples from an opt-in panel reseller, we produced more accurate estimates from the opt-in online samples when we used the probability-sampled text-to-web interviews to inform the weighting of the opt-in samples.
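As a rough illustration of that last idea, here is a minimal sketch, under simplifying assumptions, of using a probability-sampled survey to set weighting targets for an opt-in sample. The column names and the single weighting variable are hypothetical, and a real application would typically rake across several dimensions rather than post-stratifying on one, but the basic logic is this:

```python
# Sketch: use a probability-sampled (e.g. text-to-web) survey to derive weighting
# targets, then post-stratify an opt-in online sample to those targets.
# Column names and the single weighting variable are illustrative assumptions.
import pandas as pd

def targets_from_probability_sample(prob_df: pd.DataFrame, var: str) -> pd.Series:
    """Weighted distribution of `var` in the probability sample, treating its
    design weights as the best available estimate of the population."""
    return prob_df.groupby(var)["design_weight"].sum() / prob_df["design_weight"].sum()

def weight_opt_in_sample(opt_in_df: pd.DataFrame, targets: pd.Series, var: str) -> pd.DataFrame:
    """Post-stratify the opt-in sample so its distribution of `var` matches the targets."""
    observed = opt_in_df[var].value_counts(normalize=True)
    out = opt_in_df.copy()
    out["weight"] = out[var].map(lambda g: targets[g] / observed[g])
    return out

# Hypothetical usage:
# targets = targets_from_probability_sample(text_to_web_df, "party_id")
# weighted_opt_in = weight_opt_in_sample(opt_in_df, targets, "party_id")
```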
