
Activote: 2024 Most Valuable Pollster (MVP) Rankings


by Gotham Polling

During the 2024 election season, thousands of polls were published by well over a hundred different pollsters. Some polls got widespread attention: some through a collaboration with a media outlet (e.g. New York Times/Siena College), some because of the reputation of the pollster (e.g. Quinnipiac), and some because of an outlier result (e.g. Selzer & Co).

Many polls were discussed on X/Twitter, criticized from the left if the result seemed good for the right, or from the right if the result seemed good for the left. With all of that noise, it can be difficult for ordinary Americans to know which polls, and which pollsters, to trust.

Through our 2024 Most Valuable Pollster rankings, we aim to provide a “scorecard” that indicates which pollsters did well in 2024 and which did not, so that the 2024 results can be taken into account when interpreting polls in the next election cycle (2025/2026).

The gold standard of pollster ratings used to be FiveThirtyEight, later rebranded as ABC News/538. Unfortunately, Disney/ABC shut 538 down in March 2025 before updated pollster ratings could be published.

There are currently no high-quality alternatives available.

To fill the gap left by 538, ActiVote has developed the 2024 Most Valuable Pollster rankings with the goal of including all pollsters who actively participated in the 2024 election cycle.

MVP vs. Hall of Fame

There are essentially two ways to grade pollsters: one is to focus on the results of the last election cycle (2024), which is what we do in the 2024 MVP Rankings. Another is to look at the “life-time achievements” of pollsters (akin to the Hall of Fame in sports) which should include all polls published over the course of many election cycles.

It has been customary for 538 to include all polls from recent cycles, going back to at least 2016. The disadvantage of such an approach is that it tells us more about who used to be great (or not so great) at some point in time than about who is doing well right now.

We prefer to focus on the results of a single “election season”. Here is why:

  1. Each Election Season is Different. There is a significant difference between polling in a presidential year (2016, 2020, 2024) and in a midterm year (2014, 2018, 2022). Presidential polls (both state-wide and national popular vote) are between candidates that everyone has heard of, in an election with high turnout. In 2024 about two-thirds of all polls were presidential polls and only one-third were senate, gubernatorial or congressional polls. In a midterm election there is a larger diversity of polled races due to the absence of a presidential election, and the actual turnout is a bigger wildcard as it is typically closer to just 50% of the electorate. Thus, it is possible that some pollsters do relatively better in midterms, while others do relatively better in presidential years.
  2. Relevance. We believe that if we were evaluating 2024 in sports, we should not include Michael Jordan and Bill Russell (basketball), Babe Ruth and Hank Aaron (baseball), or Jack Nicklaus and Tiger Woods (golf). They are all-time greats, but they are not relevant for the 2024 season. Instead, we should see names like Nikola Jokic, Shohei Ohtani and Scottie Scheffler. Similarly, when evaluating an election season, we should look at how pollsters did in 2024 instead of blurring the picture by mixing in the results of many previous election cycles.

Therefore, in our MVP rankings we exclusively look at performance during the 2024 general election cycle, ignoring past cycles.  

Which Polls are included?

We have included all presidential polls, both nationally and state-wide (and district-wide for Nebraska and Maine), all senate polls, all gubernatorial polls and all congressional polls.

Then, we selected only polls that had their median field date in the last 30 days of the election cycle (Monday, October 7th through Tuesday, November 5th). The reason for this cutoff is that polls conducted (much) earlier may have correctly captured the electorate’s opinion at that point in time, but that opinion may differ from the opinion near election day. Specifically, it is possible that a pollster in August 2024, only weeks after Joe Biden dropped out, correctly captured that Kamala Harris led in the popular vote, while the same pollster correctly captured in late October that Donald Trump had retaken the lead in the popular vote. That pollster should not be penalized for publishing the August Harris lead based on the ultimate election result, which was Trump +1.47%.

The 30-day window is in line with windows used by 538 (they were planning to use 31 days for this cycle and have used 21 days in the past).
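As a minimal sketch of how such a selection could be implemented (the field names and the exact definition of “median field date” below are our own illustrative assumptions, not ActiVote’s actual pipeline):

```python
from datetime import date

# The last 30 days of the 2024 cycle, per the article:
# Monday, October 7 through Tuesday, November 5, 2024.
WINDOW_START = date(2024, 10, 7)
WINDOW_END = date(2024, 11, 5)

def median_field_date(field_start: date, field_end: date) -> date:
    """Midpoint of a poll's field period (assumed definition of 'median field date')."""
    return field_start + (field_end - field_start) // 2

def in_window(field_start: date, field_end: date) -> bool:
    """Keep a poll only if its median field date falls inside the 30-day window."""
    return WINDOW_START <= median_field_date(field_start, field_end) <= WINDOW_END

# A poll fielded Oct 1-10 (median Oct 5) is excluded; one fielded Oct 3-12 (median Oct 7) is kept.
print(in_window(date(2024, 10, 1), date(2024, 10, 10)))  # False
print(in_window(date(2024, 10, 3), date(2024, 10, 12)))  # True
```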

This gave us a total of 1,349 polls from 136 pollsters.

Please note that this scope is mostly aligned with the selection as previously made by 538, with the exception that we focus only on general election polls. Primary election polls typically have significantly higher error rates, and therefore pollsters who focus heavily on primary elections would be at a disadvantage compared to those who choose the safety of polling only general election races.

Some Races are Harder than Others

It is well known that some races are harder to poll than others. For instance, the presidential races in the 7 swing states (AZ, GA, MI, NC, NV, PA, WI) are easier to poll than the 8 gubernatorial races that were on the ballot in 2024 (IN, MO, MT, NC, NH, UT, VT, WA): the 494 presidential swing state polls had an average error of 2.59% across all pollsters, while the 48 gubernatorial polls had an average error of 4.77%. This is the case even though there was significant overlap between the organizations that executed these polls: 54% of the swing state polls and 77% of the gubernatorial polls were executed by 18 organizations that polled both types of races. Thus, if we simply averaged the overall error of all polls, organizations that lean more heavily into polling difficult races would be disadvantaged compared to those that stick to the “safe” presidential swing states.

To take those differences into account, we have split all polls into 6 categories of races. The following table shows the race categories, the number of pollsters who published at least 1 poll in that category, the total number of polls for that category, and the average error of all polls in that category.

[Table: the six race categories, with the number of pollsters, the number of polls, and the average polling error in each category]

The table shows that swing state polls were the most abundant (494) and the most accurate with an average error of just 2.59%. District polls (presidential poll for NE-1, NE-2, NE-3, ME-1 and ME-2, as well as any congressional district poll) were more scarce (78) and much less accurate with an average error of 5.52%. Thus, comparing a presidential swing state poll with a 4% error (not very good) with a congressional poll with a 4% error (much better!) would be comparing apples and oranges.

For that reason, in our methodology, we start by ranking each of the pollsters in each of the 6 categories listed above, before we combine those rankings to come to an overall ranking.

Comparing Rankings as Percentages

As a first step of creating a ranking for a particular category of races, we calculate the average error per pollster of the polls they published in that category and then sort all pollsters from 1 to N based on their average error. This leads to a ranking from 1 to 27 for the pollsters who published gubernatorial polls, and a ranking from 1 to 78 for the pollsters who produced senate polls.

In order to make these rankings comparable between categories, we translate them into percentages where the percentage means: “percentage of pollsters that are better than a given pollster”. Thus, 0% would be the perfect score in that no other pollster is better, and 100% would mean being the worst pollster in a category as everyone is better.

To see how we came to the formula for translating a ranking to a percentage (the lower the percentage, the better), consider the following example: suppose pollster A ranked 14th among the 27 gubernatorial pollsters, while pollster B ranked 14th among the 78 senate pollsters. 


Pollster A is in the middle of the field of gubernatorial pollsters, while pollster B is in the top fifth of the senate pollsters. Thus, 14th in one category is not the same as 14th in another category. Therefore, we would like to see a percentage for pollster A of around 50%, while for pollster B we would like to see something around 18%.

As a formula for the percentage associated with being ranked K out of N we use K/(N+1). Thus, 14th among 27 is 14/28 = 50%, while 14th among 78 is 14/79 = 18%.

Please note that with this formula, only with a (near) infinite number of pollsters could someone actually score 0% or 100%. For a category with fewer pollsters, the first and last will approach 0% and 100% but never entirely reach them. The formula also ensures that if a pollster were alone in a category, they would be ranked neither 0% nor 100%: they would get ½ = 50%, indicating that we really don’t know how good they were, as there is no comparison with others.
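A minimal sketch of this step, assuming the per-pollster average errors for one category are already computed (the function names and data layout are illustrative, not ActiVote’s code):

```python
def rank_percentage(rank: int, n_pollsters: int) -> float:
    """Percentage of pollsters that are better: K/(N+1). Lower is better."""
    return rank / (n_pollsters + 1)

def category_percentages(avg_errors: dict[str, float]) -> dict[str, float]:
    """Rank pollsters within one category by average error (lowest error = rank 1)
    and convert each rank K out of N into K/(N+1)."""
    ranked = sorted(avg_errors, key=avg_errors.get)
    n = len(ranked)
    return {name: rank_percentage(k + 1, n) for k, name in enumerate(ranked)}

# The article's examples: 14th of 27 -> 14/28 = 50%, 14th of 78 -> 14/79 ~ 18%.
print(round(rank_percentage(14, 27) * 100))  # 50
print(round(rank_percentage(14, 78) * 100))  # 18
```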

Calculating MVP over All Categories

Suppose pollster A is ranked 10% (good!) for their 5 presidential swing state polls and ranked 50% (middle of the pack) for their 1 national presidential poll. Pollster B is ranked 5% (excellent!) for their single national presidential poll and 40% (slightly better than average) for their 5 senate polls. The question is now which pollster has done better.


If we simply averaged the category performances, then for pollster A we would calculate the average of 10% and 50%, which equals 30%, and for pollster B the average of 5% and 40%, which equals 22.5%, and we would incorrectly conclude that B has done better.

However, A had 5 polls that were good on average (10% score) and only 1 that was middle of the pack (50% score). B had one excellent poll (5% score) and 5 that were only slightly better than average (40% score). That needs to be reflected. 

For that reason we take the weighted average of the category performances. Thus, for A it is (5*10% + 1*50%) / 6 ≈ 17% and for B it is (1*5% + 5*40%) / 6 ≈ 34%. This calculation ensures that A is (correctly) deemed the more accurate pollster.

This weighted average leads to the overall accuracy of each pollster.
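A minimal sketch of this weighted average, using the article’s example for pollsters A and B (the data layout and function name are illustrative assumptions):

```python
def accuracy_score(category_results: dict[str, tuple[float, int]]) -> float:
    """Weighted average of per-category percentage scores,
    weighted by the number of polls published in each category."""
    weighted_sum = sum(score * n_polls for score, n_polls in category_results.values())
    total_polls = sum(n_polls for _, n_polls in category_results.values())
    return weighted_sum / total_polls

# Pollster A: 5 swing state polls scored 10%, 1 national poll scored 50%.
# Pollster B: 1 national poll scored 5%, 5 senate polls scored 40%.
pollster_a = {"pres_swing_state": (0.10, 5), "pres_national": (0.50, 1)}
pollster_b = {"pres_national": (0.05, 1), "senate": (0.40, 5)}
print(round(accuracy_score(pollster_a) * 100))  # ~17
print(round(accuracy_score(pollster_b) * 100))  # ~34
```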

Pollster Prolificacy

A final input to our MVP rankings is the prolificacy of each pollster. The most prolific pollster was AtlasIntel with 124 polls. The least prolific pollsters were a group of 31 pollsters who each published only a single poll in the last 30 days of the 2024 election cycle.

If we only compared the individual accuracy of each of those 31 pollsters based on their single poll, it is highly likely that at least one of them would have hit the jackpot with an accurate poll. This year’s most accurate single-poll pollster was National Public Affairs, who published a single “Trump +2” poll for Georgia, which was only 0.19% off.

However, the group of 31 on average did much worse than AtlasIntel: on average they were 4.14% off, with only 7 out of 31 beating AtlasIntel’s average polling error of 1.38%.

To understand why it would be an error to rank National Public Affairs (1 poll with 0.19% error) above AtlasIntel (124 polls with average 1.38% error), we can look at an intuitive comparison with basketball.

Suppose Steph Curry takes 124 free throws, in competition with 31 amateurs who each get to take 1 free throw. Let’s say Steph sinks 115 of his shots, or 93% of his attempts, and misses just 9, in line with his 2024/2025 free throw average of 93%. Suppose that the amateurs collectively sink 30% of their shots. That means that 9 or so would make their single attempt, while 22 would miss.

If we now were to rank the 32 people (31 amateurs and Steph Curry) by scoring average we would get 9 amateurs with 100% tied for 1st place, followed by Steph Curry in 10th place with 93%, followed by 22 amateurs tied for 11th place with 0%. Publishing a ranking with Steph Curry in 10th place would clearly be a mistake! Similarly, ranking National Public Affairs ahead of AtlasIntel would be a mistake: there were bound to be some pollsters among the 31 with a single poll who would do well, and National Public Affairs happened to be one of the 7 lucky ones this year. 

Please note that we do not have any quality opinion about National Public Affairs: we are just making the argument that their single poll provides insufficient proof that they are a top-rated pollster and that therefore, in order to be ranked higher, they would need to show a larger number of good polls. 

With all of this in mind, we rank pollsters on prolificacy as well, using the same method as for polling results per category: the percentage indicates what share of pollsters were more prolific, and it ranges from 1% (AtlasIntel) to 89% (for the group of 31 who published only 1 poll).

The final 2024 MVP ranking is based on the average of the prolificacy and accuracy of a pollster.
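Putting it together, the final score is just the simple average of the two percentages. A sketch (the function name is ours; the numbers are the article’s example for the top-ranked pollster):

```python
def mvp_score(accuracy: float, prolificacy: float) -> float:
    """Final MVP score: average of the accuracy and prolificacy percentages (lower is better)."""
    return (accuracy + prolificacy) / 2

# Top pollster in the article: prolificacy 0.7%, accuracy 16.1% -> overall 8.4%.
print(round(mvp_score(0.161, 0.007) * 100, 1))  # 8.4
```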

Detailed Results

We present detailed results in 7 groups of up to 20 pollsters, sorted by their final “Score”, which is the average of “Prolificacy” and “Accuracy”. This is followed by the 6 categories, where for each category we show the pollster’s percentage ranking in that category, followed by the number of polls they published in it.

As an example, if we look at the first line: number 1 is AtlasIntel, with 124 polls and an overall score of 8.4%, the average of their prolificacy score (0.7%) and accuracy score (16.1%). They arrived at the accuracy of 16.1% as the weighted average of 10 presidential national popular vote polls with a 10% score, 70 presidential swing state polls with a 15% score, 10 presidential polls in non-swing states with a 14% score, 29 senate polls with a 19% score, and 5 governor polls with a 29% score. They did not publish any district polls.
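As a rough check using the rounded category scores listed above: (10×10% + 70×15% + 10×14% + 29×19% + 5×29%) / 124 ≈ 16.0%, which reproduces the reported 16.1% accuracy up to the rounding of the per-category percentages.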

[Tables: detailed 2024 MVP results for all 136 pollsters, in 7 groups of up to 20, showing each pollster’s Score, Prolificacy, Accuracy, and per-category percentage ranking and poll count]

Analyzing The Results

The Undisputed MVP

AtlasIntel is without a doubt the Most Valuable Pollster for 2024. They produced the largest number of polls (124), spread over 5 categories, with excellent results in four of those categories (10%, 14%, 15%, 19%) and a fine result (29%) in the fifth.

The comparison with other pollsters who produced 60+ polls is striking: Siena/NYT with 72 polls in 18th place, Morning Consult with 90 polls in 29th place, and YouGov with 84 polls in 37th place, showing that producing many polls and doing so accurately is a rare feat.

Successful Focus on Easier to Poll Races

The 3 categories of polls with the lowest average error are presidential swing state polls (2.59% average error), presidential national popular vote polls (2.86% average error) and senate polls (3.94% error).

Several top-10 pollsters focused (almost) entirely on these three categories and did so accurately: InsiderAdvantage (27 of their overall 28 polls) in 2nd place, OnMessage (16 of their overall 17 polls) in 3rd place, Rasmussen (34 of their 39 polls) in 4th place, Trafalgar Group (28 of their overall 30 polls) in 5th place, Patriot Polling (all 17 polls) in 6th place, and Fabrizio/McLaughlin (all 7 polls) in 9th place.

Please note that these pollsters will therefore also have a low overall average error over all their polls.

Successful Versatile Pollsters

Emerson (7th place) published polls in all 6 categories, with 21 of its 53 polls in the more difficult-to-poll race categories. ActiVote (8th place) published polls in 5 categories, with 24 of its 52 polls in the more difficult-to-poll race categories.

Thus, Emerson and ActiVote earned their rankings by being versatile: not by being exceptional in one particular area, but by being good in most categories. Please note that polling in the other categories was on average more difficult, and therefore Emerson’s and ActiVote’s overall average error will be slightly higher than that of pollsters who focused especially on the easier races to poll.

Former Top-15 Pollsters

In January 2024, ABC/538 published their updated pollster ratings. We have listed their top-15 pollsters in the table below. Please note that the ABC/538 pollster ratings were akin to “life-time achievement” ratings in the sense that they were based on polls in several recent election cycles, while our 2024 MVP rankings are based on 2024 general election polls only. 

[Table: the ABC/538 January 2024 top-15 pollsters and their positions in the 2024 MVP rankings]

A relevant question is to what extent such “life-time achievement” ratings are predictive of future performance. From the table we can see that for the 2024 general election the answer is: not very much at all:

  • Only two of the ABC/538 top-15 pollsters retained a position in our top-15: Emerson (going from 8th to 7th) and Suffolk (going from 6th to 12th).
  • Three more of the ABC/538 top-15 pollsters stayed close to the top-15: Siena/NYT dropped from 1st to 18th, The Washington Post dropped from 2nd to 21st and Marquette Law School dropped from 3rd to 19th.
  • The other 10 ABC/538 top-15 pollsters all dropped to 33rd or below, on average landing at 63rd place in our MVP rankings.
  • Of the prolific pollsters in the ABC/538 top-15 (Emerson, Siena/NYT and YouGov), Emerson did well, with an overall error of 2.85% while polling some of the more difficult races. As a result, they moved to 7th place in our MVP rankings. Siena/NYT polled almost a point less accurately (3.80%) and dropped from 1st to 18th place, while YouGov was another point less accurate (4.76%), dropping to 37th place overall in our 2024 MVP rankings.
  • A group of 5 pollsters (Monmouth, Data Orbital, Muhlenberg, University of North Florida and Selzer) hardly participated in 2024’s polling frenzy, with just 12 polls between the five of them. They dropped from an average of 10th place to an average of 87th place.
  • The one poll published by Selzer was also the biggest miss of formerly top-rated pollsters with a 16.2% error, landing Selzer dead last in our 2024 MVP rankings.

We look forward to publishing the 2026 Most Valuable Pollster rankings after the 2026 midterm elections.

