What is Scouter Performance Rating (SPR)?

You might be wondering, “What is this 'SPR' statistic? And how can you measure a scout's accuracy (or amount of inaccuracy) if you don't have specific data points to compare against?”

Glad you asked! SPR is a metric intended for Scouting Leads and mentors to use to help monitor how well the scouts on the team are doing - who is generally being accurate, and who might be having trouble and/or could use some additional attention or assistance.

Background: OPR

SPR is based on another metric often used in FRC analytics - OPR, or Offensive Power Rating. TBA has an excellent article on their site.

You should definitely go and at least skim that article! The TL;DR of OPR is basically,

  • Let's imagine every robot at a competition has some unknown “value” - 4, or 15.5, or 9.3, or whatnot - which roughly estimates how many points they add to their alliance's score in any match
  • Three different robots might have A, B, and C as their “values” respectively. So perhaps in a match where one alliance is those three robots, the total score of (say) 25 was the sum of the three values A + B + C, or A+B+C=25
    • Another match might have robots D, E, and F, and that alliance might have scored 35, D+E+F=35
  • Later on in the competition, maybe robots A, D, and E are in an alliance together… and the score was 30, A+D+E=30
  • If it's a district comp and there are 30 teams and 60 matches played… that's 30 unknowns and (by the end, with two alliances per match contributing one equation each) 120 different equations
  • You can use linear algebra mathemancy to 'solve' for all 30 unknowns using a matrix representing the 120 equations (see the sketch just below this list)
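
Here's a minimal sketch of that solve in Python, with entirely hypothetical teams and scores (numpy's least-squares routine stands in for whichever solver a real implementation uses):

import numpy as np

# Hypothetical example: 4 robots (A, B, C, D) and 5 alliance scores.
# Each row of M marks which three robots played together on an alliance;
# the matching entry of s is that alliance's total score.
teams = ["A", "B", "C", "D"]
M = np.array([
    [1, 1, 1, 0],  # A + B + C = 25
    [0, 1, 1, 1],  # B + C + D = 35
    [1, 0, 1, 1],  # A + C + D = 30
    [1, 1, 0, 1],  # A + B + D = 28
    [0, 1, 1, 1],  # B + C + D = 33 (alliances repeat as the event goes on)
])
s = np.array([25, 35, 30, 28, 33])

# By mid-competition the system is overdetermined (more equations than
# unknowns), so solve it in the least-squares sense.
opr, *_ = np.linalg.lstsq(M, s, rcond=None)
for team, value in zip(teams, opr):
    print(f"{team}: {value:.2f}")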

Keeping in mind that (a) robots don't actually score a fixed number of points every match, and (b) by mid-comp the data is “overdetermined”, i.e. there are more equations than unknowns… OPR is still a handy [if very fuzzy!] comparative metric. The Blue Alliance shows these metrics in their 'Insights' tab - for example, see the OPR ratings at 2024's Newton field at Champs.

Using linear algebra to measure Scouter accuracy

The same process can be applied to scouts, their data & metrics, and the official results recorded by the field & FMS at competitions (and reported up to HQ and then back down through TBA to Scoutradioz).

Basically,

  • Let's imagine every scout in your scouting brigade has some unknown “error value” - 0.5, or 2, or 9, or whatnot - which roughly estimates how many points of error their scouting introduces in any match
  • Three different scouts might have U, V, and W as their “values” respectively. So perhaps in a match where those three scouts are watching one alliance, the FRC HQ value for the total score might have been 25; but the per-robot scores calculated from the scouts' data might have come out to 32… meaning the amount of error was 7 (|32-25|). So the three scouts together contributed 7 points of error via U + V + W, or U+V+W=7
    • Another match might have scouters X, Y, and Z, and in that match the difference between the total points calculated from the scouts' data vs. what the alliance officially got via the data from HQ was 12, X+Y+Z=12
  • Later on in the competition, maybe scouts U, X, and Y are scouting the same alliance… and the difference was 5, U+X+Y=5
  • You'll end up with one unknown value per scout, with the same number of simultaneous equations as for OPR (e.g., if 60 matches then 120 equations)
  • The same linear algebra mathemancy can 'solve' for each scout's “error value” (see the sketch just below this list)
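
Again a minimal sketch in Python, with hypothetical scouts and per-alliance error values (in practice, each error on the right-hand side would come from comparing scouted totals against official scores, as described above):

import numpy as np

# Hypothetical example: 6 scouts. Each row marks the three scouts who
# covered one alliance together; the matching error value is
# |scouted total - official alliance score| for that alliance.
scouts = ["U", "V", "W", "X", "Y", "Z"]
alliances = [
    (["U", "V", "W"], 7),   # U + V + W = 7
    (["X", "Y", "Z"], 12),  # X + Y + Z = 12
    (["U", "X", "Y"], 5),   # U + X + Y = 5
    (["V", "W", "Z"], 9),
    (["U", "W", "Y"], 6),
    (["V", "X", "Z"], 10),
    (["U", "V", "Z"], 8),
]

M = np.zeros((len(alliances), len(scouts)))
e = np.zeros(len(alliances))
for i, (trio, error) in enumerate(alliances):
    for name in trio:
        M[i, scouts.index(name)] = 1
    e[i] = error

# Solve the overdetermined system for each scout's "error value".
spr, *_ = np.linalg.lstsq(M, e, rcond=None)
for scout, value in sorted(zip(scouts, spr), key=lambda pair: pair[1]):
    print(f"{scout}: {value:.2f}")  # lower is better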

One significant difference between OPR and SPR: while higher OPR is better, when you're measuring the amount of error introduced, lower is better. 🙂

Just as with OPR, this is very fuzzy! It's best interpreted as a relative comparison metric. Generally we tell folks to be happy if they're in the top 50% or top quarter of the list or so.

Another consideration: not all scouts scout the same number of matches. Some scouts who are in the rotation the whole event will have many matches, while others who are only there for a short while (or who maybe only jumped in briefly to help) might have far fewer matches in the dataset. A good rule of thumb is to be wary of the SPR of any scout with (say) fewer than 10 matches scouted - since there are fewer equations involving that scout's “error value”, the algorithm may “overfit” their data and make it excessively high (or even excessively low - even very negative!)
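
One simple way to apply that rule of thumb, sketched with a hypothetical helper and made-up numbers:

def split_by_sample_size(spr_by_scout, matches_by_scout, min_matches=10):
    # Partition scouts into those with enough scouted matches to trust
    # their SPR, and those whose SPR is likely overfit noise.
    trusted, low_sample = {}, {}
    for scout, spr in spr_by_scout.items():
        if matches_by_scout.get(scout, 0) >= min_matches:
            trusted[scout] = spr
        else:
            low_sample[scout] = spr
    return trusted, low_sample

# Made-up numbers: Z only scouted 4 matches, and their strongly negative
# SPR is exactly the kind of overfit outlier to treat with caution.
trusted, low_sample = split_by_sample_size(
    {"U": 1.8, "V": 2.5, "W": 2.7, "X": 4.1, "Y": 3.0, "Z": -6.2},
    {"U": 24, "V": 22, "W": 18, "X": 25, "Y": 20, "Z": 4},
)
print("Trusted:", trusted)
print("Too few matches:", low_sample)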

How to make the most of SPR with Scoutradioz

As a starting point, Scoutradioz uses a few built-in guidelines for creating the “matrix” of per-scouter unknowns to solve:

  1. Your match form schema should have a derived metric called contributedPoints (this is also used for match predictions)
  2. For each alliance in each scouted match, the system will put the three scouters' unknown “error values” on one side of the equation and, on the other, the difference between (a) the sum of the contributedPoints from each robot's scouted data and (b) the official total points of the alliance minus the points awarded due to the other alliance's fouls (see the sketch just after this list)
    • The other alliance giving points to the alliance via fouls had nothing to do with the behavior of the robots the scouts were scouting, so that gets subtracted from the official FRC alliance score
  3. Solving the resulting matrix yields the SPR of each scout!
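
Here's a minimal sketch of how the right-hand side of one of those equations comes together. The alliance_error helper is illustrative (not Scoutradioz's actual internals), though contributedPoints matches the derived metric described above:

def alliance_error(scouted_robots, official_score, foul_points):
    # (a) sum each robot's derived contributedPoints from the scouting data
    scouted_total = sum(robot["contributedPoints"] for robot in scouted_robots)
    # (b) official alliance score, minus the points awarded due to the
    #     other alliance's fouls
    adjusted_official = official_score - foul_points
    return abs(scouted_total - adjusted_official)

# Scouts credited the three robots with 11 + 9 + 12 = 32 points; the
# alliance officially scored 30, of which 5 came from opponent fouls.
print(alliance_error(
    [{"contributedPoints": 11}, {"contributedPoints": 9}, {"contributedPoints": 12}],
    official_score=30,
    foul_points=5,
))  # |32 - 25| = 7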

For many FRC games, this is very straightforward and works 'out of the box'!

BUT: What do you do when a game has additional scoring elements which are not directly connected to an individual robot's actions? For example, in 2023 an alliance scored extra points at the end for how many “links” the alliance had created on the 'grid'. It was nearly impossible to track which robot completed a “link” in the heat of the match - and besides, it wasn't clear whether that robot had done the whole link, or was just finishing a link, etc.

Customizing SPR

As of 2025 we've added the ability for Team Admins to customize how SPR is calculated, to help get around challenges like “links” in 2023 which were difficult or impossible to tie to a single robot.

When adding or updating the Match Form definition, a Team Admin can now also customize (if desired) how the system will calculate SPR for their scouts. If you expand the 'twisty' below the Match Form JSON editor, you'll see the default setting, which looks like this:

{
  "points_per_robot_metric": "contributedPoints",
  "subtract_points_from_FRC": {
    "foulPoints": 1
  }
}

This means,

  1. The system will calculate the total points from the scouts' data using the contributedPoints metric, and
  2. It will subtract from the official alliance score the value of the FRC schema's foulPoints, multiplied by 1
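
For example (hypothetical numbers): if an alliance's official score was 40, of which 6 points were foulPoints awarded due to the other alliance's fouls, the comparison target is 40 - (6 × 1) = 34. If the three scouts' contributedPoints summed to 31, that alliance contributes |31 - 34| = 3 points of error to the matrix.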

Let's say that in 2023 there was a data element in the FRC schema, linkCount, and each link was worth 5 points.

Teams might still have included a “workaround” for link scoring in contributedPoints (e.g., asking “How many links did the alliance get?” and dividing that by 3 - i.e., each robot did 1/3rd of the work for the links) - but now they could also create a second metric, e.g. sprPoints, specifically for SPR, capturing only the points from what the robots did directly (i.e., not links). In that case the SPR calculation could be customized to:

{
  "points_per_robot_metric": "sprPoints",
  "subtract_points_from_FRC": {
    "foulPoints": 1,
    "linkCount": 5
  }
}

Now…

  1. The system will calculate the total points from the scouts' data using the sprPoints metric (and not contributedPoints), and
  2. It will subtract from the official alliance score (a) the value of the FRC schema's foulPoints multiplied by 1, and (b) the value of linkCount multiplied by 5
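
For example (again, hypothetical numbers): if an alliance's official score was 62, including 6 foulPoints and 4 links (linkCount = 4), the comparison target becomes 62 - (6 × 1) - (4 × 5) = 36, and the scouts' summed sprPoints are measured against that 36.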

This will yield a better overall SPR calculation, as the scouts' data will be measured purely against per-robot action scores.
