Testing Database 2021

On this page, you can find the same Football Manager 2021 Editor Database and Save that we use for our automatic testing.

We openly share this database because we believe in our system, and because we want to empower tactic creators by giving them a voice. Transparency is something we take very seriously. Tactic creators may download, use, and modify the save file, the database, and the FMRTE filters.

This database is a labour of love from various creators in the Football Manager community. The philosophy behind it is simple: create a level playing field for all tactics so that the result reflects the potential of each tactic. This database has been in use for the past three years.

These files are available for personal use only; they cannot be used commercially or on any website that displays advertisements.

Description

How Tactic Testing works

Our goals in building the testing system were to make it 100% unbiased and to reduce randomness to the minimum physically possible.

How was this achieved?

Firstly, we built an entirely new database from scratch, basically an alternative reality with quite interesting features:

  • Players can't get injured
  • Players don't have mood swings
  • All staff have maxed attributes
  • Thousands of further changes...

You can probably see where this is going. To minimize randomness, we created a new universe in which all the factors that can significantly affect a team's performance are simply removed. Why? So that the only factor impacting a team's performance is the tactic!

As judging a tactic based on the results in just one national league would be naive, we set up a super league with the best teams in the world, then added two teams:

  • Subtop - i.e. top 40% of the league (translates to top 8 teams in a 20-team league)
  • Underdog - i.e. bottom 40% of the league (translates to worst 8 teams in a 20-team league)

We use the editor to estimate these numbers and add players to each team to match its intended quality level, making sure every role is covered so that every tactic we test has access to the right type of players for that tactic.

Everything is set up! How are the tests run, and how is fairness guaranteed for each tactic?

  • All tactics are tested plug & play
  • Each test gets the same match schedule

Our focus is to maximise the accuracy of the test relative to the time it takes. We found a sweet spot where each tactic is tested as if it had been used for more than 4 years in a standard championship. Real football managers can only dream of this! How was it achieved?

We automated the testing process to make sure there is no human bias, and that the assistant manager cannot interfere in-game.

How you should interpret Tactic Testing results

Now that you know how the testing works, you should understand the added value this brings to your experience. We have shown how bias is removed and randomness minimised to an almost surreal degree. Still, the testing will never be 100% accurate even under these conditions; it certainly lets us see which tactics are good, though it would be a long shot to crown "the best". That's why we use a star-based ranking system.

It is fairly certain that a 5-star tactic is better than a 4-star tactic, but not necessarily better than a 4.5-star tactic. In fact, a team using a 3-star tactic might perform better than with "the best" tactic. Why?

Because ultimately, it's your job as a football manager to pick the right one! You should analyse your team and use the tactic you believe fits your squad!

How ratings are calculated

The rating system is based on a simple algorithm that assigns a tactic a score from 0 to 10. This is translated into stars, where each point on the scale equals half a star.

75% of the star rating is based on the total points the tactic scores in the test, while the other 25% adjusts it based on the overall goal difference.

Take the three main values you see when opening the Tactics page:

  • Number of Matches (NoM)
  • Total Points (TP)
  • Goal Difference (GD)

Assign the weight values:

  • TP - 7.5
  • GD - 2.5

Assuming the worst possible tactic GD, say -100 for simplicity, we create a MIN value:

  • MIN = 250 (the absolute value of -100 multiplied by the GD weight of 2.5)

Assuming the best possible tactic GD, say 100 for simplicity, we can now compute:

TOT = max(TP) × 7.5 + max(GD) × 2.5 + MIN

Finally, divide by a scaling factor x (e.g. 100) to get ratings out of 100, later mapped to stars:

  • Divide TOT by 100, then floor the result (i.e. take the largest integer smaller than or equal to it)
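The calculation above can be sketched in code. This is a minimal illustration, not the actual implementation: the weights, the MIN offset, and the floor step come from the description, while the function name and the example inputs (a season worth a maximum of 114 points and a GD range of ±100) are assumptions.

```python
import math

TP_WEIGHT = 7.5    # weight for Total Points
GD_WEIGHT = 2.5    # weight for Goal Difference
WORST_GD = -100    # assumed worst-case goal difference

def rating_out_of_100(tp, gd, max_tp, max_gd):
    """Translate a tactic's test results into a 0-100 rating."""
    # MIN offset: shifts the GD term so it can never go negative.
    min_offset = abs(WORST_GD) * GD_WEIGHT            # 250 in the example above

    # TOT: the maximum achievable raw score, used for normalisation.
    tot = max_tp * TP_WEIGHT + max_gd * GD_WEIGHT + min_offset

    raw = tp * TP_WEIGHT + gd * GD_WEIGHT + min_offset
    # Equivalent to dividing by (TOT / 100), then flooring.
    return math.floor(raw * 100 / tot)

# A perfect run (maximum points, best-case GD) scores the full 100:
print(rating_out_of_100(tp=114, gd=100, max_tp=114, max_gd=100))  # → 100
```

With these assumed bounds, the worst possible run (0 points, -100 GD) normalises to a rating of 0, and a mid-table run lands in between.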

How does a tactic earn max rating?

Each tactic gets assigned an overall rating out of 100.

  • Scoring perfect points - winning every game - gives 75 points
  • Scoring perfect goal difference - scoring at least 3 goals a game and not conceding more than 1 on average across the whole league - gives 25 points

As no tactic has ever scored 100 in the history of our testing (3+ years), we empirically define the minimum rating needed to earn 5 stars.

  • Assuming a rating above 75 is needed for 5 stars
  • We assign one point on the 0 -> 10 scale for each rating band surpassed
  • One point = half a star
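Under the stated assumption that a rating above 75 earns 5 stars, the band-to-star mapping might look like the sketch below. The actual band thresholds are defined empirically and are not published here; the cut-offs in the list are purely illustrative.

```python
# Hypothetical rating bands; the real thresholds are empirical and unpublished.
# Surpassing each band earns one point on the 0 -> 10 scale.
BANDS = [8, 16, 24, 32, 40, 48, 56, 63, 70, 75]  # assumed cut-offs

def stars(rating):
    """Map a 0-100 rating to a 0-5 star value in half-star steps."""
    points = sum(1 for band in BANDS if rating > band)  # bands surpassed
    return points / 2                                   # one point = half a star

print(stars(80))  # → 5.0: a rating above 75 surpasses all ten assumed bands
```

Note that with these assumed bands a rating of exactly 75 yields 4.5 stars, consistent with the rule that a rating *above* 75 is needed for 5 stars.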

This should explain the current results you see.

As the rating system is the same for subtop and underdog teams, you will see many subtop tactics scoring 5 stars (it turns out you are pretty good at making tactics for strong teams). Scoring 5 stars with an underdog, however, is a far harder achievement.

Put simply, when an underdog team plays in such a competitive environment with randomness removed, it is extremely unlikely to produce ratings high enough for us to assign 5 stars to an underdog tactic!

The stars reflect the actual result the tactic brings in the test.

Take as an example Leicester's championship back in 2016. If we had run 5000 simulations of the Premier League that year, Leicester would have won only once. That's 0.02% - and that includes randomness; if we take randomness out of the equation, as we strive to do, the odds are even smaller!