As already signalled, you can't do much without extra information or assumptions.
Here, without any strong claims, is a method from Mosteller, F. and Tukey, J.W. 1977. Data Analysis and Regression. Reading, MA: Addison-Wesley.
The idea is that your data on frequencies of grades define cumulative probabilities -- that is the only easy point -- and that you can relate those cumulative probabilities to some reference distribution, then average over each interval to get a mean score as a numerical equivalent for each grade.
If we choose a logistic distribution as reference, the calculation is easy, as the equations feature only simple expressions in exponentials or logarithms. So, for example, grade 2 in year 1 covers the interval from cumulative probability 0.28 (the 28% in grade 1) to 0.40 (adding the 12% in grade 2).
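As a minimal sketch (in Python, purely for illustration; the variable names are mine, not Mosteller and Tukey's), the interval endpoints are just running sums of the percentages:

```python
# Illustrative sketch: grade percentages (year 1, from the table below)
# converted to cumulative probability intervals.
percents = [28, 12, 10, 15, 15, 10, 10]  # grades 1..7

intervals = []
cum = 0.0
for pct in percents:
    lo = cum
    cum += pct / 100.0
    intervals.append((lo, cum))

# grade 2 spans cumulative probability 0.28 to 0.40
print(intervals[1])
```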
Here are some token results:
  +------------------------------------------+
  | grade   percent   year   score1   score2 |
  |------------------------------------------|
  |     1        28      1   -2.118        . |
  |     2        12      1   -0.667        . |
  |     3        10      1   -0.201        . |
  |     4        15      1    0.305        . |
  |     5        15      1    0.980        . |
  |     6        10      1    1.753        . |
  |     7        10      1    3.251        . |
  |------------------------------------------|
  |     1        25      2        .   -2.249 |
  |     2        25      2        .   -0.523 |
  |     3        10      2        .    0.201 |
  |     4        20      2        .    0.863 |
  |     5        20      2        .    2.502 |
  +------------------------------------------+
The reservations are manifold, and start with the following.
There is no handle here on whether the underlying distributions are essentially the same or different; surveying people in both years might help to provide information on that.
Why logistic? Why not normal? Or any other distribution? The method would be a poor choice if attitudes were often polarised and followed a U-shaped distribution -- or if they followed an asymmetric distribution. But that should be evident from an exploratory analysis.
Pulling numeric scores out of a reference distribution like rabbits out of a hat does not make them more suitable as responses for (say) ordered logit modelling. But if these score variables were predictors, then results using them could be compared with the results of treating the scores as factor variables (i.e. as a set of indicator variables).
So, the idea is to let the pattern of frequencies indicate the scores: for example, if an extreme grade is rare, that implies a rather high or low numeric score.
If in doubt, try different procedures.
More mathematical detail if needed: We are relating cumulative probabilities to a logistic distribution with location parameter $0$ and scale parameter $1$. So for variable $x$ its density is
$$ \exp(-x)\ /\ [ 1 + \exp(-x)]^2$$ and its quantile function for cumulative probability $p$ is $\text{logit}\ p$ or $\ln\ [p/(1-p)]$. We want the mean between cumulative probabilities $A$ and $B$, or
$${1 \over B - A} \int_A^B \ln\ [p/(1 - p)]\ dp $$
which falls out easily (Mosteller and Tukey, 1977, p.245) as
$${1 \over B - A} \{ [B \ln B + (1 - B) \ln\ (1-B)]\ - \ [A \ln A + (1 - A) \ln\ (1 -A)]\}$$
which is quite programmable, a detail being to ensure that $0 \ln 0$ is returned as $0$, not as missing or undefined. The calculus is not spelled out by Mosteller and Tukey, but it is immediate once you notice that $p \ln p + (1 - p) \ln\ (1 - p)$ is an antiderivative of $\ln\ [p/(1-p)]$, as differentiating the former confirms.
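A minimal sketch of that formula in Python (a language choice of mine, not anything in Mosteller and Tukey; the function names are hypothetical), which reproduces the year-1 scores in the table above:

```python
from math import log

def xlogx(p):
    """p * ln(p), with the convention that 0 * ln(0) = 0."""
    return p * log(p) if p > 0 else 0.0

def logistic_interval_mean(A, B):
    """Mean of the standard logistic quantile ln[p/(1-p)] over the
    cumulative probability interval (A, B) (Mosteller and Tukey, 1977, p. 245)."""
    return (xlogx(B) + xlogx(1 - B) - xlogx(A) - xlogx(1 - A)) / (B - A)

# Year 1 grade percentages, as in the table above
percents = [28, 12, 10, 15, 15, 10, 10]
A = cum = 0.0
for grade, pct in enumerate(percents, start=1):
    cum += pct / 100.0
    print(grade, round(logistic_interval_mean(A, cum), 3))
    A = cum
```

The guard in `xlogx` handles the endpoints $A = 0$ and $B = 1$, where the convention $0 \ln 0 = 0$ is needed.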