I'm imagining your scatterplot has average score in a subtopic on the $x$ axis and number of questions on the $y$ axis, with two different colors for the two count variables (e.g., black $=$ overall.count, red $=$ sub.count). If you want to visualize the bivariate relationships between score and your two question-count variables, try Poisson regression, which models a count outcome directly. Here's an example in R:
library(ggplot2) #loads plotting package; use install.packages('ggplot2') if you haven't yet.
dataset=data.frame(score=rnorm(100,60,15), #some random continuous data: x̄ = 60, SD = 15
overall.count=rpois(100,lambda=35),sub.count=rpois(100,lambda=10)) #random counts: x̄ = 35/10
summary(glm(overall.count~score,family='poisson',data=dataset)) #1-Overall Poisson regression
summary(glm(sub.count~score,family='poisson',data=dataset)) #2-Subtopic Poisson regression
ggplot(dataset,aes(x=score))+ #calls scatterplot
geom_point(aes(x=score,y=overall.count))+geom_point(aes(x=score,y=sub.count),colour='red')+
geom_smooth(aes(y=overall.count),formula=y~x,method='glm',
  method.args=list(family='poisson'),colour='black')+ #plots regression #1
geom_smooth(aes(y=sub.count),formula=y~x,method='glm',
  method.args=list(family='poisson'),colour='red')+ #plots regression #2
scale_y_continuous('number of questions') #relabels y axis
Here's what that produces:

[scatterplot: black overall.count and red sub.count points over score, with flat regression lines and grey confidence bands]
The regression lines are flat, which is no surprise, as these are random data. The grey regions are confidence bands. Altogether, there's little doubt that these relationships are very weak.
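If you want a number to go with those flat lines, the slope from a Poisson regression exponentiates into a rate ratio: the multiplicative change in expected count per one-point increase in score. Here's a minimal sketch reusing regression #1 from above (m1 is just a name I'm introducing for it):
m1=glm(overall.count~score,family='poisson',data=dataset) #refits regression #1, stored this time
exp(cbind(RR=coef(m1),confint(m1))) #rate ratios with 95% profile CIs; RR near 1 = weak relationship
Compare: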
ggplot(data.frame(lapply(dataset,sort)),aes(x=score))+ #same plot, but with every column sorted
geom_point(aes(x=score,y=overall.count))+geom_point(aes(x=score,y=sub.count),colour='red')+
geom_smooth(aes(y=overall.count),formula=y~x,method='glm',
  method.args=list(family='poisson'),colour='black')+ #plots regression #1
geom_smooth(aes(y=sub.count),formula=y~x,method='glm',
  method.args=list(family='poisson'),colour='red')+ #plots regression #2
scale_y_continuous('number of questions') #relabels y axis
Here's what that produces:

[the same scatterplot after sorting: black and red points now rise with score, with clearly increasing regression curves]
Clearly more related, no?
This time there's a positive relationship by construction: I sorted the random data from low to high. You definitely don't want to do that with real data; I only did it here for the sake of demonstration. In this case you can also see a little curve to the relationship. If you want a numeric representation of that curvature, fit a quadratic (or even higher-degree polynomial) regression model. Here's code for a quadratic Poisson regression:
summary(glm(sub.count~scale(score,scale=FALSE)+ #centering reduces nonessential multicollinearity
I(scale(score,scale=FALSE)^2),family='poisson',data=dataset)) #adds the squared term to the model
Once again, I don't know if centering is really helpful here. It doesn't change the model's predictions, but it does change the linear coefficient, its $SE$, and its significance, and I'm not sure those changes are an improvement. Hopefully it won't matter for you, but I'll be happy to edit this if anyone knows whether to center here. Also, if the residuals are heteroscedastic (they look like they might be, judging by a plot I haven't included), you may want robust standard errors (see "When to use robust standard errors in Poisson regression?"), or you may want negative binomial regression instead of Poisson.
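In case it's useful, here's a minimal sketch of that residual check and both alternatives, plus a quick verification of the centering claim. The model names are just labels I'm introducing; the sandwich and lmtest packages supply the robust standard errors, and MASS supplies negative binomial regression:
m.c=glm(sub.count~scale(score,scale=FALSE)+I(scale(score,scale=FALSE)^2),
family='poisson',data=dataset) #the centered quadratic model from above, stored this time
m.u=glm(sub.count~score+I(score^2),family='poisson',data=dataset) #uncentered version
all.equal(fitted(m.c),fitted(m.u)) #TRUE: centering leaves the predictions unchanged
plot(fitted(m.c),residuals(m.c,type='pearson')) #rough visual check for heteroscedasticity
library(sandwich);library(lmtest) #install.packages() these if you haven't yet
coeftest(m.c,vcov=vcovHC(m.c,type='HC0')) #same estimates with robust (sandwich) SEs
library(MASS)
summary(glm.nb(sub.count~scale(score,scale=FALSE)+I(scale(score,scale=FALSE)^2),
data=dataset)) #negative binomial version, which adds a dispersion parameter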