
I am trying to understand how the compiler checks whether the position for a type parameter is covariant or contravariant.

As far as I know, if a type parameter is annotated with +, the covariance annotation, then no method can have an input parameter typed with that class/trait's type parameter.

For example, bar cannot take a parameter of type T:

class Foo[+T] {
  def bar(param: T): Unit =   // error: covariant type T occurs in contravariant position
    println("Hello foo bar")
}

This is because the position of bar()'s parameter is considered negative, which means any type parameter appearing there is in a contravariant position.

I am curious how the Scala compiler figures out whether every location in the class/trait is positive, negative, or neutral. It seems that there are some rules, like flipping the position under certain conditions, but I couldn't understand them clearly.
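For example, if my understanding is correct, something like the following does compile even though T shows up inside a parameter, because T sits in the parameter position of a function-typed parameter and its position is flipped back to positive:

class Foo2[+T] {
  // compiles (as far as I can tell): the position of T flips twice
  def baz(f: T => Unit): Unit =
    println("Hello foo baz")
}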

Also, if possible, I would like to know how these rules are defined. For example, it seems that parameters of methods defined in a class with a covariant annotation, like the bar() method in the Foo class, must sit in a contravariant position. Why?

ruach
  • Could you please clarify your question: 1. "It seems that there exist some rules like flipping its position in some condition" - what makes you think that? 2. "parameters for methods defined in a class that has covariant annotation... should have contravariant class type. Why?" - are you asking about why variance rules were introduced, or something else? By the way, in fact, method parameters are allowed to have either contravariant or non-variant types, not necessarily contravariant. – Ruslan Batdalov Mar 17 '18 at 09:21

1 Answer


I am curious how the Scala compiler figures out whether every location in the class/trait is positive, negative, or neutral. It seems that there are some rules, like flipping the position under certain conditions, but I couldn't understand them clearly.

Like most compilers, the Scala compiler has a parser phase that goes over the source text and turns it into syntax trees. While parsing a type parameter clause, it also records the variance annotation. If we dive into the details, there is a method called Parsers.typeParamClauseOpt which is responsible for parsing the type parameter clause. The part relevant to your question is this:

def typeParam(ms: Modifiers): TypeDef = {
  var mods = ms | Flags.PARAM
  val start = in.offset
  if (owner.isTypeName && isIdent) {
    if (in.name == raw.PLUS) {
      in.nextToken()
      mods |= Flags.COVARIANT
    } else if (in.name == raw.MINUS) {
      in.nextToken()
      mods |= Flags.CONTRAVARIANT
    }
  }
  // ... (rest of the method elided)

The parser looks for the + and - signs in the type parameter signature and builds a TypeDef, which describes the type parameter and records whether it is covariant, contravariant or invariant.
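If you want to convince yourself that this flag really ends up attached to the type parameter's symbol, you can observe it through Scala 2's runtime reflection API. This is just a small sketch using the public reflection API (not the compiler internals), and the names Foo and VarianceFlagDemo are made up:

import scala.reflect.runtime.universe._   // requires the scala-reflect artifact

class Foo[+T]

object VarianceFlagDemo extends App {
  // the first (and only) type parameter of Foo
  val tParam = typeOf[Foo[Any]].typeSymbol.asClass.typeParams.head.asType
  println(tParam.isCovariant)      // true
  println(tParam.isContravariant)  // false
}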

Also, if possible, I would like to know how these rules are defined.

Variance rules are universal; they stem from a branch of mathematics called Category Theory. More specifically, they are derived from covariant and contravariant functors and the composition of the two. If you want to learn more about these rules, that is the path I would take.
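To see concretely why the rule exists, here is a deliberately non-compiling sketch (the Box class and its members are invented for illustration) of what you could do if a covariant type parameter were allowed in a method parameter position:

// None of this compiles in real Scala; it shows what would break if it did.
class Box[+T](initial: T) {
  private var value: T = initial                      // rejected: T in contravariant position
  def put(newValue: T): Unit = { value = newValue }   // rejected for the same reason
  def get: T = value
}

val strings: Box[String] = new Box("hello")
val anys: Box[Any] = strings   // fine: Box is covariant, so Box[String] <: Box[Any]
anys.put(42)                   // would silently write an Int into a Box[String]
val s: String = strings.get    // and fail here at runtime

Rejecting T in those negative positions is exactly what makes allowing Box[String] <: Box[Any] safe.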

Additionally, there is a class called Variance in the Scala compiler which looks like a helper class for the variance rules, if you want to take a deeper look.
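Conceptually, the position-flipping you mention works like this: the checker walks every position in which a type parameter can occur, keeping track of the current variance (covariant, contravariant or invariant), and flips it whenever it descends into a method parameter list (and into the contravariant type arguments of other types). A covariant type parameter is then only legal where the accumulated variance is still covariant. Here is a very rough sketch of that idea in plain Scala; it is not the compiler's actual code, and the tiny type model (TypeParamRef, MethodType) is invented purely for illustration:

object VarianceSketch {
  sealed trait Variance { def flip: Variance }
  case object Covariant     extends Variance { def flip = Contravariant }
  case object Contravariant extends Variance { def flip = Covariant }
  case object Invariant     extends Variance { def flip = Invariant }

  // A toy model of the positions we have to walk.
  sealed trait SimpleType
  case class TypeParamRef(name: String)                               extends SimpleType
  case class MethodType(params: List[SimpleType], result: SimpleType) extends SimpleType

  // Does the covariant type parameter `name` occur only in covariant positions?
  // The expected variance flips every time we step into a parameter list.
  def checkCovariant(name: String, tpe: SimpleType, pos: Variance = Covariant): Boolean =
    tpe match {
      case TypeParamRef(`name`) => pos == Covariant
      case TypeParamRef(_)      => true
      case MethodType(params, result) =>
        params.forall(p => checkCovariant(name, p, pos.flip)) &&
          checkCovariant(name, result, pos)
    }
}

In this sketch, a method like def bar(param: T): T is rejected because T occurs at a flipped (contravariant) position, while def baz(f: T => Unit): Unit is accepted because the parameter of the function-typed parameter flips the position twice, back to covariant.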

Yuval Itzchakov
    I really appreciate your explanation and edits! It helps me a lot to understand the internals, and I learned a lot from your answer. – ruach Mar 17 '18 at 10:18