Welcome to Software Development on Codidact!
What does "parameterised type in the positive / negative position" mean in the context of invariant functors?
From PureScript's `Data.Functor.Invariant` documentation (emphasis mine):

> A type of functor that can be used to adapt the type of a wrapped function where the parameterised type occurs in both the positive and negative position, for example, `F (a -> a)`.
What do the terms "positive" and "negative position" refer to above?
(And also wondering why `F (a -> a)` requires an entirely new type of functor, but that should probably be another question.)
I tried to figure this out both by learning more about invariant functors and by looking up the terms "positive" and "negative" in connection with functional programming, but I haven't been able to connect the dots yet. Is there a basic resource that I've missed? Or, at least, where should one start to understand these concepts? (I'm not even sure what the right topic is; see my research attempts below.)
From the search results for "invariant functor positive negative", "positively invariant vs negatively invariant polarity", "type theory positive negative polarity invariant", and "what are invariant functors":

- [nLab] polarity in type theory — My intuition tells me this is the answer, but I have too little math/logic background to understand most of it (and to see how the parts that I do get connect to my question).
- [blog] Polarity in Type Theory — Same.
- [stackoverflow] Example of Invariant Functor? — The answer does mention the words "positive" and "negative", but it doesn't explain them, and neither do the linked resources. It does mention "HOAS", though.
- [google search] higher order abstract syntax HOAS — but it doesn't look like what I'm looking for.
- [hackage] `layers` package documentation > invariant functors — Even if it is relevant, I don't get it...
- [ploeh blog] Invariant functors and Functors as invariant functors — No mention of "positive" or "negative", but these posts did lead me to Mark Seemann's article series Functors, applicatives, and friends, which looks compelling and comprehensive, but so far I haven't found anything relevant there either (or I missed it).
- [nLab] invariant — Um, way above my head...
1 answer
"Invariant" isn't really the right word to search, though that's clearly not obvious. The idea is that we have covariant functors, where if we have `a -> b` we can make `f a -> f b`, and contravariant functors, where if we have `a -> b` we can make `f b -> f a`. You can see that the "functor" `f a = a -> a` is neither covariant nor contravariant. The term "invariant" (which isn't really used this way by the math community) is meant to indicate that it's "neither covariant nor contravariant". (This notion of "invariant functors" also doesn't really come up that often in my experience. Mathematicians would be more likely to talk about functors out of the core of the category, which would give you something like `imap` but only when the two provided functions are mutually inverse.) Another term, not widely used by the math community, is "free variant" for a functor that is both covariant and contravariant. The name comes from the fact that a functor is usually free variant because it doesn't use the parameter at all. Chapter 2 of Hagino's A Categorical Programming Language gives a technical rundown of formulating these ideas more generally and systematically.
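To make the contrast concrete, here's a minimal Haskell sketch (the question is about PureScript, but the shape is the same). The class and method names here are chosen locally for illustration, mirroring the usual `Functor`/`Contravariant`/`Invariant` interfaces:

```haskell
-- Covariant: lift a -> b to f a -> f b.
class MyFunctor f where
  myfmap :: (a -> b) -> f a -> f b

-- Contravariant: lift a -> b to f b -> f a (direction flipped).
class MyContravariant f where
  mycontramap :: (a -> b) -> f b -> f a

-- Invariant: needs functions in BOTH directions.
class MyInvariant f where
  myimap :: (a -> b) -> (b -> a) -> f a -> f b

-- The wrapped endofunction type from the question: F (a -> a).
newtype Endo a = Endo { runEndo :: a -> a }

-- 'a' occurs as both the domain (negative) and the codomain (positive)
-- of the wrapped function, so Endo is neither covariant nor
-- contravariant. Given maps both ways, though, we can adapt it:
instance MyInvariant Endo where
  myimap to from (Endo f) = Endo (to . f . from)
```

For example, adapting `Endo Bool` to `Endo Int` with `fromEnum`/`toEnum`: `runEndo (myimap fromEnum toEnum (Endo not)) 0` evaluates to `1`.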
So one way of defining "positive/negative position" is: if we consider the functor that maps its argument to that position, is it covariant or contravariant? For example, for `a -> b`, as a functor of `a` (with `b` fixed) it's contravariant, and as a functor of `b` (with `a` fixed) it's covariant. In practice, negative positions almost always arise, directly or indirectly, from being the domain of a function type.
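This definition can be sketched in Haskell by fixing one type argument of `a -> b` at a time, using `Contravariant` from `base`'s `Data.Functor.Contravariant`; the newtype names `Dom` and `Cod` are made up here for illustration:

```haskell
import Data.Functor.Contravariant (Contravariant (contramap))

-- Fix the codomain: 'Dom r a' is 'a -> r' viewed as a functor of its
-- domain 'a'. The parameter sits in negative position: contravariant.
newtype Dom r a = Dom { runDom :: a -> r }

instance Contravariant (Dom r) where
  contramap f (Dom g) = Dom (g . f)

-- Fix the domain: 'Cod a b' is 'a -> b' viewed as a functor of its
-- codomain 'b'. The parameter sits in positive position: covariant.
newtype Cod a b = Cod { runCod :: a -> b }

instance Functor (Cod a) where
  fmap f (Cod g) = Cod (f . g)
```

For instance, `runDom (contramap length (Dom even)) "abc"` pre-processes the input (contravariant), while `runCod (fmap show (Cod (+ 1))) 41` post-processes the output (covariant).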
One of the reasons for the "positive"/"negative" terminology is that "a negative of a negative is positive", which is just to say that if you compose two contravariant functors the result is covariant, while if you compose two covariant functors the result is also covariant. Of course, composing a covariant functor with a contravariant functor (in either order) is contravariant. This is even clearer if you eliminate the notion of a contravariant functor and instead replace it with (covariant) functors from the opposite category. In symbols, a contravariant functor from A to B is a covariant functor A^{op} → B, and we have (A^{op})^{op} = A. We also have, for a functor F : A → B, that it can be viewed as a functor F^{op} : A^{op} → B^{op}.
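The composition rule can be checked directly in Haskell. `Predicate` (a wrapper around `a -> Bool`) is a stock contravariant functor from `base`; the `ComposeCC` newtype is a local name for illustration:

```haskell
import Data.Functor.Contravariant (Contravariant (contramap), Predicate (..))

-- The composition of two contravariant functors.
newtype ComposeCC f g a = ComposeCC { getCC :: f (g a) }

-- Each contramap flips the variance once; two flips land back on
-- covariant, so we can give a (covariant) Functor instance.
instance (Contravariant f, Contravariant g) => Functor (ComposeCC f g) where
  fmap f (ComposeCC x) = ComposeCC (contramap (contramap f) x)
```

Here `ComposeCC Predicate Predicate a` wraps `(a -> Bool) -> Bool`, which is indeed covariant in `a` even though `Predicate` on its own is contravariant.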
One intuition is that "positive" positions are positions where a value of the type "provides" a value, while "negative" positions are positions where a value of the type "accepts" or "receives" a value. This is especially clear for the function type: a function of type `A -> B` accepts a value of type `A` and provides a value of type `B`. Less obvious is a type like `(A -> B) -> B`, where, if the function wants to use its (function) argument, it needs to provide a value of type `A`. "The negative of a negative is positive" makes sense in this context, because if I accept something that accepts a type `A`, then I need to provide a type `A` if I want to use it.
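This double-negation reading is exactly why the continuation type is covariant. A minimal sketch (the `Cont` name mirrors the well-known type from `transformers`, but it's defined locally here):

```haskell
-- In '(a -> r) -> r', 'a' sits in the domain of a domain: a negative of
-- a negative, hence positive overall, so a Functor instance exists.
newtype Cont r a = Cont { runCont :: (a -> r) -> r }

instance Functor (Cont r) where
  -- To use the new continuation (b -> r) we must *provide* it a 'b',
  -- which we get by converting the 'a' we receive, matching the
  -- "accepts/provides" reading above.
  fmap f (Cont c) = Cont (\k -> c (k . f))
```

For example, `runCont (fmap (+ 1) (Cont (\k -> k 41))) id` evaluates to `42`.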
Just for completeness, there's also the notion of a strictly positive type. This is usually applied to recursive types, where the idea is that we can define a recursive type R as a fixed point of a type constructor, R = μx. F x (where μ is the traditional binding form for computing the least fixed point). R is strictly positive if not only is F a covariant functor, but the expression defining F doesn't have positive occurrences of the argument that arise from being a negative of a negative.
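For illustration, here's a Haskell type that is covariant but not strictly positive (the recursive occurrence sits under a double negation), next to an ordinary strictly positive one. GHC accepts both; total languages such as Agda or Coq reject the first kind of definition:

```haskell
-- Covariant but NOT strictly positive: the recursive occurrence of
-- 'Weird' appears as a negative of a negative (domain of a domain).
newtype Weird = Weird ((Weird -> Int) -> Int)

-- Strictly positive, for contrast: the recursive occurrences never
-- appear to the left of any arrow.
data Tree = Leaf Int | Node Tree Tree

-- Plain structural recursion over the strictly positive type.
leaves :: Tree -> Int
leaves (Leaf _)   = 1
leaves (Node l r) = leaves l + leaves r
```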
Polarity is something else entirely.