If Inheritance is so bad, why does everyone use it? (buttondown.email)
from armchair_progamer@programming.dev to programming_languages@programming.dev on 11 Apr 14:51
https://programming.dev/post/12646234

This essay says that inheritance is harmful and, if possible, you should “ban inheritance completely”. You see these arguments a lot, alongside slogans like “prefer composition to inheritance”. Most of them argue that inheritance causes problems in practice, but that doesn’t preclude inheritance working in another context, maybe with a better language syntax. And it doesn’t explain why inheritance became so popular in the first place. I want to explore what’s fundamentally challenging about inheritance and why we all use it anyway.

#programming_languages

threaded - newest

onlinepersona@programming.dev on 11 Apr 16:23 next collapse

🙄 People who blindly say “inheritance is bad” or “composition over inheritance” and mean “no inheritance” are just like followers of religion: always repeating the same stuff without thinking and being completely convinced they are right. Nigh everything has a place and a valid use case, and just because you don’t see it doesn’t mean nobody else does.

Edit: Also “sum types” and “algebraic data types” are horrible names. Pretty much the equivalent of “imaginary numbers”. What the fuck is a “sum type”? How do you “add” types together? Adding numbers makes sense, it has a real world equivalent. Two balls in a cup, add one ball and you have three balls in a cup. Add color to water and you have colored water. Simple. But types? The fuck?

str | int is a sum type --> does that mean sum types are adding two spaces of possibilities together aka a union of two sets? The wikipedia article is so friggin bad at explaining it to people who aren’t in the know.

Anti Commercial-AI license

NostraDavid@programming.dev on 11 Apr 17:49 next collapse

just like followers of religion

We say that shit because we’ve touched code with deep inheritance hierarchies, and it was a god-damn pain to work with: changing a single line can mean you need to update a fuckton more, which breaks tests all god-damn over, which means you may have to refactor 50% of the application in one go.

Anyway, everything has its uses (even goto). It’s just there are typically better alternatives.

“sum types” and “algebraic data types” are horrible names.

Agreed, but they exist due to historic reasons, and now we’re stuck with them. Not much we can do there ¯\_(ツ)_/¯

Pretty much the equivalent of “imaginary numbers”.

Terrible name that just means “vertical number line” (with an added operation where you rotate the vector instead of adding or scaling), or “y-axis for the number line”. It’s funny because “Real” numbers are about as real as “Imaginary” numbers. Both are virtual (not physically existing).

str | int is a sum type

It just means that the variable can either be a str or an int. You’ve seen | used as “bitwise or”, right? Think in that direction.

PS: Stay away from Monads - they’ll give you an aneurysm. 😂

Kacarott@feddit.de on 12 Apr 11:21 collapse

In some langs like Python, | is also the “union” operator, to join sets and such, which I think is more directly related to types, since types are sets of possible values.

RonSijm@programming.dev on 11 Apr 20:13 next collapse

How do you “add” types together? Adding numbers makes sense, it has a real world equivalent. Two balls in a cup, add one ball and you have three balls in a cup. Add color to water and you have colored water. Simple. But types? The fuck?

It makes sense when using some fluent patterns and things like monads. For example:

User user = new User("Bob"); // User Class
UserWithPassword user = new User("Bob").WithPassword("Dylan123"); // UserWithPassword Type

A UserWithPassword type would then be a User object wrapper with some IWithPassword interface

Then you could create extension methods on IWithPassword objects and decorate those objects with password behavior

You can then have sort of polymorphic behavior by combining types together, and have different functionality available depending on which types you’ve added together
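A rough Python translation of the C# wrapper idea above (names hypothetical, just to sketch the pattern): the fluent call returns a richer type, and the extra behavior is only available once that type has been constructed.

```python
from dataclasses import dataclass

@dataclass
class User:
    name: str

    def with_password(self, password: str) -> "UserWithPassword":
        # wrap rather than mutate: the result is a richer type
        return UserWithPassword(self, password)

@dataclass
class UserWithPassword:
    user: User
    password: str

    # behavior only available once the password "capability" is attached
    def check(self, attempt: str) -> bool:
        return attempt == self.password

u = User("Bob").with_password("Dylan123")
print(u.check("Dylan123"))
```

The polymorphism comes from which wrapper types you've stacked, not from a class hierarchy.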

oessessnex@programming.dev on 11 Apr 21:54 next collapse

The sum and product types follow pretty much the same algebraic laws as natural numbers if you take isomorphism as equality.

Also class inheritance allows adding behaviour to existing classes, so it’s essentially a solution to the expression problem.

onlinepersona@programming.dev on 12 Apr 08:33 collapse

Yes, I know some of those words. Could you repeat that for those that aren’t mathematicians or in the know?

Anti Commercial-AI license

oessessnex@programming.dev on 12 Apr 14:26 collapse

As you already figured out the types are sets with a certain number of elements.

Two types are isomorphic if you can write a function that converts all elements of the first one into the elements of the second one and a function which does the reverse. You can then use this as the equality.

Types with the same number of elements are isomorphic, e.g. True | False and Left | Right: you can write a function that converts True to Left, False to Right, and a function that does the reverse.

Therefore you essentially only need types 0, 1, 2, 3, …, where type 0 has 0 elements, type 1 has 1 element, etc. and all others are isomorphic to one of these.

Let’s use (*) for the product and (+) for the sum, and letters for generic types. Then you can essentially manipulate types as natural numbers (the same laws hold, associativity, commutativity, identity elements, distributivity).

For example:

2 = 1 + 1 can be interpreted as Bool = True | False

2 * 1 = 2 can be interpreted as (Bool, Unit) = Bool

2 * x = x + x can be interpreted as (Bool, x) = This of x | That of x

o(x) = x + 1 can be interpreted as Option x = Some of x | None

l(x) = o(x * l(x)) = x * l(x) + 1 can be interpreted as List x = Option (x, List x)

l(x) = x * l(x) + 1 = x * (x * l(x) + 1) + 1 = x * x * l(x) + x + 1 = x * x * (x * l(x) + 1) + x + 1 = x * x * x * l(x) + x * x + x + 1, so a list is either empty, has 1 element, or 2 elements, … (if you keep substituting)
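The “types behave like natural numbers” claim can be checked by counting inhabitants. Here is a small Python sketch (my own encoding, not from the comment), representing a type as the set of its possible values:

```python
from itertools import product

# encode a type as the set of its possible values
Bool = {True, False}   # 2 elements
Unit = {()}            # 1 element

def sum_t(a, b):
    # sum type: tagged (disjoint) union, so |A + B| = |A| + |B|
    return {("L", x) for x in a} | {("R", y) for y in b}

def prod_t(a, b):
    # product type: pairs, so |A * B| = |A| * |B|
    return set(product(a, b))

assert len(sum_t(Bool, Unit)) == 3   # Option Bool: 2 + 1
assert len(prod_t(Bool, Bool)) == 4  # (Bool, Bool): 2 * 2
assert len(prod_t(Bool, Unit)) == 2  # 2 * 1 = 2, isomorphic to Bool
```

The cardinalities obey exactly the arithmetic laws in the comment above, which is where the names “sum” and “product” come from.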

For the expression problem, read this paper: doi:10.1007/BFb0019443

Kacarott@feddit.de on 12 Apr 07:56 next collapse

Saying “X is bad” or “Y over X” is not the same as saying “there is never a place for X”. I think JS is a pretty bad language, and prefer other languages to it, but I still recognise very obvious places where it should be used.

Maybe it depends on the way you understand types, but to me sum and product types are completely intuitive. A type is a set of possible values. A sum type is multiple sets added together (summed).

onlinepersona@programming.dev on 12 Apr 08:31 next collapse

A type is a set of possible values. A sum type is multiple sets added together (summed).

That makes sense for str | int, but how is an enum a “sum type”?

As for product types, in set theory a product of sets is a cartesian product. How is a

struct Dog {
  height: u8,
  length: u8,
  name: String,
}

impl Dog {
  fn bark() {
    println!("woof!");
  }
}

a product? What is it a product of? And why is the type itself a product, not Dog x Cat? Or is Dog x Cat indeed some kind of product that I’m not aware of but with another syntax?

Anti Commercial-AI license

Kacarott@feddit.de on 12 Apr 11:11 collapse

Well what is an enum except a chain of X | Y | Z | …. An enum can be any of its variants, and therefore the set of its possible values are just all possibilities of its variants added together.

Consider this enum:

enum Foo {
  A,
  B(bool),
}

The possible values for A are just one: A. The possible values for B are B( true ) and B( false ). So the total possible values for Foo are simply these sets combined: A or B( true ) or B( false ).

As for product types, what the product is of is still the same: the sets of possible values. Consider the possible values for the product of A and B. For every possible value of A, a value could be made by pairing it with any possible value of B (so, multiplication). If there are 3 possible values of A and 2 possible values of B, then the total number of possible combinations for the product type is 6.

In your example, Dog is a product of u8, another u8, and String. If you decide to add a Boolean field to this type, accordingly the size of the set of options would double, because for every possible Dog you currently have, two possibilities would be created, one with a true and one with a false.

As for your last question, some languages might use x as product type syntax, but because tuples and structs are inherently product types, most languages use those as syntax. For example, in Haskell the product type of Dog and Cat would be written as (Dog, Cat).
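Both counts can be checked directly. A Python sketch (my own encoding): enumerating the values of the `Foo` enum gives 1 + 2 = 3, and adding a bool field to a struct doubles its value count. (The `String` field is left out because its set of values is unbounded.)

```python
from itertools import product

# possible values of `enum Foo { A, B(bool) }`, as a tagged set
foo_values = {("A",)} | {("B", b) for b in (True, False)}
assert len(foo_values) == 3  # 1 variant + 2 variants

# a struct is a product of its fields: height: u8, length: u8
dog_fields = [range(256), range(256)]
before = len(list(product(*dog_fields)))                  # 256 * 256
after = len(list(product(*dog_fields, (True, False))))    # add a bool field
assert after == before * 2  # every existing Dog now comes in two flavors
```

So the enum's size is the sum of its variants' sizes, and the struct's size is the product of its fields' sizes.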

onlinepersona@programming.dev on 12 Apr 11:44 collapse

Saying “X is bad” or “Y over X” is not the same as saying “there is never a place for X”.

That rarely comes across online, where opinions are often stated dichotomously. Especially when speaking of inheritance, some crowds (I’ve noticed this in the Rust crowd) are vehemently against it and will do nigh anything to avoid framing a problem as one of inheritance, or a solution as one that could benefit from it. The makers of the Servo browser engine, which has to implement hierarchical structures (the DOM), ran up against this exact issue within the Rust community, where inheritance might as well equate to blasphemy.

I recognise that it’s probably a loud, zealous minority, but it makes interacting with the rest of the community rather difficult.

Anti Commercial-AI license

xigoi@lemmy.sdf.org on 12 Apr 11:18 collapse

Mathematically, the union of disjoint sets is often called the sum. This is a natural name because when you look at the number of elements (or in general, any measure), it will be the actual numeric sum.

onlinepersona@programming.dev on 12 Apr 11:33 collapse

Why the emphasis on “disjoint”? Aren’t integers a subset of floats? Would that mean then that int | float is incorrect?

Anti Commercial-AI license

xigoi@lemmy.sdf.org on 12 Apr 11:43 collapse

In most programming languages, integers are not considered a subset of floats, so when you have the type Int | Float, you can distinguish 3 from 3.0.

[deleted] on 11 Apr 16:23 next collapse

.

blackstampede@sh.itjust.works on 11 Apr 23:51 next collapse

Obviously, it’s not absolutely bad. I think the reason it became so popular is that it allows you to reuse and extend functionality that already exists. This gives you a higher development velocity, which is obviously something that companies prioritize, and developers want to provide.

BatmanAoD@programming.dev on 12 Apr 05:05 collapse

Right, that’s the third meaning he cites for inheritance.

blackstampede@sh.itjust.works on 12 Apr 10:23 collapse

Was half asleep when I wrote this lol

porgamrer@programming.dev on 12 Apr 00:30 next collapse

It does frustrate me that people say “composition over inheritance” and then act like the problem is solved.

For a start, dependency injection is an abomination. Possibly the single worst programming paradigm in the history of software development.

The only codebases I’ve seen that deliver on some of the promises of composition have been entity-component systems, but this concept hasn’t even matured to the point of being a first-class language construct yet, so the programming style is pretty janky and verbose.

balder1993@programming.dev on 12 Apr 01:55 collapse

dependency injection is an abomination

I don’t think so, dependency injection has made testing easier in all static typed code bases I worked on.
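The testability argument in a minimal Python sketch (hypothetical names, no DI framework involved): because the dependency is passed in through the constructor, a test can substitute a recording fake for the real thing.

```python
class SmtpMailer:
    def send(self, to: str, body: str) -> None:
        raise NotImplementedError("talks to a real SMTP server")

class Signup:
    # the mailer is injected, not constructed inside the class
    def __init__(self, mailer) -> None:
        self.mailer = mailer

    def register(self, email: str) -> None:
        self.mailer.send(email, "Welcome!")

# in a test, inject a fake that records calls instead of sending mail
class FakeMailer:
    def __init__(self) -> None:
        self.sent = []
    def send(self, to: str, body: str) -> None:
        self.sent.append((to, body))

fake = FakeMailer()
Signup(fake).register("bob@example.com")
assert fake.sent == [("bob@example.com", "Welcome!")]
```

Constructor injection itself needs no framework; the annotation-scanning containers criticized below are a separate (and separable) layer on top.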

FunctionalOpossum@programming.dev on 12 Apr 03:38 next collapse

The general idea is to not test things that need to be mocked: keep all logic out of those and keep them at the edges of the application. Then you test the remaining 90% of your code, which would ideally be pure functions that don’t need mocks.

The remaining 5% can be integration tests, which don’t care about DI.

In reality this doesn’t always work, because sometimes it’s better to write a complex query with logic and let your database do the processing.

okamiueru@lemmy.world on 12 Apr 05:18 collapse

How does DI make testing easier, or the lack of it make it harder?

Having a framework scan your code tree for magic annotations, configurations, etc., in order to instantiate “beans” or what not, is the worst imaginable tradeoff I’ve ever seen in software development. Calling it an abomination sounds exactly right.

BatmanAoD@programming.dev on 12 Apr 05:35 collapse

That quote from Meyer about contravariance not being “useful” is either a misreading of the cited source (the closest thing to a direct quote is about C++'s lack of variance) or, more charitably, a massive oversimplification. I don’t know Eiffel, but from Meyer’s speech, it sounds like Eiffel lets you choose between something like covariance and something like contravariance for each method. Unfortunately, the second link provided for context seems to be broken.