from ChubakPDP11@programming.dev to rust@programming.dev on 14 May 2024 19:03
https://programming.dev/post/14093734
VOID WRONG
It seems like I misunderstood some stuff.
Over the years, up until Rust, there were 4 ways to have a ‘safe’ language:
- A virtual machine (e.g. most languages today), whether it’s a high-level one (scripting languages) or a low-level one (JVM, CLR)
- What C++ and similar languages do: destructors, compile-time bounds checks, make a global heap and shit in it, make a local stack and piss in it, etc. (basically my AllocPPx.pl but baked into the language; see the sketch after this list)
- Bake the VM into the machine code. That’s what D and Go do, and I think Nim does too. This makes the point of your language being compiled entirely moot imo.
- Finally, the most ‘controversial’ decision for imperative-loving chuds: make a functional language. ML/Scheme/CLisp achieve memory safety through closures. Haskell achieves it through monads. Functional languages have a property which allows them to be both compiled into machine code or bytecode and interpreted like a scripting language.
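(A rough sketch of that second approach, written in Rust here since it borrows the same destructor/RAII idea; the Buffer type is just something I made up for illustration.)

struct Buffer {
    data: Vec<u8>,
}

impl Drop for Buffer {
    // Runs automatically when a Buffer goes out of scope, like a C++ destructor.
    fn drop(&mut self) {
        println!("freeing {} bytes", self.data.len());
    }
}

fn main() {
    let buf = Buffer { data: vec![0; 1024] };
    println!("using {} bytes", buf.data.len());
} // `buf` is dropped here; no GC or VM needed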
The problem with all these approaches is the trade-off between safety and speed. There’s another factor, too: low-level access to the system. Languages like OCaml came close to achieving a balance, and that’s why Rust based itself on it.
Most imperative languages have ‘operational semantics’, but not ‘denotational semantics’. You can’t describe what C does with math. What C does depends on the platform, the CPU, etc.
Rust’s safety is achieved by ‘flattening out’ the memory model of a functional language like OCaml. OCaml is a language with denotational semantics, because it’s a functional language. Rust is an imperative language, but it has denotational semantics, at least when it comes to memory management.
I am not going to even attempt to describe the denotational semantics of Rust because I am just an amateur and I don’t have a master’s in LDT. But if someone tries, they could.
I think people might have already done it. I am not sure.
If you tell me no, and Rust does not have denotational semantics, I swear on my great-great-grandfather’s barber’s grave that yes, it does!
So why do I say Rust ‘flattens out’ the functional model for memory management? It does at least with lifetimes. So imagine this: lifetimes are just ‘let’ bindings, but with a different syntax.
OCaml:
let rec factorial = function
  | 0 -> 1
  | n -> n * factorial (n - 1);;
Scheme:
; This uses `let` under the hood
(define (factorial n)
  (if (<= n 1)
      1
      (* n (factorial (- n 1)))))
So these two in Rust would be:
fn factorial<'a>(n: u32) -> u32 {
    match n {
        0 => 1,
        _ => n * factorial(n - 1),
    }
}
I know 'a is almost useless here, but what I meant was that 'a makes it similar to the ‘let’ bindings in the two prior examples!
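(And a rough sketch of my own where 'a actually does some work: the returned reference cannot outlive the data it borrows from, much like a value cannot escape the ‘let’ binding that introduced it.)

// `'a` ties the returned reference to the slice it borrows from.
fn first<'a>(items: &'a [u32]) -> &'a u32 {
    &items[0]
}

fn main() {
    let items = vec![1, 2, 3];   // the "binding"
    let head = first(&items);    // `head` lives inside the lifetime of `items`
    println!("{head}");
}                                // both end here; `head` can never outlive `items`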
The semantics here are clear, right?
But C:
int factorial(int n) {
    if (n == 0)
        return 1;
    else
        return n * factorial(n - 1);
}
We do have operational semantics here, but who’s to say what the denotational semantics are? What is a ‘function’ in C? Well, most C compilers translate it to an assembly subroutine, but what if our target does not support labels or subroutines?
You see what I am arriving at?
Conclusion
Rust is a semi-functional, semi-imperative language, but the difference between Rust and other languages with functional aspects is: denotational semantics!
Note: A language having lambda closures does not make it ‘functional’; you can do that in GNU C too, ffs! Denotational semantics make it functional.
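(A little Rust sketch of my own to illustrate: closures are right there, yet the code is as imperative as it gets, mutating captured state on every call.)

fn main() {
    let mut total = 0;
    // A lambda closure that captures `total` mutably; still plain imperative code.
    let mut add = |x: i32| total += x;
    add(1);
    add(2);
    add(3);
    println!("total = {total}"); // prints 6
}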
Can you explain what denotational semantics and operational semantics are? I tried reading the Wikipedia article but I don’t get it.
Same here
I read it as “monads, closures, and other functional concepts are mathematical (denotational) concepts that can be proven memory-safe, while ‘functions’ (operational, as in C) are not”.
I’m not convinced this operational/denotational distinction would even be useful in practice. Operational semantics are part of a formal proof using logical statements rather than mathematical objects, so the statement above wouldn’t even make sense.
If the result actually differs due to compilers deviating in different architectures, then what we can say is that the language/code is not as portable. But I don’t think this implies there’s no denotational semantics.
And if the end result doesn’t really differ (despite actually executing different instructions in different architectures) then… well, aren’t all compilers for all languages (including Rust) meant to use different instructions in different architectures as appropriate to give the same result?
Maybe I’m misunderstanding here, but my impression was that attempting to interpret the meaning of “what a function is in C” by looking at what instructions the compiler translates that to is more in line with an operational interpretation (you’d end up looking at sequential steps the machine executes one after the other), not a denotational one.
For a denotational interpretation of the meaning of that expression, shouldn’t you look at the inputs/outputs of the “factorial” operation to understand its mathematical meaning? The denotational semantics should be the same in all cases if they are all denotationally equivalent (i.e. referentially transparent), even if they might not be operationally equivalent.
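(A toy example of my own, in Rust since that’s the topic: these two are operationally different, one recurses and one loops, but as mappings from inputs to outputs they denote the same function.)

// Operationally different (recursion vs. iteration), denotationally the same
// mapping from n to n!.
fn factorial_rec(n: u64) -> u64 {
    if n == 0 { 1 } else { n * factorial_rec(n - 1) }
}

fn factorial_iter(n: u64) -> u64 {
    (1..=n).product()
}

fn main() {
    for n in 0..10 {
        assert_eq!(factorial_rec(n), factorial_iter(n));
    }
    println!("same denotation, different operational steps");
}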
Quite a lot of misunderstanding here I think.
Not at all. Compiling to machine code rather than bytecode isn’t about avoiding having a runtime. Hell, C has a runtime. It’s called the C RunTime (CRT).
No, they use garbage collection and bounds checking just like imperative languages. There are some slight differences, but those are nitpicks.
Absolutely nothing to do with being functional or not. Dart is not functional and it supports AoT and JIT compilation.
You absolutely can describe what C does with maths. I understand it’s very hard but check out CBMC for example.
So does Rust to some extent, e.g. usize.
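(For instance, a tiny sketch of my own: the width of usize depends on the compilation target, so the same program can print different things on different platforms.)

use std::mem::size_of;

fn main() {
    // 8 on a typical 64-bit target, 4 on a 32-bit one.
    println!("usize is {} bytes here", size_of::<usize>());
    println!("usize::MAX = {}", usize::MAX);
}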
I have no idea what you’re trying to say here. The lifetime is completely irrelevant here because you have no references.
The C specification says what a function is.
Not even remotely lol
First off, I apologize if I did not add a disclaimer saying I could be wrong. But given this, what exactly is the difference between denotational and operational semantics? I base what I said on my understanding of books about language theory, but it seems like I got the wrong gist. Where do you recommend I start?
PS: I’ll add a void to the post rn.
Honestly I have no idea. I started reading the Wikipedia page on denotational semantics and it doesn’t explain it very well, but as far as I can see, denotational and operational semantics are just two different ways of describing the same thing. It almost seems to be just a philosophical difference, like frequentist vs. Bayesian.
It seems like reading the book is the only way. I shall do just that.
I don’t want to be mean, but this is quite a lot of misinformation. For the benefit of other readers: this post makes a large number of specific claims very confidently, and almost all of them are wrong.
It takes time to make sense of this subject and it’s normal to get things wrong while learning. I just don’t want other people to go away thinking that closures have something to do with memory safety, etc.
First, tell me what you mean by ‘closure’. I did not mean ‘closure’ as an operational function literal. Keep that in mind.
@Ferk @FizzyOrange @posgamer @reflectedodds @ExperimentalGuy: There are errors in what I said. I admit I am not an expert. I will come back with better explanations for my stance, after I have read more about it.