In my programming language I have some sort of "borrowing" too (although it's named differently). But my language has no dynamic typing; only static typing is used, so all checks happen at compile time and have no runtime cost. Why bother with dynamic typing and pay runtime costs for it?
> The goal is that most of your code can have the assurances of static typing, but you can still opt in to dynamically-typed glue code to handle repls, live code reloading, runtime code generation, malleable software etc.
Dynamic typing is neat; I actually prefer it to static typing. Most people who think they have a problem with dynamic typing actually have a problem with weak typing.
The standard complaint, about pointless type errors that static analysis would have caught, has nothing to do with weak typing; neither does the other one, about your editor being unable to reliably list the available operations when you press `.` and read the autocomplete list. If you think the only thing people find wrong with dynamic typing is JS `==`, you're swinging at a strawman from a decade ago.
Yes to dynamic typing. Yes to static analysis.
What?
Technically, in a type theory context, there’s no such thing as “dynamic typing”. Types are a static, syntactic property of programs.
The correct term for languages that don’t have syntactic types is “untyped”.
> Most people who think they have a problem with dynamic typing actually have a problem with weak typing.
All people who say things like this have never studied computer science.
The term "unityped" is used as well, and at the typing level it also makes sense: there is a single type, call it `object`; each value carries its type object (the "tag") alongside it, and at runtime every operation checks whether that type object provides the operation being applied (or perhaps each value directly knows which operations it supports). I think I prefer this term.
"syntactic type" is a weird term to me, though. Is that in common use?
Dynamic typing is no typing.
The point of types is to prove the absence of errors. Dynamic typing just has these errors well-structured and early, but they're still errors.
> The point of types is to prove the absence of errors
Maybe for you. Originally static typing was to make the job of the compiler easier. Dynamic typing was seen as a feature that allows for faster prototyping.
And no, dynamic typing does not mean untyped. It just means type errors are checked at runtime instead of compile time.
You can have strongly typed dynamic languages. Common Lisp is a very good example.
Weak typing is a design mistake. Dynamic typing has its place: it lets you work with types that are impossible to express in most static type systems, while avoiding the bureaucratic overhead of prematurely declaring your types.
The best languages allow gradual typing: prototype first, then add types once the general shape of your program becomes clear.
Errors that you can recover from. I simply appreciate the added flexibility. Have you ever tried making a container of arbitrary types in C++?
You cannot do anything meaningful with a container of arbitrary types, it's just bad design.
If you want to apply the same operation on all of them, then they share some API commonality -- therefore you can use polymorphism or type erasure.
If they don't, you still need to know what types they are -- therefore you can use `std::variant`.
If they really are unrelated, why are you storing them together in the same container? Even then, it's trivial in C++: `std::vector<std::any>`.
If C++ was the only static type system I'd experienced, I would also think it was a bad idea. Have you ever used an ML-family language?
Nope. Closest thing I have used was probably Haskell.
Haskell ought to be good enough. Did you struggle with making your containers there?
Interestingly enough, I have never needed them there. Granted, I have written a few orders of magnitude less Haskell than I have C++. Still, the difference is worth interrogating (when I'm less sleep deprived).
Can someone help confirm whether I understand correctly the semantic difference between the final-line eval of `x^` vs. `x*`?
It seems like either one evaluates the contents of the `box`, and they would only differ if you tried to use `x` afterwards. Essentially, if you final-line eval `x^` and then decide you want to continue that snippet, you can't use `x` anymore because it's been moved. Awkwardly, it also hasn't been assigned, so I'm not sure the box is accessible anymore?