threaded - newest
I found math in physics to have this really fun duality of “these are rigorous rules that must be followed” and “if we make a set of edge case assumptions, we can fit the square peg in the round hole”
Also I will always treat the derivative operator as a fraction
2+2 = 5
…for sufficiently large values of 2
I was in a math class once where a physics major treated a particular variable as one because at cosmic scale the value of the variable basically doesn’t matter. The math professor both was and wasn’t amused.
Engineer. 2+2=5+/-1
Statistician: 1+1=sqrt(2)
pi*pi = g
units don’t match, though
Computer science: 2+2=4 (for integers at least; try this with floating point numbers at your own peril, you absolute fool)
comparing floats for exact equality should be illegal, IMO
0.1 + 0.2 = 0.30000000000000004
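A minimal Python sketch of that pitfall, assuming standard IEEE-754 doubles (ordinary `float`), plus the usual workaround of comparing within a tolerance:

```python
import math

a = 0.1 + 0.2
print(a)                                    # 0.30000000000000004
print(a == 0.3)                             # False: exact equality fails
print(math.isclose(a, 0.3))                 # True: compare within a relative tolerance
print(math.isclose(a, 0.3, rel_tol=1e-12))  # the tolerance can be made explicit
```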
Freshman engineer: wow, floating point numbers are great.
Senior engineer: actually the distribution of floating point errors is a mindfuck.
Professional engineer: the mean error for all pairwise 64 bit floating point operations is smaller than the Planck constant.
I mean as an engineer, this should actually be 2+2=4 +/-1.
Found the engineer
is this how Brian Greene was born?
I always chafed at that.
“Here are these rigid rules you must use and follow.”
“How did we get these rules?”
“By ignoring others.”
Little dicky? Dick Feynman?
It’s not even a fraction, you can just cancel out the two "d"s
"d"s nuts lmao
Except you can kinda treat it as a fraction when dealing with differential equations
Oh god this comment just gave me ptsd
Only for separable equations
And discrete math.
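For what it’s worth, separation of variables is exactly the move where the notation gets treated like a fraction; a standard textbook example (not tied to any particular comment above):

```latex
\frac{dy}{dx} = g(x)\,h(y)
\quad\Longrightarrow\quad
\int \frac{dy}{h(y)} = \int g(x)\,dx,
\qquad\text{e.g.}\quad
\frac{dy}{dx} = xy
\;\Longrightarrow\;
\ln\lvert y\rvert = \tfrac{x^2}{2} + C
\;\Longrightarrow\;
y = A\,e^{x^2/2}.
```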
Derivatives started making more sense to me after I started learning their practical applications in physics class. d/dx was too abstract when learning it in precalc, but once physics introduced d/dt (change with respect to time t), it made derivative formulas feel more intuitive, like “velocity is the change in position with respect to time, which is the derivative of position” and “acceleration is the change in velocity with respect to time, which is the derivative of velocity”.

Yeah, essentially, to me, calculus is like the study of slope, and of the slope of a slope, as with displacement, velocity, and acceleration.
Possibly you just had to hear it more than once.
I learned it the other way around since my physics teacher was speedrunning the math sections to get to the fun physics stuff and I really got it after hearing it the second time in math class.
But yeah: it often helps to have practical examples and it doesn’t get any more applicable to real life than d/dt.
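A small numerical sketch of that d/dt picture, assuming NumPy is available; the position data is just an illustrative constant-acceleration example:

```python
import numpy as np

g = 9.81                             # illustrative constant acceleration, m/s^2
t = np.linspace(0.0, 2.0, 2001)      # time grid, s
x = 0.5 * g * t**2                   # position samples, m

v = np.gradient(x, t)                # dx/dt: velocity is the slope of position
a = np.gradient(v, t)                # dv/dt: acceleration is the slope of velocity

print(v[1000], g * t[1000])          # velocity at t = 1 s matches g*t
print(a[1000])                       # ~9.81: the constant acceleration is recovered
```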
I always needed practical examples, which is why it was helpful to learn physics alongside calculus my senior year in high school. Knowing where the physics equations came from was easier than just blindly memorizing the formulas.
The specific example of things clicking for me was understanding where the “1/2” came from in distance = 1/2 (acceleration)(time)^2 (the simpler case of initial velocity being 0).
And then later on, complex numbers didn’t make any sense to me until phase angles in AC circuits showed me a practical application, and vector calculus didn’t make sense to me until I had to actually work out practical applications of Maxwell’s equations.
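That 1/2 drops straight out of integrating a constant acceleration twice (with initial velocity and position taken as zero, as in the simpler case mentioned):

```latex
v(t) = \int_0^t a\,d\tau = a\,t,
\qquad
x(t) = \int_0^t v(\tau)\,d\tau = \int_0^t a\,\tau\,d\tau = \tfrac{1}{2}\,a\,t^{2}.
```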
Look it is so simple, it just acts on an uncountably infinite dimensional vector space of differentiable functions.
fun fact: the vector space of differentiable functions (at least on compact domains) is actually of countable dimension.
still infinite though
Doesn’t BCT imply that infinite dimensional Banach spaces cannot have a countable basis?
Uhm, yeah, but there’s two different definitions of basis iirc. And i’m using the analytical definition here; you’re talking about the linear algebra definition.
So I call an infinite dimensional vector space countable- or uncountable-dimensional according to whether it has a countable or uncountable basis. What is the analytical definition? Or do you mean basis in the sense of topology?
Uhm, i remember there’s two definitions for basis.
The basis in linear algebra says that you can compose every vector v as a finite sum v = sum over i from 1 to N of a_i * v_i, where the a_i are arbitrary coefficients.
The basis in analysis says that you can compose every vector v as an infinite sum v = sum over i from 1 to infinity of a_i * v_i. So that makes a convergent series. It requires that a topology is defined on the vector space first, so convergence becomes well-defined. We call such a vector space of countably infinite dimension if a basis (v_1, v_2, …) exists such that every vector v can be represented as a convergent series.
I just checked and there are official names for it: the term Hamel basis refers to the basis in the linear algebra sense, and the term Schauder basis refers to the basis in the analysis sense.
Ah, that makes sense; the regular definition of basis is not much use in infinite dimensions anyway, as far as I recall. I wonder if differentiability is required for what you said, since polynomials on compact domains (probably required for uniform convergence or something) would also work for continuous functions, I think.
yeah, that’s exactly why we have an alternative definition for that :D
Differentiability is not required; what is required is a topology, i.e. a definition of convergence to make sure the infinite series are well-defined.
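A concrete instance of that series-style (Schauder) basis is the trigonometric system on [0, 2π]: a countable family whose partial sums converge to a given continuous periodic function. A rough numerical sketch, with an arbitrary test function and grid sizes chosen only for illustration:

```python
import numpy as np

# Truncated Fourier expansion of f on [0, 2*pi]: a countable family {1, cos(kx), sin(kx)}
# whose partial sums converge to f, i.e. the "convergent series" notion of basis above.
def fourier_partial_sum(f, x, n_terms):
    grid = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
    fx = f(grid)
    s = np.full_like(x, fx.mean())                   # constant term
    for k in range(1, n_terms + 1):
        ak = 2.0 * np.mean(fx * np.cos(k * grid))    # cosine coefficient
        bk = 2.0 * np.mean(fx * np.sin(k * grid))    # sine coefficient
        s += ak * np.cos(k * x) + bk * np.sin(k * x)
    return s

f = lambda x: np.abs(np.sin(x)) ** 1.5               # a continuous 2*pi-periodic test function
x = np.linspace(0.0, 2.0 * np.pi, 1000)
for n in (1, 5, 50):
    err = np.max(np.abs(fourier_partial_sum(f, x, n) - f(x)))
    print(n, err)                                    # error shrinks as more basis terms are kept
```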
When a mathematician wants to scare a physicist he only needs to speak about ∞
When a physicist wants to impress a mathematician he explains how he tames infinities with renormalization.
Only the Sith deal in ∞
…and Buzz Lightyear
Mathematicians will in one breath tell you they aren’t fractions, then in the next tell you dz/dx = dz/dy * dy/dx
Brah, chain rule & function composition.
Also multiplying by dx in diffeqs
vietnam flashbacks meme
Have you seen a mathematician claim that? Because there’s an entire algebra they created just so it becomes a fraction.
(d/dx)(x) = 1 = dx/dx
This is until you do multivariate functions. Then for f(x(t), y(t)) you get this: df/dt = ∂f/∂x * dx/dt + ∂f/∂y * dy/dt
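A symbolic spot-check of that multivariate chain rule, using SymPy and arbitrary illustrative choices of f, x(t), y(t):

```python
import sympy as sp

t = sp.symbols('t')
x_t = sp.sin(t)                 # illustrative x(t)
y_t = t**2                      # illustrative y(t)

x, y = sp.symbols('x y')
f = x**2 * y + sp.exp(y)        # illustrative f(x, y)

lhs = sp.diff(f.subs({x: x_t, y: y_t}), t)                        # d/dt of the composite
rhs = (sp.diff(f, x).subs({x: x_t, y: y_t}) * sp.diff(x_t, t)
       + sp.diff(f, y).subs({x: x_t, y: y_t}) * sp.diff(y_t, t))  # chain-rule expansion
print(sp.simplify(lhs - rhs))                                      # 0
```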
Not very good mathematicians if they tell you they aren’t fractions.
Is that Phil Swift from Flex Tape?
De dix, boss! De dix!
It was a fraction in Leibniz’s original notation.
And it denotes an operation that gives you that fraction in operational algebra…
Instead of making it clear that d is an operator, not a value, and thus the entire thing becomes an operator, physicists keep claiming that there’s no fraction involved. I guess they like confusing people.
Chicken thinking: “Someone please explain to this guy how we solve the Schrödinger equation”
Division is an operator
Why does using it as a fraction work just fine then? Checkmate, Maths!
It doesn’t. Only sometimes it does, because it can be seen as an operator involving a limit of a fraction and sometimes you can commute the limit when the expression is sufficiently regular
Added clarifying sentence: I speak from a physicist’s point of view.
What is Phil Swift going to do with that chicken?
They will repair it with Flex Seal, of course.
To demonstrate the power of Flex Seal, I SAWED THIS CHICKEN IN HALF!
This very nice Romanian lady that taught me complex plane calculus made sure to emphasize that e^j*theta was just a notation.
Then proceeded to just use it as if it was actually Euler’s number to the j arg. And I still don’t understand why, and under what cases I can’t just assume it’s the actual thing.
I’ve seen e^{d/dx}
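That one does have a standard meaning: expanding e^(a·d/dx) as its power series and applying it term by term gives the shift operator, e^(a·d/dx) f(x) = f(x + a) for nice enough f. A small SymPy sketch on an arbitrary polynomial (where the series terminates):

```python
import sympy as sp
from math import factorial

x, a = sp.symbols('x a')
f = x**3 - 2*x + 1                                   # arbitrary polynomial example

# Apply exp(a * d/dx) as the series sum_n (a^n / n!) * d^n f / dx^n.
shifted = sum(sp.diff(f, x, n) * a**n / factorial(n) for n in range(5))
print(sp.expand(shifted - f.subs(x, x + a)))         # 0: the operator shifts x by a
```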
Let’s face it: Calculus notation is a mess. We have three different ways to notate a derivative, and they all suck.
Calculus was the only class I failed in college. It was one of those massive 200 student classes. The teacher had a thick accent and handwriting that was difficult to read. Also, I remember her using phrases like “iff” that at the time I thought were her misspelling something, only to later realize it was shorthand for “if and only if”, so I can’t imagine how many other things just blew over my head.
I retook it in a much smaller class and had a much better time.
It is just a definition, but it’s the only definition of the complex exponential function which is well behaved and is equal to the real variable function on the real line.
Also, every identity about analytical functions on the real line also holds for the respective complex function (excluding things that require ordering). They should have probably explained it.
She did. She spent a whole class on the fundamental theorem of algebra, I believe? I was distracted though.
It legitimately IS exponentiation. Romanian lady was wrong.
e^(𝘪θ) is not just notation. You can graph the entire function e^(x+𝘪θ) across the whole complex domain and find that it matches up smoothly with both the version restricted to the real axis (e^x) and the imaginary axis (e^(𝘪θ)). The complete version is:
e^(x+𝘪θ) := e^x (cos(θ) + 𝘪 sin(θ))
Various proofs of this can be found on Wikipedia. Since these proofs just use basic calculus, this means we didn’t need to invent any new notation along the way.
I’m aware of that identity. There’s a good chance I misunderstood what she said about it being just a notation.
It’s not simply notation, since you can prove the identity from base principles. An alien species would be able to discover this independently.
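It can also just be checked numerically that the complex exponential and the cos + i·sin form agree; a quick sketch with Python’s standard library:

```python
import cmath, math

# exp(i*theta) versus cos(theta) + i*sin(theta) at a few sample angles
for theta in (0.0, 1.0, math.pi / 3, math.pi):
    lhs = cmath.exp(1j * theta)
    rhs = complex(math.cos(theta), math.sin(theta))
    print(theta, abs(lhs - rhs))   # differences are at the level of float rounding
```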
clearly, d/dx simplifies to 1/x
I still don’t know how I made it through those math curses at uni.
Calling them ‘curses’ is apt
Having studied physics myself I’m sure physicists know what a derivative looks like.
.
If not fraction, why fraction shaped?
We teach kids the derivative operator as being ’ or ·. Then we switch to the d/dx writing, which makes sense once you can use it properly, since it behaves enough like a fraction.

The thing is that it’s legit a fraction, and d/dx actually explains what’s going on under the hood. People interact with it as an operator because it’s mostly looking up common derivatives and using the properties.

Take for example ∫f(x) dx to mean “the sum (∫) of super-small sections of x (dx) multiplied by the value of f at that point (f(x))”. This is why there’s a dx at the end of all integrals.

In the same way, you can say that the slope at x is tiny f(x) divided by tiny x, or d*f(x) / dx, or more traditionally (d/dx) * f(x).
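That “sum of f(x) times a super-small dx” reading is exactly a Riemann sum; a tiny numerical sketch with an arbitrary integrand and interval:

```python
import numpy as np

# Approximate the integral of f over [0, 1] as a sum of f(x) * dx slices.
f = lambda x: x**2
n = 100_000
dx = 1.0 / n
x = (np.arange(n) + 0.5) * dx     # midpoint of each small slice
approx = np.sum(f(x) * dx)
print(approx)                      # ~0.33333..., close to the exact value 1/3
```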
.The other thing is that it’s legit not a fraction.
it’s legit a fraction, just the numerator and denominator aren’t numbers.
No 👍
try this on – Yes 👎
It’s a fraction of two infinitesimals. Infinitesimals aren’t numbers, however, they have their own algebra and can be manipulated algebraically. It so happens that a fraction of two infinitesimals behaves as a derivative.
Ok, but no. Infinitesimal-based foundations for calculus aren’t standard and if you try to make this work with differential forms you’ll get a convoluted mess that is far less elegant than the actual definitions. It’s just not founded on actual math. It’s hard for me to argue this with you because it comes down to simply not knowing the definition of a basic concept or having the necessary context to understand why that definition is used instead of others…
Why would you assume I don’t have the context? I have a degree in math. I could be wrong about this, I’m open-minded. By all means, please explain how infinitesimals don’t have a consistent algebra.
I also have a masters in math and completed all coursework for a PhD. Infinitesimals never came up because they’re not part of standard foundations for analysis. I’d be shocked if they were addressed in any formal capacity in your curriculum, because why would they be? It can be useful to think in terms of infinitesimals for intuition but you should know the difference between intuition and formalism.
I didn’t say “infinitesimals don’t have a consistent algebra.” I’m familiar with NSA and other systems admitting infinitesimal-like objects. I said they’re not standard. They aren’t.
If you want to use differential forms to define 1D calculus, rather than a NSA/infinitesimal approach, you’ll eventually realize some of your definitions are circular, since differential forms themselves are defined with an implicit understanding of basic calculus. You can get around this circular dependence but only by introducing new definitions that are ultimately less elegant than the standard limit-based ones.
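On the “infinitesimals have their own algebra” point a few comments up: one small, uncontroversial system where an infinitesimal-like element is manipulated purely algebraically is the dual numbers (an ε with ε² = 0), which is how forward-mode automatic differentiation computes derivatives. This is only an illustration of that algebraic manipulation, not a claim about foundations:

```python
from dataclasses import dataclass

@dataclass
class Dual:
    """Number of the form a + b*eps, where eps**2 == 0."""
    a: float   # real part
    b: float   # infinitesimal coefficient

    def __add__(self, other):
        return Dual(self.a + other.a, self.b + other.b)

    def __mul__(self, other):
        # (a1 + b1*eps)(a2 + b2*eps) = a1*a2 + (a1*b2 + a2*b1)*eps, since eps**2 = 0
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)

def f(x):
    return x * x * x + x          # f(x) = x^3 + x, built from + and * only

# Evaluate f at (3 + eps): the eps-coefficient of the result is f'(3).
print(f(Dual(3.0, 1.0)))          # Dual(a=30.0, b=28.0); indeed f'(3) = 3*9 + 1 = 28
```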
1/2 <-- not a number. Two numbers and an operator. But also a number.
In Comp-Sci, operators mean stuff like >>, *, /, + and so on. But in math, an operator is a (possibly symbolic) function, such as a derivative or matrix.

You’re not wrong, distinctively, but even in mathematics “/” is considered an operator.
en.m.wikipedia.org/wiki/Operation_(mathematics)
oh huh, neat. Always thought of those as “operations.”
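In that math sense of “operator” (a map that takes a function and returns a function), a crude numerical derivative operator can be written as a higher-order function; a sketch:

```python
from typing import Callable

def D(f: Callable[[float], float], h: float = 1e-6) -> Callable[[float], float]:
    """Derivative operator: takes a function, returns an approximation of its derivative."""
    return lambda x: (f(x + h) - f(x - h)) / (2.0 * h)

square = lambda x: x * x
d_square = D(square)       # the operator maps the function x^2 to (approximately) 2x
print(d_square(3.0))       # ~6.0
```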
Software engineer: 🫦
The world has finite precision. dx isn't a limit towards zero, it is a limit towards the smallest numerical non-zero. For physics, that's Planck, for engineers it's the least significant bit/figure. All of calculus can be generalized to arbitrary precision, and it's called discrete math. So not even mathematicians agree on this topic.
Headache for mathematicians
youtube.com/shorts/WSFkDNXOpMk
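The finite-precision point above shows up directly in a finite-difference experiment: shrinking the step first reduces the truncation error, then floating-point rounding takes over (the function and step sizes here are arbitrary):

```python
import math

# Forward-difference estimate of d/dx sin(x) at x = 1 for shrinking step sizes.
x = 1.0
exact = math.cos(x)
for h in (1e-2, 1e-5, 1e-8, 1e-11, 1e-14):
    approx = (math.sin(x + h) - math.sin(x)) / h
    print(f"h={h:.0e}  error={abs(approx - exact):.2e}")
# The error shrinks until h is around 1e-8, then grows again as rounding dominates.
```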
But df/dx is a fraction: it is a ratio between the differential of f and the standard differential of x. They both live in the tangent space TR, which is isomorphic to R.
What’s not a fraction is ∂f/∂x, but likely you already know that. This is akin to how you cannot divide two vectors.
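A standard illustration of why the partials really can’t be cancelled like fractions is the triple product (cyclic) rule: for a well-behaved constraint F(x, y, z) = 0,

```latex
\left(\frac{\partial x}{\partial y}\right)_{z}
\left(\frac{\partial y}{\partial z}\right)_{x}
\left(\frac{\partial z}{\partial x}\right)_{y} = -1,
```

which is -1, not the +1 that naive cancellation of the “fractions” would suggest.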