I was just curious if you were programming that completely yourself or also with the help of Chat GPT?
Quote:
Prompt:
Attempt a derivation from quantum information, then from statistical mechanics, and then from tensor network theory.
Response:
Excellent! Let's proceed step by step to attempt a derivation of the S Theory growth law and constraint-based dynamics from three foundational perspectives:
1. From Quantum Information Theory
Goal:
Derive the entanglement growth law: $ \frac{dS_{ij}}{d\tau} = \eta_{ij}(S_{\text{max}} - S_{ij}) $ as a coarse-grained description of entanglement dynamics between two subsystems ( i ) and ( j ).
Assumptions:
- Each "spatom" is a quantum subsystem (qubit or qudit).
- The system evolves under local unitary dynamics or open system decoherence.
- Entanglement entropy ( S_{ij} ) increases due to interaction or decoherence, approaching a maximum value ( S_{\text{max}} ) (e.g., the Page limit).
Derivation Sketch:
1. Entanglement Entropy Growth: In open quantum systems, the von Neumann entropy of a subsystem increases due to interaction with an environment: $ \frac{dS}{dt} \geq 0 $, saturating at thermal equilibrium or maximal entanglement.
2. Lindblad Dynamics: For a density matrix ( \rho ), the Lindblad equation governs non-unitary evolution: $ \frac{d\rho}{dt} = -i[H, \rho] + \sum_k \left( L_k \rho L_k^\dagger - \frac{1}{2} \{ L_k^\dagger L_k, \rho \} \right) $ This leads to entropy production and decoherence.
3. Phenomenological Model: Assume the entanglement entropy between ( i ) and ( j ) grows toward a saturation value ( S_{\text{max}} ), with a rate proportional to the remaining "headroom": $ \frac{dS_{ij}}{d\tau} \propto S_{\text{max}} - S_{ij} $ This is a first-order relaxation process, common in thermodynamics and information theory (see the numerical sketch after this list).
4. Interpretation of ( \eta_{ij} ): The coefficient ( \eta_{ij} ) encodes the interaction strength or entangling power between the two subsystems.
Conclusion: The growth law is a coarse-grained, thermodynamic approximation of entanglement dynamics under decoherence or local interaction, consistent with quantum information theory.
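As a quick sanity check on step 3, here is a minimal numerical sketch (the values of $ \eta_{ij} $ and $ S_{\text{max}} $ are made up for illustration; neither is fixed by the derivation above). Forward-Euler integration of the relaxation law reproduces its closed-form solution $ S_{ij}(\tau) = S_{\text{max}} - (S_{\text{max}} - S_{ij}(0)) e^{-\eta_{ij} \tau} $:

```python
import numpy as np

# Hypothetical parameters (not fixed by S Theory or the derivation above).
S_MAX = np.log(2.0)  # saturation entropy, e.g. one maximally entangled qubit pair
ETA = 0.5            # coupling strength eta_ij (made-up value)
DTAU = 0.01          # integration time step
STEPS = 2000

def relax(S0: float) -> np.ndarray:
    """Forward-Euler integration of dS/dtau = eta * (S_max - S)."""
    S = np.empty(STEPS + 1)
    S[0] = S0
    for n in range(STEPS):
        S[n + 1] = S[n] + DTAU * ETA * (S_MAX - S[n])
    return S

numeric = relax(0.0)
tau_end = DTAU * STEPS
exact = S_MAX - (S_MAX - 0.0) * np.exp(-ETA * tau_end)
print(f"numeric S({tau_end:.0f}) = {numeric[-1]:.6f}, exact = {exact:.6f}")
```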
2. From Statistical Mechanics
Goal:
Derive the same growth law from entropy maximization and relaxation dynamics.
Assumptions:
- The entanglement network is a statistical ensemble of configurations.
- Each link ( S_{ij} ) represents a microstate variable.
- The system evolves toward maximum entropy under constraints.
Derivation Sketch:
1. Entropy Functional: Define a global entropy functional: $ \mathcal{S} = -\sum_{ij} p_{ij} \log p_{ij} $ where ( p_{ij} ) is the probability of a given entanglement configuration.
2. Relaxation Dynamics: Use Onsager's reciprocal relations or Fokker-Planck dynamics: $ \frac{dS_{ij}}{d\tau} = -\frac{\partial \mathcal{F}}{\partial S_{ij}} $ where ( \mathcal{F} ) is a free-energy-like functional: $ \mathcal{F} = \mathcal{E} - T \mathcal{S} $
3. Assume a Quadratic Energy Penalty: Let: $ \mathcal{E} = \sum_{ij} \frac{1}{2} \eta_{ij} (S_{\text{max}} - S_{ij})^2 $ Then (differentiating, as spelled out below): $ \frac{dS_{ij}}{d\tau} = \eta_{ij}(S_{\text{max}} - S_{ij}) $
Conclusion: The growth law emerges as gradient descent on a free energy landscape, where entanglement increases to maximize entropy and minimize informational tension.
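The differentiation step in point 3, made explicit: $ \frac{\partial \mathcal{E}}{\partial S_{ij}} = -\eta_{ij}(S_{\text{max}} - S_{ij}) $, so $ \frac{dS_{ij}}{d\tau} = -\frac{\partial \mathcal{F}}{\partial S_{ij}} = \eta_{ij}(S_{\text{max}} - S_{ij}) + T \frac{\partial \mathcal{S}}{\partial S_{ij}} $. The quoted result therefore additionally assumes that the entropy-gradient term $ T \, \partial \mathcal{S} / \partial S_{ij} $ is negligible or absorbed into $ \eta_{ij} $, an assumption the sketch leaves implicit.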
3. From Tensor Network Theory
Goal:
Interpret the growth law and constraint terms as arising from tensor network optimization.
Assumptions:
- The entanglement network is a tensor network (e.g., MERA, PEPS).
- Each link ( S_{ij} ) corresponds to a bond dimension or entanglement weight.
- The network evolves to minimize a cost function under topological constraints.
Derivation Sketch:
1. Tensor Network Cost Function: Define a cost function for optimizing the network: $ \mathcal{C} = \sum_{ij} \left[ \frac{1}{2} \eta_{ij}(S_{\text{max}} - S_{ij})^2 + \lambda \cdot \text{Constraint}(S_{ij}) \right] $
2. Gradient Descent Dynamics: The evolution of ( S_{ij} ) follows: $ \frac{dS_{ij}}{d\tau} = -\frac{\partial \mathcal{C}}{\partial S_{ij}} = \eta_{ij}(S_{\text{max}} - S_{ij}) - \lambda \frac{\partial \text{Constraint}}{\partial S_{ij}} $ (a toy numerical sketch follows after this list).
3. Constraint Terms:
- Frustration: penalizes irregular entanglement flow.
- Asymmetry: encodes directional bias (e.g., spin).
- Loop closure: rewards topological consistency (e.g., particle stability).
Conclusion: The growth law and constraint dynamics can be derived from variational optimization of a tensor network, where the entanglement structure evolves to minimize a global cost under geometric and topological constraints.
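A toy numerical sketch of step 2, under invented assumptions: a ring of links, uniform $ \eta_{ij} $, and a quadratic smoothness penalty standing in for the unspecified Constraint term (none of these choices come from S Theory itself; they just make the gradient descent concrete):

```python
import numpy as np

# Invented parameters for a small ring of links (illustration only).
N_LINKS = 8
S_MAX = np.log(2.0)
ETA = 0.5    # uniform eta_ij (hypothetical)
LAM = 0.1    # constraint weight lambda (hypothetical)
DTAU = 0.05
STEPS = 1000

rng = np.random.default_rng(0)
S = rng.uniform(0.0, 0.2, N_LINKS)  # initial link entropies

def constraint_grad(S: np.ndarray) -> np.ndarray:
    """Gradient of a toy smoothness penalty sum_i (S_i - S_{i+1})^2 on a ring,
    an invented stand-in for the unspecified Constraint(S_ij) term (roughly a
    'frustration'-style penalty on irregular entanglement flow)."""
    left = np.roll(S, 1)
    right = np.roll(S, -1)
    return 2.0 * (2.0 * S - left - right)

# Gradient descent: dS/dtau = -dC/dS = eta*(S_max - S) - lambda*dConstraint/dS
for _ in range(STEPS):
    S += DTAU * (ETA * (S_MAX - S) - LAM * constraint_grad(S))

print("final link entropies:", np.round(S, 4))
print("saturation value S_max:", round(S_MAX, 4))
```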
Would you like me to formalize these derivations into a document or visual summary?
I'm too dumb to know any difference.
lb3 said:
I've asked AI to derive the terms from first principles as well, but because most of the S-theory terms are related to each other, the derivations just walk from one term to the next and back to the first term. It's not actually deriving anything except proving that S-theory is internally consistent, which I suppose has its own value.
Are you a physicist or just a really smart guy? Or both?
Philip J Fry said:
I haven't used Copilot that much, but a concern I always have is that it's just regurgitating what it's reading instead of actually putting thought into it.
I'll be updating the white paper this weekend to touch up some wording and show that I can get a 0.0 kg, spin-1 SEM to propagate at exactly 1 link per tick.
JJxvi said:
I was just curious if you were programming that completely yourself or also with the help of Chat GPT?
You might find this interesting:
Philip J Fry said:
I've definitely felt more at ease talking in entropic time than local time from a conceptual standpoint. Not sure if I'm making that term up, but it seems natural to have a master clock.
Cool, I might check it out later.
PeekingDuck said:
The video explains that they misunderstand the paradox and then goes on to talk about time a bit.
This is a great example of what you can do with AI. You don't have to take it at its word. You can correct it and lead it in the right direction. Right now, anyway, it isn't perfect, but it's a fantastic tool when used properly, which you are clearly doing. Amazing work!
Philip J Fry said:
JJxvi said:
I was just curious if you were programming that completely yourself or also with the help of Chat GPT?
I guess I'll address this directly. What exactly are you getting at? Whatever you think the shortcomings of LLMs are, you're going to have to let that go. That I used an openly available tool that anyone can use to develop a math model? Guilty as charged. Do I take whatever code it gives me without checking it and putting it through the wringer? No, I do not. ChatGPT gets a lot of stuff right, but it can hallucinate and sometimes straight up lie to you. I've lost count of the number of times it would show me data and say "There it is, the creation of the universe!" only to show a straight line. Don't think that didn't weigh on me heavily throughout this whole process. I fought through that nonsense daily.
I'm no Einstein and I don't pretend to be. I didn't get a PhD in astrophysics or quantum mechanics and slave away on this stuff like a lot of really smart people have. They are the ones who should be taking this work and running with it. Not me.
If this theory holds, which I actually think it will, there's going to be an explosion of entanglement-based studies out there very soon.
lb3 said:
You need to work on redshifts for entanglements traveling from high-density to low-density regions. Then do Doppler shifts.
You don't need cosmological scales, at least not right now.
Philip J Fry said:
Yeah, the Doppler effect is my next phase, but I think before I do that, I need to get my photon to behave. Right now, if I add a proximity constraint, I get exactly 1 link per tick. I'm trying to decide if that's cheating, though, because it feels like it is.
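A minimal sketch of what a proximity-constrained, one-link-per-tick hop could look like (entirely hypothetical; the actual simulation code isn't shown in this thread):

```python
# Hypothetical toy model: a massless, spin-1 excitation ("SEM") hopping along
# a 1D chain of links. The "proximity constraint" here restricts each tick's
# update to adjacent links only, which forces a speed of exactly one link per
# tick. This is a guess at the mechanism, not code from the white paper.
N_LINKS = 20
TICKS = 10

position = 0  # index of the link currently carrying the excitation
for tick in range(1, TICKS + 1):
    # Proximity constraint: the excitation may only move to a neighbouring link.
    position = min(position + 1, N_LINKS - 1)
    print(f"tick {tick}: excitation at link {position}")

# The printed trace advances one link per tick: the constraint makes that rate
# exact by construction, which is why it can feel baked in rather than emergent.
```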
I was also struggling a little conceptually with the scale. You think it's safe to say I need cosmological scales? Basically shoot a photon from one large moving mass to the other?