• 0 Posts
  • 68 Comments
Joined 3 years ago
Cake day: June 12th, 2023

  • It sounds like you’re asking why local hidden variables can’t explain the experimental results. But a huge part of the video is spent explaining exactly that, so I’m assuming that isn’t what you mean, and I’m not sure what it is you’re asking. Could you elaborate on how what you’re suggesting differs from the local hidden variable explanation?

    ETA: I think you are asking about hidden variables but maybe don’t realize it, because the idea was brushed over in the video. When she discusses the possible strategies for how the particles would decide their orientation, she says there are only two strategies that work. Yours is one that doesn’t, and here’s why.

    Say your electron is created with 0 degree spin. When deflected with a 0 degree detector, the electron goes up and the positron goes down 100% of the time. Great. But what about the 120 degree detector? Well, the electron goes up 3/4 of the time and down 1/4, while the positron goes up 1/4 and down 3/4. But this can’t be: if each particle responds independently, then 3/16 of the time (3/4 × 1/4) both would go up, breaking the rule that whenever the electron goes up, the positron must go down. So in order for it to work, they’d need to pick one of the strategies she talks about in the video: agree on how they’d respond to each detector orientation separately, rather than just agreeing on a spin direction at creation.
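
    The counting argument above is easy to sanity-check. This is just a sketch of the arithmetic, using the 3/4 and 1/4 figures from the comment, not a simulation of any real experiment:

```python
# Both particles share one hidden variable: "our spin is 0 degrees."
# Each side then responds to its own 120-degree detector independently.
p_e_up = 3 / 4   # electron goes up at the 120-degree detector
p_p_up = 1 / 4   # positron goes up at the 120-degree detector

# If the responses are independent, perfect anticorrelation breaks:
p_both_up = p_e_up * p_p_up                    # 3/16: both up -- forbidden
p_both_down = (1 - p_e_up) * (1 - p_p_up)      # 3/16: both down -- forbidden
p_opposite = 1 - p_both_up - p_both_down       # only 5/8, not 1

print(p_both_up, p_both_down, p_opposite)  # 0.1875 0.1875 0.625
```

    Opposite outcomes happen only 5/8 of the time instead of always, so agreeing on a single spin direction at creation can’t reproduce the perfect anticorrelation; the particles would need a per-orientation strategy.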


  • VoterFrog@lemmy.world to Science Memes@mander.xyz · Look at this. Or don't. · 10 days ago

    I don’t think it’s wrong, just simplified. You don’t really have to touch the photon, just affect its wave function: the statistical description of the photon’s movement through space and time. Detectors, polarizers, anything that can be used to tell exactly which path the photon took through the slits will do this. Quantum eraser experiments just show that you can “undo the damage” to the wave function, so to speak: you can get the wave function back into an unaltered state, but by doing so you lose the which-way information.


  • > Yes, but this is assuming an objective, universal frame of reference, and that’s not really a thing.

    Not really. Nothing I said has any dependence on a universal clock.

    > It’s true that there could be some alien halfway across the observable universe that could observe the stars that have exited our observable universe. But, we could not observe the alien observing them, because information still can’t travel faster than the speed of light.

    Right, and this is my point. Any philosophical theory that has anything to do with the observable universe is inherently self-centered. Not even Earth-centered. Not even conscious-being-centered. Literally self-centered. The observable universe is subjective. And so that puts it in the class of philosophies that insist that the universe arises from your own consciousness.

    Which is not to invalidate it, but it’s not objective, and it has nothing to do with science.


  • VoterFrog@lemmy.world to Buy European@feddit.uk · Imperial Wastes So Much Time · 3 months ago

    But when I do that, what I will actually be converting isn’t length to length. I’ll be figuring out how many sleepers per km, how many rail segments per km, how many buckets of spikes per km. None of those will be simple metric unit conversions.

    This is actually the primary strength of imperial, and the impetus behind most of its conversion ratios: base 10 is terrible at dividing evenly. But if you have a mile of railroad, you can place your rails and spikes at regular intervals of almost any whole-foot length and come out even.
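
    The divisibility claim is easy to check: 5280 (feet per mile) factors as 2⁵ · 3 · 5 · 11, so it has far more whole-number divisors than a round base-10 count like 1000. A small sketch:

```python
def divisors(n):
    """Return all whole-number divisors of n."""
    return [d for d in range(1, n + 1) if n % d == 0]

feet_per_mile = 5280   # 2^5 * 3 * 5 * 11
metric_round = 1000    # 2^3 * 5^3

print(len(divisors(feet_per_mile)))  # 48 ways to split a mile evenly
print(len(divisors(metric_round)))   # only 16 for a round base-10 count
```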


  • It definitely depends on the person, but some people really are getting 90% of their coding done with AI. I’m one of them. I have over a decade of experience, and I consider coding to be the easiest but most laborious part of my job, so it’s a welcome change.

    One thing that’s really changed the game recently is RAG and tools with very good access to our company’s data. Good context makes a huge difference in the quality of the output. For my latest project, I’ve been using three internal tools: an LLM browser plugin, which has access to our internal data and lets you pin pages (and docs) you’re reading for extra focus; a coding assistant, which also has access to internal data and repos but is trained for coding (unfortunately, it’s not integrated into our IDE); and the IDE agent, which has RAG where you can pin specific files, but without broader access to our internal data its output is a lot poorer.

    So my workflow is something like this: My company is already pretty diligent about documenting things so the first step is to write design documentation. The LLM plugin helps with research of some high level questions and helps delve into some of the details. Once that’s all reviewed and approved by everyone involved, we move into task breakdown and implementation.

    First, I ask the LLM plugin to write a guide for how to implement a task, given the design documentation. I’m not interested in code at this point, just a translation of design ideas and requirements into actionable steps. (Even if you don’t have the same setup as me, give this a try: asking an LLM to reason its way through a guide helps it handle much more complicated tasks.) Then I pass that guide to the coding assistant for code creation, including any relevant files as context. That code gets copied into the IDE. The whole process takes a couple of minutes at most, and that gets you like 90% of the way there.

    Next is to get things compiling. This is either manual or done in iteration with the coding assistant. Then, before I worry about correctness, I focus on the tests. Get a good test suite up and it’ll catch any problems and let you refactor without causing regressions. Again, this may be partially manual and partially iteration with LLMs. Once the tests look good, it’s time to get them passing. And this is the point where I start really reading through the code and getting things from 90% to 100%.

    All in all, I’m still applying a lot of professional judgement throughout the whole process. But I get to focus on the parts where that judgement is actually needed and not the more mundane and toilsome parts of coding.
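
    The workflow above can be sketched as a plain pipeline. Everything here is hypothetical scaffolding: `ask` stands in for whichever LLM tool handles each step (the browser plugin, the coding assistant), since the actual internal tools aren’t public:

```python
def ask(role: str, prompt: str) -> str:
    """Hypothetical stand-in for an LLM call; a real version would route
    to a RAG-backed planning or coding tool with the right context."""
    return f"[{role} output for: {prompt[:40]}...]"

def implement_task(task: str, design_doc: str, files: list[str]) -> str:
    # 1. Turn design ideas into actionable steps -- no code yet.
    guide = ask("planner", f"Write an implementation guide for '{task}' "
                           f"given this design doc:\n{design_doc}")
    # 2. Hand the guide plus relevant files to the coding assistant.
    code = ask("coder", f"Implement this guide:\n{guide}\n"
                        f"Context files: {', '.join(files)}")
    # 3. The output is ~90% there; compiling, tests, and the final
    #    correctness pass stay with the human.
    return code

print(implement_task("add retry logic", "Design: ...", ["client.py"]))
```

    The key design choice is the separate planning step: reasoning through a guide first, then generating code from the guide, rather than asking for code directly.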


  • As far as I understand as a layman, the measurement tool doesn’t really matter. Any observer needs to interact with the photon in order to observe it and so even the best experiment will always cause this kind of behavior.

    With no observer: the photon, acting as a wave, passes through both slits simultaneously and on the other side of the divider, starts to interfere with itself. Where the peaks or troughs of the wave combine is where the photon is most likely to hit the screen in the back. In order to actually see this interference pattern we need to send multiple photons through. Each photon essentially lands in a random location and the pattern only reveals itself as we repeat the experiment. This is important for the next part…

    With an observer: the photon still passes through both slits. However, the interaction with the observer’s wave function causes the part of the photon’s wave in that slit to offset in phase. In other words, the peaks and troughs are no longer in the same place. So now the interference pattern that the photon wave forms with itself still exists but, critically, it looks completely different.

    Now we repeat with more photons. BUT each time you send a photon through, it comes out with a different phase offset. Why? Because the outcome of the interaction with the observer is governed by quantum randomness. So every photon winds up with a different interference pattern, which means there’s no consistency in where they land on the screen. It just looks like random noise.

    At least that’s what I recall from an episode of PBS Space Time.
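
    That phase-scrambling picture can be checked numerically. Here’s a sketch under idealized assumptions: two unit-amplitude paths, with the screen position reduced to the relative phase between the slits, and “observation” modeled as a random phase kick on one path:

```python
import cmath
import random

random.seed(0)

def screen_pattern(n_photons, marked):
    """Average intensity at several screen positions; each position
    corresponds to a relative phase theta between the two slits."""
    thetas = [k * 2 * cmath.pi / 8 for k in range(9)]
    pattern = []
    for theta in thetas:
        total = 0.0
        for _ in range(n_photons):
            # "Observing" the photon adds a random phase kick to one slit.
            delta = random.uniform(0, 2 * cmath.pi) if marked else 0.0
            amp = 1 + cmath.exp(1j * (theta + delta))
            total += abs(amp) ** 2
        pattern.append(total / n_photons)
    return pattern

unmarked = screen_pattern(5000, marked=False)  # fringes range from 0 to 4
marked = screen_pattern(5000, marked=True)     # washes out to ~2 everywhere
print(max(unmarked) - min(unmarked))  # large: strong fringes
print(max(marked) - min(marked))      # near zero: flat noise
```

    With the which-way marking on, each photon’s pattern is shifted by its own random phase, so the ensemble average flattens out: exactly the “random noise” described above.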


  • VoterFrog@lemmy.world to Science Memes@mander.xyz · On Black Holes... · 4 months ago

    Unfortunately, for a stellar-mass black hole, the horrible death would come long before you even reach the event horizon: the tidal forces would tear you apart and, eventually, tear apart the molecules that used to make you up. (For a supermassive black hole, the tides at the horizon are gentle, and the shredding only happens well inside.) Every depiction of crossing a black hole’s event horizon just pretends this doesn’t happen, for the sake of demonstration.
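
    The mass dependence is worth a back-of-the-envelope check. A common Newtonian estimate puts the tidal (stretching) acceleration across a body of height h at roughly 2GMh/r³, and with r set to the Schwarzschild radius 2GM/c² that scales as 1/M². A rough sketch (idealized estimate for a 2 m body; the 10-solar-mass and Sgr-A*-like masses are illustrative):

```python
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8       # speed of light, m/s
M_SUN = 1.989e30  # solar mass, kg
H = 2.0           # rough height of a person, m

def tidal_at_horizon(mass_kg):
    """Newtonian tidal acceleration across H at the Schwarzschild radius."""
    r_s = 2 * G * mass_kg / C**2
    return 2 * G * mass_kg * H / r_s**3

stellar = tidal_at_horizon(10 * M_SUN)  # ~1e8 m/s^2: shredded well outside
smbh = tidal_at_horizon(4e6 * M_SUN)    # ~1e-3 m/s^2: barely noticeable
print(f"{stellar:.1e} {smbh:.1e}")
```

    For the stellar-mass case the stretching at the horizon is millions of g, so the tearing starts far outside; for the supermassive case it is far below 1 g at the horizon.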


  • I don’t think it’s working. LLMs don’t have any trouble parsing it.

    > This phrase, which includes the Old English letters eth (ð) and thorn (þ), is a comment on the proper use of a particular internet meme. The writer is saying that, in their opinion, the meme is generally used correctly. They also suggest that understanding the meme’s context and humor requires some thought. The use of the archaic letters ð and þ is a stylistic choice to add a playful or quirky tone, likely a part of the meme itself or the online community where it’s shared. Essentially, it’s a statement of praise for the meme’s consistent and thoughtful application.


  • The problem with these long-term journeys is that it’s entirely possible the probe could get halfway there by the time we develop the technology to make another probe that’s twice as fast and cheaper. Or maybe we make other discoveries and find that we don’t need the data the probe is equipped to gather. We’re not really near the limits of propulsion and space engineering yet, so it doesn’t make a ton of sense to invest in something with such a distant payoff when it’s somewhat likely to be outdone before then.
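
    This is sometimes called the “wait calculation”: if propulsion keeps improving, a later, faster launch can overtake an earlier one. A toy sketch, assuming cruise speed doubles every 25 years (the starting speed and doubling time are illustrative assumptions, not real projections):

```python
def arrival_year(launch_year, distance_ly, v0=0.001, doubling_years=25):
    """Year a probe arrives, if cruise speed (as a fraction of c)
    doubles every `doubling_years` between now and launch."""
    v = v0 * 2 ** (launch_year / doubling_years)
    return launch_year + distance_ly / v

# A 100-light-year trip: launching 50 years later still wins.
print(arrival_year(0, 100))    # leave now at 0.1% c: arrives year 100000
print(arrival_year(50, 100))   # leave in 50 years at 0.4% c: year 25050
```

    Under these assumptions, the probe launched half a century later arrives roughly 75,000 years earlier, which is the whole argument against a distant payoff in one line of arithmetic.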