• 0 Posts
  • 172 Comments
Joined 2 years ago
Cake day: July 5th, 2023


  • It wasn’t the buffer itself that drew power. It was the need to physically spin the disc faster in order to read ahead and fill the buffer, so it drew more power even if you left the player physically stable. And if it actually did skip while reading, it had to seek back to where it was and rebuild the buffer all over again.
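    A toy model of that tradeoff, with made-up numbers for the read and playback rates (none of these are real drive specs): the drive only needs the faster, power-hungry spin while the read-ahead buffer is filling, and every skip forces a re-seek that drains the buffer and keeps the drive from ever settling down.

    ```typescript
    // Toy model of a CD player's anti-skip buffer -- all numbers are illustrative.
    const READ_RATE = 2;   // seconds of audio read per second at the faster spin
    const PLAY_RATE = 1;   // playback always drains 1 second of audio per second
    const BUFFER_CAP = 40; // seconds of audio the RAM buffer can hold
    const SEEK_COST = 3;   // seconds of buffered audio burned re-locating the laser after a skip

    let buffered = 0;

    function tick(skipped: boolean): void {
      // Spinning fast (the power-hungry part) is only needed while the buffer isn't full.
      const spinningFast = buffered < BUFFER_CAP;
      if (skipped) {
        // The laser lost its place: nothing new is read, and re-seeking costs playback time.
        buffered = Math.max(0, buffered - SEEK_COST);
      } else if (spinningFast) {
        buffered = Math.min(BUFFER_CAP, buffered + READ_RATE);
      }
      // Playback drains the buffer regardless; audio only cuts out if this hits zero.
      buffered = Math.max(0, buffered - PLAY_RATE);
      console.log(`buffered=${buffered}s, fastSpin=${spinningFast}`);
    }

    // Jogging the player: a skip every fourth second keeps the drive in its high-power state.
    for (let t = 0; t < 12; t++) tick(t % 4 === 3);
    ```

    With no skips the buffer eventually fills and the drive can drop back to a slower, cheaper spin; with regular skips it never gets there.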




  • “They’re actually only about 48% accurate, meaning that they’re more often wrong than right and you are 2% more likely to guess the right answer.”

    Wait, what are the Bayesian priors? Are we assuming the baseline is 50% true and 50% false? And what is the error rate in false positives versus false negatives? Because all of these matter for determining, after the fact, how much probability to assign to the test being right or wrong.

    Put another way, imagine a stupid device that just says “true” literally every time. If I hook that device up to a person who never lies, then that machine is 100% accurate! If I hook that same device to a person who only lies 5% of the time, it’s still 95% accurate.

    So what do you mean by 48% accurate? That’s not enough information to do anything with.
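    Here’s the arithmetic with made-up numbers (a 50/50 base rate plus assumed true-positive and false-positive rates), just to show that the headline accuracy and the after-the-fact probability are two different calculations:

    ```typescript
    // Toy polygraph numbers -- the base rate and error rates below are assumptions for illustration.
    const pLie = 0.5;             // prior: fraction of tested statements that are actually lies
    const pFlagGivenLie = 0.6;    // sensitivity: P(machine says "lie" | actual lie)
    const pFlagGivenTruth = 0.64; // false-positive rate: P(machine says "lie" | actual truth)

    // Overall accuracy: lies correctly flagged plus truths correctly not flagged.
    const accuracy = pLie * pFlagGivenLie + (1 - pLie) * (1 - pFlagGivenTruth);

    // Bayes' rule: given the machine flags a statement, how likely is it actually a lie?
    const pFlag = pLie * pFlagGivenLie + (1 - pLie) * pFlagGivenTruth;
    const pLieGivenFlag = (pLie * pFlagGivenLie) / pFlag;

    console.log(`accuracy         = ${(accuracy * 100).toFixed(1)}%`);      // 48.0%
    console.log(`P(lie | flagged) = ${(pLieGivenFlag * 100).toFixed(1)}%`); // ~48.4%
    ```

    Rerun it with pLie = 0.05 (the person who rarely lies) and the same machine: accuracy falls to about 37% and P(lie | flagged) to under 5%, which is exactly why a single accuracy figure tells you nothing without the priors and the error split.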


  • Yeah, from what I remember of what Web 2.0 was, it was services that could be interactive in the browser window, without loading a whole new page each time the user submitted information through HTTP POST. “Ajax” was a hot buzzword among web/tech companies.

    Flickr was mind-blowing in that you could edit photo captions and titles without navigating away from the page. Gmail could refresh the inbox without reloading the sidebar. Google Maps was impressive in that you could drag the map around and zoom within the window while it fetched the graphical tiles it needed on demand. (There’s a rough sketch of the pattern at the end of this comment.)

    Or maybe Web 2.0 included the ability to implement state on top of the stateless HTTP protocol. You could log into a page and it would show only the new/unread items for you personally, rather than showing literally every visitor the exact same thing at the exact same URL.

    Social networking became possible with Web 2.0 technologies, but I wouldn’t define Web 2.0 as inherently social. User interaction with a service was the core, and whether the service connected user to user through that service’s design was kinda beside the point.
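    For anyone who wasn’t doing web work then, the pattern in rough modern form looks like this. The endpoint and element ID are hypothetical, and fetch stands in for the XMLHttpRequest object the Ajax era actually used; the point is that only data crosses the wire and only one part of the page gets patched.

    ```typescript
    // Ajax-style partial update -- /api/inbox and the element ID are hypothetical examples.
    interface InboxItem {
      id: number;
      subject: string;
      unread: boolean;
    }

    async function refreshInbox(): Promise<void> {
      // Ask the server for just the data, not a whole new HTML page.
      const res = await fetch("/api/inbox", { headers: { Accept: "application/json" } });
      const items: InboxItem[] = await res.json();

      // Patch only the inbox list; the sidebar, header, and everything else stay untouched.
      const list = document.getElementById("inbox-list");
      if (!list) return;
      list.innerHTML = items
        .map((i) => `<li class="${i.unread ? "unread" : ""}">${i.subject}</li>`)
        .join("");
    }

    // Poll every 30 seconds: the "inbox refreshes without reloading the page" effect.
    setInterval(() => void refreshInbox(), 30_000);
    ```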




  • “taking a shot at installing a new OS”

    To be clear, I had been on Ubuntu for about 4 years by then, having switched when 6.06 LTS came out. And several years before that, I had installed Windows Me, the XP beta, and the first official XP release on a home-built machine, the first computer that was actually mine, bought with student loan money paid out because my degree program required all students to have their own computer.

    But the freedom to tinker with software was by no means the same as the flexibility to acquire spare hardware. Computers were really expensive in the ’90s and still pretty expensive in the 2000s, especially laptops, at a time when color LCD technology was still pretty new.

    That’s why I assumed you were a different age from me: either old enough to have been tinkering with computers long enough to have spare parts, or young enough to still live with middle-class parents who had computers and Internet at home.


  • That’s never really been true. It’s a cat and mouse game.

    If Google actually used its 2015 or 2005 algorithms as written, but on a 2025 index of webpages, that ranking system would be dogshit because the spammers have already figured out how to crowd out the actual quality pages with their own manipulated results.

    Tricking the 2015 engine with 2025 SEO techniques is easy. The problem is that Google hasn’t actually been on the winning side of ranking for quality for maybe 5-10 years, and quietly outsourced its ranking to the signals of the big user sites: Pinterest, Quora, Stack Overflow, Reddit, even Twitter to some degree. If a result is responsive and ranks highly on those user-voted sites, then it’s probably a good result. And Google got away with that methodology just long enough for each of those services to drown in their own SEO spam, so those services are all much worse than they were in 2015, and ranking search results based on them no longer produces good results.

    There’s no turning back. We need to adopt new rankings for the new reality, not try to return to when we were able to get good results.


  • I can’t tell if you were rich, or just not the right age to appreciate that it wasn’t exactly common for a young adult, fresh out of college, to have spare computers lying around (much less the budget to spare on a $300-500 secondary device for browsing the internet). If I upgraded computers, I sold the old one used if it was working, or for parts if it wasn’t. I definitely wasn’t packing up secondary computers to bring with me when I moved cities for a new job.

    Yes, I had access to a work computer at the office, but it would’ve been weird to bring in my own computer and work on it after hours, while using the Internet from my cubicle for personal stuff.

    I could’ve asked a roommate to borrow their computer or to look stuff up for me, but that, like going to the office or a library to use the internet there, would’ve been a lot more friction than I was willing to put up with for a side project at home.

    And so it’s not that I think it’s weird to have a secondary internet-connected device before 2010. It’s that I think it’s weird to not understand that not everyone else did.






  • All the other answers here are wrong. It was the Boeing 737 MAX.

    They fit bigger, more fuel-efficient engines on it, which changed the flight characteristics compared to previous 737s. And so rather than have pilots recertify on it as a new model (lots of flight hours, can’t switch back), they designed software to basically make the aircraft seem to behave like the old model.

    And so a bug in the cheaper version of the software, combined with a faulty sensor, would cause the software to take over, override the pilots, and push the nose down instead of letting them pull up. Two crashes happened within 5 months, to aircraft that were pretty much brand new.

    It was grounded for a while as Boeing fixed the software and hardware issues, and, more importantly, updated all the training and reference materials for pilots so that they were aware of this basically secret setting that could kill everyone.
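    Not Boeing’s code, obviously, but a sketch of the kind of cross-check the publicly described fix added: instead of acting on a single angle-of-attack vane, the system compares both and stands down when they disagree. The 5.5-degree threshold comes from press reporting on the fix; the trigger angle is a made-up placeholder.

    ```typescript
    // Illustrative only -- not Boeing's implementation. The idea: never command automatic
    // nose-down trim on the word of a single sensor that might be broken.
    const AOA_DISAGREE_LIMIT_DEG = 5.5; // reported disagreement threshold in the post-grounding fix
    const AOA_TRIGGER_DEG = 15;         // hypothetical angle of attack that would trigger trim

    function commandNoseDownTrim(aoaLeftDeg: number, aoaRightDeg: number): boolean {
      // If the two vanes disagree badly, one of them is probably faulty: do nothing
      // and leave the pilots in control rather than trust a single bad reading.
      if (Math.abs(aoaLeftDeg - aoaRightDeg) > AOA_DISAGREE_LIMIT_DEG) return false;

      // Only trim when both sensors independently indicate an excessive angle of attack.
      return aoaLeftDeg > AOA_TRIGGER_DEG && aoaRightDeg > AOA_TRIGGER_DEG;
    }

    // A stuck vane reading 70 degrees against a sane 5 degrees: no automatic trim.
    console.log(commandNoseDownTrim(70, 5)); // false
    ```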




  • It’s worth pointing out that browser support is a tiny, but important, part of overall ecosystem support.

    TIFF is the dominant standard for certain hardware and workflows that digitize physical documents, or publish/print digital files as physical prints. But most browsers don’t bother to display TIFF, because it’s not a good format for web use.

    Note also that non-backwards-compatible TIFF extensions are usually what cameras capture as “raw” image data and what image development software stores as “digital negatives.”

    JPEG XL is trying to replace TIFF at the interface between the physical analog world and the digital files we use to represent that image data. I’m watching this space in particular, because the original web-era formats of JPEG, PNG, and GIF (and newer web-oriented formats like WebP and AVIF) aren’t trying to do anything with physical sensors, scans, prints, etc.

    Meanwhile, JPEG XL is trying to replace JPEG on the web, with much more efficient, much higher-quality compression for photographic images. And it’s trying to replace PNG for lossless compression.

    It’s trying to do it all, so watching to see where things get adopted and supported will be interesting. Apple appears to be going all in on JXL, from browser support to file manager previews to actual hardware sensors storing raw image data in JXL. Adobe supports it, too, so we might start to see full JXL workflows from image capture to postprocessing to digital/web publishing to full-blown paper/print publishing.
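    If you want to watch that adoption in your own files, the published magic numbers are enough to tell these formats apart. Here’s a quick sniffer (Node-flavored TypeScript; signatures taken from the respective format specs, and note JPEG XL has two: a bare codestream and an ISO-BMFF container):

    ```typescript
    import { readFileSync } from "node:fs";

    // Identify an image file by its leading magic bytes.
    function sniffImageFormat(path: string): string {
      const b = readFileSync(path);
      const ascii = (start: number, len: number) => b.subarray(start, start + len).toString("latin1");

      if (b[0] === 0xff && b[1] === 0xd8 && b[2] === 0xff) return "JPEG";
      if (ascii(0, 8) === "\x89PNG\r\n\x1a\n") return "PNG";
      if (ascii(0, 4) === "GIF8") return "GIF";
      if (ascii(0, 4) === "II*\0" || ascii(0, 4) === "MM\0*") return "TIFF (or a TIFF-based raw file)";
      if (ascii(0, 4) === "RIFF" && ascii(8, 4) === "WEBP") return "WebP";
      if (ascii(4, 8) === "ftypavif") return "AVIF";
      if (b[0] === 0xff && b[1] === 0x0a) return "JPEG XL (bare codestream)";
      if (ascii(0, 12) === "\0\0\0\x0cJXL \r\n\x87\n") return "JPEG XL (ISO-BMFF container)";
      return "unknown";
    }

    // Usage: pass a file path on the command line, e.g. a camera raw file or a scanner's TIFF output.
    const target = process.argv[2];
    if (target) console.log(sniffImageFormat(target));
    ```

    Camera raw formats and DNG “digital negatives” show up as TIFF here, which is exactly the point made above about them being TIFF extensions.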