

It wasn’t the buffer itself that drew power. It was the need to physically spin the disc faster in order to read ahead and fill the buffer. So it drew more power even if you kept the player perfectly still. And then, if a read actually did skip, it would need to seek back to where it was and build the buffer up again.
I’m not sure that would work. Admins need to manage their instance users, yes, but they also need to look out for the posts and comments in the communities hosted on their instance, and be one level of appeal above the mods of those communities. Including the ability to actually delete content hosted in those communities, or cached media on their own servers, in response to legal obligations.
Yes, it’s the exact same practice.
The main difference, though, is that Amazon as a company doesn’t rely on this “just walk out” business in any way that matters to its overall financial situation. So Amazon churns along, while that one insignificant business unit gets quietly shut down.
For this company in this post, though, they don’t have a trillion dollar business subsidizing the losses from this AI scheme.
They’re actually only about 48% accurate, meaning they’re wrong more often than they’re right, and a blind coin-flip guess would beat them by 2 percentage points.
Wait, what are the Bayesian priors? Are we assuming that the baseline is 50% true and 50% false? And what is its error rate in false positives versus false negatives? All of these matter for determining, after the fact, how much probability to assign to the test being right or wrong.
Put another way, imagine a stupid device that just says “true” literally every time. If I hook that device up to a person who never lies, then that machine is 100% accurate! If I hook that same device to a person who only lies 5% of the time, it’s still 95% accurate.
So what do you mean by 48% accurate? That’s not enough information to do anything with.
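As a rough illustration of why the base rate and the false-positive/false-negative split matter, here’s a minimal Bayes’ theorem sketch. All the numbers are made up just to show the mechanics:

```python
def p_lie_given_flagged(base_rate, sensitivity, specificity):
    """P(actually lying | device says 'lie'), via Bayes' theorem.

    base_rate:   prior probability that any given statement is a lie
    sensitivity: P(device flags it | it's a lie)
    specificity: P(device stays quiet | it's the truth)
    """
    true_positives = sensitivity * base_rate
    false_positives = (1 - specificity) * (1 - base_rate)
    return true_positives / (true_positives + false_positives)

# Same device, very different meaning depending on the prior:
print(p_lie_given_flagged(base_rate=0.50, sensitivity=0.48, specificity=0.48))  # ~0.48
print(p_lie_given_flagged(base_rate=0.05, sensitivity=0.48, specificity=0.48))  # ~0.05
```

A bare “48% accurate” tells you almost nothing until you pin down those three numbers.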
Yeah, from what I remember of what Web 2.0 was, it was services that could be interactive in the browser window, without loading a whole new page each time the user submitted information through HTTP POST. “Ajax” was a hot buzzword among web/tech companies.
Flickr was mind-blowing in that you could edit photo captions and titles without navigating away from the page. Gmail could refresh the inbox without reloading the sidebar. Google Maps was impressive in that you could drag the map around and zoom within the window while it fetched the tiles it needed on demand.
Or maybe Web 2.0 included the ability to layer state on top of the stateless HTTP protocol. You could log into a page and it would show only the new/unread items for you personally, rather than showing literally every visitor the exact same thing for the exact same URL.
Social networking became possible with Web 2.0 technologies, but I wouldn’t define Web 2.0 as inherently social. User interaction with a service was the core, and whether the service connected users to each other through its design was kinda beside the point.
Teslas will (allegedly) start on a small, low-complexity street grid in Austin; exact size TBA. Presumably, they’re mapping the shit out of it and throwing compute power at analyzing their existing data for that postage stamp.
Lol where are the Tesla fanboys insisting that geofencing isn’t useful for developing self driving tech?
Wouldn’t a louder room raise the noise floor, too, so that any quieter signal couldn’t be extracted from the noisy background?
If we were to put a microphone and a recording device in that room, could any amount of audio processing extract the sound of the small server from the background noise of all the bigger servers? Because if not, then that’s not just an auditory processing problem, but a genuine example of destruction of information.
“taking a shot at installing a new OS”
To be clear, I had been on Ubuntu for about 4 years by then, having switched when 6.06 LTS came out. And several years before that, I had installed Windows Me, the XP beta, and the first official XP release on a home-built machine, the first computer that was actually mine, bought with student loan money paid out because my degree program required all students to have their own computer.
But the freedom to tinker with software was by no means the same as the flexibility to acquire spare hardware. Computers were really expensive in the ’90s and still pretty expensive in the 2000s, especially laptops, at a time when color LCD technology was still pretty new.
That’s why I assumed you were a different age from me, either old enough to have been tinkering with computers long enough to have spare parts, or young enough to still live with middle class parents who had computers and Internet at home.
That’s never really been true. It’s a cat and mouse game.
If Google actually used its 2015 or 2005 algorithms as written, but on a 2025 index of webpages, that ranking system would be dogshit because the spammers have already figured out how to crowd out the actual quality pages with their own manipulated results.
Tricking the 2015 engine with 2025 SEO techniques is easy. The problem is that Google hasn’t actually been on the winning side of ranking quality for maybe 5-10 years, and quietly outsourced its rankings to those of the big user-driven sites: Pinterest, Quora, Stack Overflow, Reddit, even Twitter to some degree. If a responsive result ranks highly on those user-voted sites, then it’s probably a good result. And they got away with that switch just long enough for each of those services to drown in its own SEO spam, so those services are all much worse than they were in 2015. And now ranking search results based on those sites no longer produces good results.
There’s no turning back. We need to adopt new rankings for the new reality, not try to return to when we were able to get good results.
I can’t tell if you were rich, or just not the right age to appreciate that it wasn’t exactly common for a young adult, fresh out of college, to have spare computers lying around (much less the budget to spare on a $300-500 secondary device for browsing the internet). If I upgraded computers, I sold the old one used if it was working, or for parts if it wasn’t. I definitely wasn’t packing up secondary computers to bring with me when I moved cities for a new job.
Yes, I had access to a work computer at the office, but it would’ve been weird to bring in my own computer to work on after hours, while using the internet from my cubicle for personal stuff.
I could’ve asked a roommate to borrow their computer or to look stuff up for me, but that, like going to the office or a library to use that internet, would’ve been a lot more friction than I was willing to put up with, for a side project at home.
And so it’s not that I think it’s weird to have a secondary internet-connected device before 2010. It’s that I think it’s weird to not understand that not everyone else did.
Getting a smartphone in 2010 was what gave me the confidence to switch to Arch Linux, knowing I could always look things up on the wiki as necessary.
I also think my first computer that could boot from USB was the one I bought in 2011. For everything before that, I had to physically burn a CD.
Plus, if the front end is hashing with each keystroke, I feel like the final hash is far, far less secure against any observer/eavesdropper.
If the password is hunter2, and the front end sends a hash for h, then hu, then hun, etc., then someone observing all those hashes only has to check each hash against a single new keystroke, then move on to the next hash with all but the last character already known. That’s a much smaller search space for each hash.
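Here’s a minimal sketch of that attack, assuming the front end sends an unsalted SHA-256 of the partial password after every keystroke (the hash function is just a stand-in for whatever the client actually uses):

```python
import hashlib
from string import printable

# What an eavesdropper would capture: one hash per keystroke of "hunter2".
observed = [hashlib.sha256(prefix.encode()).hexdigest()
            for prefix in ("h", "hu", "hun", "hunt", "hunte", "hunter", "hunter2")]

recovered = ""
for h in observed:
    # Only one unknown character per hash: try every printable character
    # appended to the prefix recovered from the earlier hashes.
    for c in printable:
        if hashlib.sha256((recovered + c).encode()).hexdigest() == h:
            recovered += c
            break

print(recovered)  # hunter2
```

Instead of brute-forcing the whole password at once, the observer only ever has to guess one character at a time, roughly a hundred tries per keystroke.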
You’re like, so close.
Don’t reuse passwords between different services, or after a password reset. You’re aware of exactly why that’s a bad practice (a compromise of any one of those services, or an old leaked database from one of them, will expose that password), so why knowingly bear that risk?
My gigabit connection is good enough for my NAS, as the read speed of the hard drives themselves tends to top out around a gigabit per second anyway. But I could see some kind of SSD NAS benefiting from a faster LAN connection.
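Rough back-of-the-envelope numbers, using typical drive specs rather than measurements from my setup:

```python
# Ballpark figures for illustration only.
GIGABIT_LAN_MBPS = 1000 / 8     # ~125 MB/s of raw line rate
HDD_SEQ_READ_MBPS = 150         # single spinning disk, sequential reads
SATA_SSD_READ_MBPS = 550        # typical SATA SSD

print(HDD_SEQ_READ_MBPS / GIGABIT_LAN_MBPS)   # ~1.2x: HDD and gigabit are in the same ballpark
print(SATA_SSD_READ_MBPS / GIGABIT_LAN_MBPS)  # ~4.4x: an SSD NAS could actually use a faster link
```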
All the other answers here are wrong. It was the Boeing 737-Max.
They fit bigger, more fuel-efficient engines on it, which changed the flight characteristics compared to previous 737s. And so rather than have pilots recertify on it as a new model (lots of flight hours, can’t switch back and forth), they designed software to basically make the aircraft seem to behave like the old model.
And so a bug in the cheaper version of the software, combined with a faulty sensor, would cause the software to take over, override the pilots, and push the nose down while they were trying to pull up. Two crashes happened within 5 months, to aircraft that were pretty much brand new.
It was grounded for a while as Boeing fixed the software and hardware issues and, more importantly, updated all the training and reference materials for pilots so that they were aware of this basically secret system that could kill everyone.
It’s not a movie, but the Fallout series had a great first season, and I’m looking forward to the second.
Instead, I actively avoided conversations with my peers, particularly because I had nothing in common with them.
Looking at your own social interactions with others, do you now consider yourself to be socially well adjusted? Was the “debating child in a coffee shop” method actually useful for developing the social skills that matter in adulthood?
I have some doubts.
It’s worth pointing out that browser support is a tiny but important part of overall ecosystem support.
TIFF is the dominant standard in certain hardware and workflows for digitizing physical documents, or for publishing/printing digital files as physical prints. But most browsers don’t bother to support displaying TIFF, because it’s not a good format for web use.
Note also that non-backwards-compatible TIFF extensions are usually what cameras capture as “raw” image data and what image development software stores as “digital negatives.”
JPEG XL is trying to replace TIFF at the interface between the physical analog world and the digital files we use to represent that image data. I’m watching this space in particular, because the original web generation formats of JPEG, PNG, and GIF (and newer web-oriented formats like webp and avif) aren’t trying to do anything with physical sensors, scans, prints, etc.
Meanwhile, JPEG XL is also trying to replace JPEG on the web, with much more efficient, much higher quality compression for photographic images. And it’s trying to replace PNG for lossless compression.
It’s trying to do it all, so watching to see where things get adopted and supported will be interesting. Apple appears to be going all in on JXL, from browser support to file manager previews to actual hardware sensors storing raw image data in JXL. Adobe supports it, too, so we might start to see full JXL workflows from image capture to postprocessing to digital/web publishing to full blown paper/print publishing.
The iPhone 16 supports shooting in JPEG XL, and I expect that will be huge for hardware/processing adoption.