• 0 Posts
  • 67 Comments
Joined 2 years ago
Cake day: June 20th, 2023

  • I had an opposing shower thought the other day, so I’m going to play devil’s advocate on this one.

    I think in a world of rational, good-faith actors (which I’m not arguing we live in), this is both by-design, and optimal at society scale.

    Think about the things you’re good at, and the things you’re not so good at. I’m really good with computers; my time is most efficiently spent troubleshooting and building technology stacks. This skillset is in demand enough that I make a comfortable living doing it.

    I’m comfortable enough that I have time to learn other skills when needed, but not comfortable enough to hire out all the otherwise commodity tasks I need done. A leak in the roof, a sink that needs replacing, some Cat6 through the walls, leveling a floor before replacing broken tile from the ’80s… You get the idea. I can do drywall and other general-contractor work, but I’m not great at it. It takes me longer to end up with a worse end product than a professional would, and I don’t enjoy doing it.

    Every Saturday I spend doing drywall could, at society scale, be much more efficiently spent building a k8s cluster or helping a scientist build software for research. Just like the guy doing my drywall should have a me on the other end of a phone when he needs a new laptop or his mother gets malware.

    When people hit “rich”, the unspoken meaning is supposed to be that their time is valuable enough that society deems it more useful to spend it outside of commodity tasks. That seems like a good fundamental design… say what you will about its current real-world implementation.




  • Generally, power supplies are most electrically efficient at 20-60% utilization, so there’s no issue with over-provisioning power, other than the (generally minor) upfront extra cost, which might very well pay for itself in the first months/years of usage. I’ll take a look and see what I can find on those sites.

    Edit: okay, trying to shop through Google Translate and a currency calculator is genuinely painful, so I’m going to teach a man to fish instead. This is what I should have done from the start anyway.

    Power supply: Anything from a decent brand at >450W. A 650W or 850W unit is totally fine if it’s at a decent price. Power supplies only draw the power they need; they don’t constantly pull 850W if the downstream components aren’t calling for it.

    CPU: The 12400 is a fine CPU for what you’re doing. You’ll transcode 720p no problem; at 1080p, maybe a single stream in real time. I wouldn’t bank on more than that. The only downside here is the relatively shallow core count if you ever expand into other workloads. Without access to used Xeon boards/CPUs, it might be a reasonable choice though. What I would say is look for something older but with more cores/threads if you can; for example, a 10900 or even a 10700K would probably be a better server CPU than a 12400.

    Memory: DDR4 platforms are a great way to save money, as long as you aren’t planning on expanding into inferencing on CPU. Get as much as you can; 32-64GB of DDR4 should be dirt cheap, especially if you find a cheap motherboard with 4 memory slots.

    Motherboard: If you want this thing to be versatile, you want two PCIe slots. Old full-sized ATX gaming boards are the way to go here: one slot for an HBA, one slot for a GPU, and that should be all you need. Bonus points for as many open SATA ports as possible; 6-8 is pretty typical on 10th-12th gen gaming ATX boards.

    GPU: A discrete GPU will be much more efficient at transcoding than an iGPU, especially the iGPUs in older Intel CPUs. A 1050, 2060, 3050, basically anything from the 10-series onward, has a decent NVENC encoder that works well with Plex/Jellyfin. My go-to is generally old workstation cards; I use a P620 myself and it handles a single 4K encode job no problem. I’m not sure if they’re viably purchasable anywhere in your area, but I’d definitely look out for a P620, P1000, or T400. Great value in those cards.

    Drives/HBA: There are inexpensive LSI HBA cards to expand how many drives you can attach to a system if you need them; all you need is a spare PCIe slot and a place to physically mount the drives. The cheapest way to start here is to look for a motherboard with 4-6 SATA ports and use those. Hardware RAID is functionally dead these days in the real world; just use ZFS or mdadm under Linux to create an array with your desired level of resiliency/capacity (rough sketch of what that looks like at the end of this comment).

    Once you’ve priced out what it would cost to buy all of this new, look for prebuilt gaming PCs and office PCs that could be expanded to fit these requirements. Prices look kind of steep on those markets you listed, but I’m sure something exists if you look hard enough.
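
    To make the ZFS/mdadm point above concrete, here’s a rough sketch of what creating an array could look like. It’s written as a small Python wrapper around the CLI tools purely for illustration; the device names (/dev/sdb through /dev/sde), pool name, and array name are placeholders for whatever your build ends up with, and you’d pick either ZFS or mdadm for a given set of disks, not both.

    ```python
    # Illustrative only: assumes Linux with root access, ZFS tools and mdadm
    # installed, and four spare data disks at /dev/sdb../dev/sde (placeholders).
    import subprocess

    DISKS = ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]

    def zfs_raidz1(pool: str = "tank") -> None:
        # Single-disk redundancy across the four drives, auto-mounted at /tank
        subprocess.run(["zpool", "create", pool, "raidz1", *DISKS], check=True)

    def mdadm_raid5(array: str = "/dev/md0") -> None:
        # Roughly equivalent mdadm RAID5 array; format and mount it yourself
        subprocess.run(
            ["mdadm", "--create", array, "--level=5",
             f"--raid-devices={len(DISKS)}", *DISKS],
            check=True,
        )
        subprocess.run(["mkfs.ext4", array], check=True)

    if __name__ == "__main__":
        zfs_raidz1()  # or mdadm_raid5(); pick one approach per set of disks
    ```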






  • Remote assistance is not RDP; it’s Microsoft’s support hook over the Internet, which requires telemetry to function. It is distinctly separate from, and not a prerequisite for, RDP.

    The rest of that I’ll have to look into, but disabling remote assistance seems sane in that context.

    I wonder if other parts of the shutdown dialog or the hover context menu have phone-home functions that can only be disabled in roundabout ways; it wouldn’t be the first time. It would not surprise me to learn that the “which apps are preventing shutdown” dialog triggers a call to phone that data home.





  • “Simple majority” is a technical term in this context; it refers to anything over 50%. In the context of the Senate, that’d be a 51/49 split, or a 50/50 split broken by the VP.

    There are some procedural measures that explicitly require only this simple majority to pass; most bills require 60/40 in practice because that’s the threshold needed to break a procedural filibuster. At the very least, they require a simple majority plus zero members of the body opting to invoke the filibuster.

    Say what you will about the people we’ve currently elected; I just stand by it being a sound procedural practice.




  • Anecdotally, I use it a lot, and I feel like the responses I get are better when I’m polite. I have a couple of theories as to why.

    1. More tokens in the context window of your question, and a clear separator between ideas in a conversation make it easier for the inference tokenizer to recognize disparate ideas.

    2. Higher-quality datasets contain American boomer/millennial notions of “politeness”, and when responses are structured in kind, they’re more likely to contain tokens from those higher-quality datasets.

    I haven’t mathematically proven any of this within the llama.cpp tokenizer, but I strongly suspect I could at least prove a correlation between polite input tokens and which datasets are represented in the output tokens.
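
    For what it’s worth, here’s roughly where I’d start poking at point 1: just comparing how a terse prompt and a polite prompt tokenize. This is only a sketch, assuming the llama-cpp-python bindings and some local GGUF model; the model path and the two example prompts are placeholders, and actually demonstrating the correlation would take far more than token counts.

    ```python
    # Sketch only: assumes `pip install llama-cpp-python` and a local GGUF model
    # (the path below is a placeholder, not a specific recommendation).
    from llama_cpp import Llama

    # vocab_only=True loads just the tokenizer/vocabulary, not the full weights
    llm = Llama(model_path="./models/some-model.gguf", vocab_only=True)

    prompts = {
        "terse": b"fix this python script",
        "polite": b"Hi! Could you please help me figure out why this Python script fails? Thanks!",
    }

    for label, text in prompts.items():
        tokens = llm.tokenize(text)  # returns a list of token ids
        print(f"{label}: {len(tokens)} tokens -> {tokens[:8]}...")
    ```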


  • There are a lot of moderates who are hesitant about AOC. She’s expressed ideas like getting rid of the filibuster, which would be great while “your” party is in charge, but the filibuster is one of the very few checks available to a minority party to halt truly controversial legislation. The extra steps are kind of dumb, but the foundational idea that legislation should, most of the time, require at least a 60/40 majority enforces a degree of compromise and representation in almost every bill.

    I would shudder to think what a bad president could put through if unchecked by the opposition party in an essentially 50/50 politically divided populace.