• 1 Post
  • 20 Comments
Joined 3 years ago
Cake day: June 8th, 2023


  • A piece of plastic broke off from my laptop once. It was supposed to hold one of the two screws that fasten the cover of the RAM & drive section, and afterwards there was just a larger round hole. I measured the hole and the screw, designed a replacement in Blender (not an identical copy; I wanted something more solid and reliable) and printed it; it took two attempts to get the shape exactly right. I’ve had zero issues with it in all these years.

  • Because we have tons of ground-level sensors, but not a lot in the upper layers of the atmosphere, I think?

    Why is this important? Weather processes are usually modelled as a set of differential equations, and you need to know the boundary conditions in order to solve them and obtain the state of the entire atmosphere. The atmosphere has two boundaries: the lower one, which is the planet’s surface, and the upper one, where the atmosphere ends. And since we don’t seem to have a lot of data from the upper layers, that reduces the quality of all predictions.
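
    As a toy illustration of why the boundary data matters (a minimal 1D heat-diffusion sketch, nothing like the real atmospheric equations): solve the same column twice, once with the true upper-boundary temperature and once with a biased one, and the error from the poorly known boundary spreads through the whole solution.

    ```python
    import numpy as np

    # Toy 1D "column of atmosphere": heat diffusing between the surface
    # (lower boundary) and the top of the column (upper boundary).
    # Real weather models are vastly more complex; this only shows how an
    # error in the upper boundary condition contaminates the whole state.

    def solve_column(top_temp, surface_temp=288.0, n=50, steps=20_000, dt=0.1):
        """Explicit finite-difference solve of du/dt = d²u/dx² with fixed
        (Dirichlet) values at both ends of the column."""
        u = np.full(n, surface_temp)
        for _ in range(steps):
            u[1:-1] += dt * (u[2:] - 2 * u[1:-1] + u[:-2])
            u[0], u[-1] = surface_temp, top_temp  # re-impose boundary conditions
        return u

    truth = solve_column(top_temp=220.0)    # well-observed upper boundary
    biased = solve_column(top_temp=230.0)   # poorly observed upper boundary
    print("max interior error from a 10 K upper-boundary bias:",
          np.abs(truth - biased)[1:-1].max())
    ```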

  • I don’t focus on recommendations specifically. My typical process is:

    • spend anywhere from a few days to a few weeks figuring out which technical characteristics are important for this kind of product, which aren’t, why and when &c. This kind of information is usually available (and even obvious SEO garbage can give you new keywords to consider when searching);
    • based on these alone, determine what’s acceptable and what’s desirable for you;
    • if you haven’t already, find some kind of community around the topic and see which brands/manufacturers people commonly complain about and why; also see if there’re popular manufacturers only selling things via their own websites;
    • open your preferred store (or several) and filter the entire category based on what you’ve learned. Pick a few candidates and examine them closely;
    • go back to the community again and look up anything mentioning these candidates - including comparisons with other ones you haven’t considered. Perhaps consider them;
    • make the final choice.

    Skip some of these if irrelevant or if you don’t care enough. Spend extra time if you care a lot.

    It works well enough for every new phone (that market changes fast, so you start anew every time); it worked for the first PC I decided to assemble with zero prior knowledge, for the mechanical keyboard and the vertical mouse, and for pretty much every piece of tech I buy.

    And I’d say it’s reasonable to use Reddit without an account even if you disagree with what the platform owners are doing. The data is still valuable for such use cases.

  • I’m using local models. Why pay somebody else or hand them my data?

    • Sometimes you need to search for something and it’s impossible because of SEO, however you word the query. An LLM won’t necessarily give you a useful answer, but it’ll at least take your query at face value, and it’ll usually give you some context around your question that makes a web search easier, should you decide to look further.
    • Sometimes you need to troubleshoot something unobvious, and using a local LLM is the most straightforward option.
    • Using an LLM in scripts adds a semantic layer to whatever you’re trying to automate: you can process a large number of small files in a way that’s hard to script otherwise, because the handling depends on what’s inside each file (see the sketch after this list).
    • Some put together an LLM, a speech-to-text model, a text-to-speech model and function calling to make an assistant that does what you tell it, without you touching the computer. Sounds like plenty of work to glue together, but I may try that later.
    • Some use RAG (retrieval-augmented generation) to query large amounts of information. I think it’s a hopeless struggle, and the real solution is an architecture other than a variation of Transformer/SSM: it should address real-time learning, long-term memory and agency properly.
    • Some use LLMs as editor-integrated coding assistants. Never tried anything like that yet (I do ask coding questions sometimes though), but I’m going to at some point. The 8B version of LLaMA 3 should be good and quick enough.
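
    A minimal sketch of the scripting idea above, assuming a local Ollama server on its default port with some already-pulled model (the model name, folder layout and tagging task are just placeholders): ask the model to label each small text file, then sort the files by topic, which is awkward to do with ordinary shell tools alone.

    ```python
    import pathlib
    import requests

    # Assumed setup: Ollama running locally with its default REST API and
    # some small model pulled; adjust MODEL to whatever you actually use.
    OLLAMA_URL = "http://localhost:11434/api/generate"
    MODEL = "llama3:8b"

    def topic_of(text: str) -> str:
        """Ask the local model for a one-word topic label for a note."""
        prompt = ("Reply with a single lowercase word naming the main topic "
                  "of the following note:\n\n" + text)
        resp = requests.post(
            OLLAMA_URL,
            json={"model": MODEL, "prompt": prompt, "stream": False},
            timeout=120,
        )
        resp.raise_for_status()
        return resp.json()["response"].strip().split()[0]

    # Sort a folder of small notes into per-topic folders based on content:
    # the "semantic layer" that plain shell scripting can't provide.
    for note in pathlib.Path("notes").glob("*.txt"):
        dest = pathlib.Path("sorted") / topic_of(note.read_text(encoding="utf-8"))
        dest.mkdir(parents=True, exist_ok=True)
        note.rename(dest / note.name)
    ```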


  • it cuts out the middle man of having to find facts on your own

    Nope.

    Even without corporate tuning or filtering.

    A language model is useful when you know what to expect from it, but it’s just another kind of secondary information source, not an oracle. In some sense it draws random narratives from the noosphere.

    And if you feed it search results as part of the input in the hope of increasing its reliability, how will you know they haven’t been manipulated by SEO? Search engines are slowly failing these days, and a language model won’t recognise new kinds of bullshit as readily as you will.

    Education is still important.