• 10 Posts
  • 51 Comments
Joined 1 year ago
Cake day: November 10th, 2024

  • I wouldn’t consider this slop.

    Let’s compare this to photography. If you use a camera to take a picture of something, sure, the machine is doing most of the work, but the photographer is playing a vital role in this.

    Now there are photographers that spend a lot of time composing a shot. They’ll mess around with shutter speed, aperture size, ISO, zoom, depth of field, etc. They’ll also figure out the subject matter and may add some other elements to it. Afterwards they’ll make adjustments to the picture with something like Lightroom or Darktable, and maybe touch up some things with Photoshop.

    Then there are people that take pictures with their phone of a computer screen showing something cool happening in a game and post it on Reddit.

    On one end of the spectrum I would consider the photo to be art; on the other, I would consider it to be slop. However, there are many degrees from one end of this spectrum to the other.

    With AI tools it’s not much different. The machine is doing a lot of the work, but how much of it is guided, reshaped, or directed by a human? With image-generating tools you can tweak the seed, the steps, the CFG scale, the sampler, the denoising strength, etc. You can choose the base model, add multiple LoRAs and embeddings, or train your own if you’re looking for a certain style.
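    To give a sense of what that tweaking involves, here is a minimal sketch using the Hugging Face diffusers library (the model name, LoRA path, prompt, and parameter values are placeholders, not recommendations):

    ```python
    # Rough sketch of the knobs mentioned above, via Hugging Face diffusers.
    # Model name, LoRA path, prompt, and values are all placeholders.
    import torch
    from diffusers import StableDiffusionPipeline, EulerAncestralDiscreteScheduler

    # Base model choice
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        torch_dtype=torch.float16,
    ).to("cuda")

    # Sampler choice
    pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

    # Optionally load a LoRA for a particular style (hypothetical file)
    pipe.load_lora_weights("./loras/my-style.safetensors")

    # Seed, steps, and CFG scale all steer the output
    generator = torch.Generator("cuda").manual_seed(1234)
    image = pipe(
        "a lighthouse at dusk, oil painting",
        num_inference_steps=30,  # steps
        guidance_scale=7.5,      # CFG scale
        generator=generator,     # fixed seed for reproducibility
    ).images[0]
    image.save("out.png")
    ```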

    Then you have users that go to ChatGPT, type in a prompt and have ChatGPT do everything else.

    Like photography, on one end of the spectrum I would consider it art, on the other I would consider it slop.

    But this all raises the question: what is art? How do you draw the line between what is art and what is not?


  • If you’ve ever read through the terms of service/use for most websites that artists like to show off their work on (Instagram, Facebook, DeviantArt, ArtStation, Twitter, Reddit, etc.) you would realize that the work was indeed not stolen.

    It was given away freely by artists due to fine print buried in the terms of service granting royalty-free licenses. Just look up any Terms of Service and search for the word “royalty”.
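    For example, a quick way to do that search from a script (a minimal sketch; the URL is a placeholder, not any specific service):

    ```python
    # Minimal sketch: fetch a Terms of Service page and print any sentence
    # that mentions "royalty". The URL is a placeholder.
    import re
    import requests

    html = requests.get("https://www.example.com/terms", timeout=10).text
    text = re.sub(r"<[^>]+>", " ", html)  # crude tag stripping
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        if "royalty" in sentence.lower():
            print(sentence.strip())
    ```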

    If artists should be going after anyone, it’s the companies that either freely gave the artwork away by “sharing it with their partners” or made a profit off of the artists’ work by selling it to the companies training these image-generating models.

    The root of the problem here is the lack of ownership of our own data when it comes to any sort of online service. Part of that problem is just the nature of posting something in the first place.

    DeviantArt

    One artist raised the alarm back in 2016 about the licensing at the time: https://www.deviantart.com/dsc-the-artist/journal/DeviantArt-CAN-USE-your-ART-WITHOUT-PERMISSION-616830749

    ArtStation

    They do allow you to tag your projects now to prohibit them from being sold for use with Generative AI programs, but this option obviously did not exist some years ago.

    You additionally grant a royalty-free, perpetual, world-wide, fully sub-licensable (through multiple tiers) license to Epic limited to using, copying, editing, modifying, inputting, and integrating Your Content into and in connection with the development and testing of Epic’s Safety and Discovery Tools (together with the above license, the “Licenses”).

    Instagram

    When you share, post, or upload content that is covered by intellectual property rights (like photos or videos) on or in connection with our Service, you hereby grant to us a non-exclusive, royalty-free, transferable, sub-licensable, worldwide license to host, use, distribute, modify, run, copy, publicly perform or display, translate, and create derivative works of your content (consistent with your privacy and application settings).

    Etc…



  • Thanks, I missed that detail. It’s probably because of the “no class action” clause that this is a “mass arbitration”.

    Unfortunately, that usually means that Google is paying a specific company to decide the outcome of the case. In this case, it looks like the American Arbitration Association has a contract with Google.

    They’re supposed to be fair to both sides, but it’s been shown that they almost always rule in favor of the company that pre-selected them.

    If anyone is in this situation, they will likely have a much better chance by convincing a judge to allow a different third party to arbitrate the case.


  • Following up on this. I sent an email out to the team and got a response already.

    To summarize, they would prefer that the solution keep firmware updates working for security fixes, but they were willing to compromise on disabling automatic updates as long as users still have some way to update manually:

    Tap for email/response

    Initial email:

    Hi,

    Just a quick question about this point in the bounty:

    - Restore the fridge to its original functionality, by removing any possibility of adverts being presented on the display (all other smart features must be retained)

    When you say, “all other smart features must be retained” does this mean that the solution must retain the ability to allow the fridge to automatically update its firmware if Samsung pushes out a future update?

    Would it be okay if, instead, we disabled the automatic update but still allowed the end user to manually update if they really wanted to?

    Or would it be okay if the end user could just reapply the solution after an official firmware update?

    Thanks,
    <Redacted>

    Response:

    Hey <Redacted>,

    Just chatted with the team, and we think it would be better for it to have updates, and optional ones sounds like a sensible compromise. We don’t want to sacrifice security for control. I hope that answers your question. Thanks!







  • The game client started, but it froze up before it could even make it to the main menu (actually before it even got to full screen).

    I’m able to start up LIVE on Linux without any issues.

    I tested out the cutter attachment, but it was still broken. Hopefully we get to put it to use for something like slicing open doors. It sounds like others could shoot open the interior doors at least.


  • I would guess this is a Q2 release. They’ve got a lot of work already lined up for 4.4, and engineering doesn’t seem to be a part of it.

    There are about 235 vehicles in the game at the moment, and fewer than a quarter of the ships had engineering available. A decent number of those only had engineering partially implemented.

    I’m assuming this rework of the ships will bring them all up to the level of having physicalized components. That’s going to be a lot of work for a number of teams.

    They want ship armor working before engineering goes out, though it sounds like they’re close to getting that in.

    I’ve seen people mention that shooting the door open works in these situations (as in, during this test). Not the best option but it’s something, for now.

    Nice! I wish I had known that when the servers were up. It would have saved me a bit of time.




  • The study focuses on general questions asked of “market-leading AI Assistants” (there is no breakdown of which models were used for what).

    It does not mention ground.news, or models that have been fed a single article and asked to summarize it. Instead, it focuses on cases where a user asks a service like ChatGPT (or a search engine) something like “what’s the latest on the war in Ukraine?”

    Some of the actual questions asked for this research: “What happened to Michael Mosley?” “Who could use the assisted dying law?” “How is the UK addressing the rise in shoplifting incidents?” “Why are people moving to BlueSky?”

    https://www.bbc.co.uk/aboutthebbc/documents/audience-use-and-perceptions-of-ai-assistants-for-news.pdf

    With those questions, the summaries and attribution of sources contain at least one significant error 45% of the time.

    It’s important to note that there is some bias in this study (not that they’re wrong).

    They have a vested interest in proving this point to drive traffic back to their articles.

    Personally, I would find it more useful if they compared different models/services to each other, as well as the difference between asking general questions about recent news vs. feeding in specific articles and then asking questions about them.

    With some of my own tests on locally run models, I have found that the “reasoning” models tend to be worse for some tasks than others.

    It’s especially noticeable when I’m asking a model to transcribe the text from an image word for word. “Reasoning” models will usually replace the endings of many sentences with whatever the sentence seemed to be getting at, while some “non-reasoning” models were able to transcribe all of the text accurately.
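    For reference, my transcription tests were along these lines (a minimal sketch using the ollama Python client; the model name and image path are examples only, not what the study used):

    ```python
    # Minimal sketch: ask a locally run vision model to transcribe an image
    # word for word via the ollama Python client. The model name and image
    # path are examples only.
    import ollama

    response = ollama.chat(
        model="llava",  # swap in a "reasoning" model to compare behavior
        messages=[{
            "role": "user",
            "content": "Transcribe the text in this image word for word. "
                       "Do not paraphrase or finish sentences yourself.",
            "images": ["page.png"],
        }],
    )
    print(response["message"]["content"])
    ```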

    The biggest takeaway I see from this study is that, even though most people agree that it’s important to look out for errors in AI content, “when copy looks neutral and cites familiar names, the impulse to verify is low.”






  • I agree with what you said. The only thing I want to point out is with your statement:

    (+ its better for the environment)

    Running models locally doesn’t necessarily mean that it’s better for the environment. Usually the hardware at cloud data centers is far more efficient at running intense processes like LLMs than your average home setup.

    You would have to factor in whether or not your electricity provider uses green energy (or whether you have solar), and then whether you’re choosing a green data center (or a company that uses sustainable data centers) to run the model.
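    As a back-of-the-envelope way to frame that comparison (a minimal sketch; every number here is an assumed placeholder, not a measurement):

    ```python
    # Back-of-the-envelope: grams of CO2 for one LLM response, comparing a
    # home setup to an efficient data center on a greener grid. All numbers
    # are assumed placeholders, not measurements.
    def grams_co2(energy_wh: float, grid_g_per_kwh: float) -> float:
        return energy_wh / 1000 * grid_g_per_kwh

    home = grams_co2(energy_wh=5.0, grid_g_per_kwh=400)       # average grid mix
    green_dc = grams_co2(energy_wh=0.3, grid_g_per_kwh=100)   # greener grid

    print(f"home: {home:.2f} g CO2, green data center: {green_dc:.2f} g CO2")
    # home: 2.00 g CO2, green data center: 0.03 g CO2
    ```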

    That being said (in line with what you stated before), given the sensitive nature of the conversations this individual will be having with the LLM, a locally run option (or at least renting a server from a green data center) is definitely the recommended option.




  • Why not create comparison like “generating 1000 words of your fanfiction consumes as much energy as you do all day” or something more easily to compare.

    Considering that you can generate 1000 words in a single prompt to ChatGPT, the energy to do that would be about 0.3 Wh.

    That’s about as much energy as a typical desktop would use in about 7 seconds while browsing the fediverse (assuming a desktop consuming energy at a rate of ~150 W).

    Or, on the other end of the spectrum, if you’re browsing the fediverse on Voyager with a smartphone consuming energy at a rate of 2 W, that would be about 9 minutes of browsing (4.5 minutes if using a regular browser app in my case, since that bumped the energy usage up to ~4 W).
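    The conversion itself is simple (a minimal sketch using the figures above):

    ```python
    # Convert a prompt's energy estimate into equivalent browsing time at a
    # given device power draw. 0.3 Wh and the wattages are the figures above.
    def browsing_time_seconds(prompt_wh: float, device_watts: float) -> float:
        return prompt_wh / device_watts * 3600  # Wh / W = hours -> seconds

    print(browsing_time_seconds(0.3, 150))     # desktop: 7.2 s
    print(browsing_time_seconds(0.3, 2) / 60)  # phone (Voyager): 9.0 min
    print(browsing_time_seconds(0.3, 4) / 60)  # phone (browser): 4.5 min
    ```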





  • I think this would only be acceptable if the “AI-assisted” system kicks in when call volumes are high (when dispatchers are overburdened with calls).

    For anyone that’s been in a situation where you’re frantically trying to get ahold of 911 and have to make 10 calls to do so, a system like this would have been really useful to help relieve whatever call-volume situation was going on at the time. At least in my experience it didn’t matter too much, because the guy had already been dead for a bit.

    And for those of you who are dispatchers, I get it: it can be frustrating to get 911 calls all the time for the most ridiculous of reasons. But I still think it would be best if a system like this only kicked in when necessary.

    Being able to talk to a human right away is way better than essentially being asked to “press 1 if this is really an emergency, press 2 if this is not an emergency”.


  • I had to click to figure out just what an “AI Browser” is.

    It’s basically Copilot/Recall, but only for your browser. If the models are run locally, the information is protected, and none of that information is transmitted, then I don’t see a problem with this (although they would have to prove it by being open source). But, as it is, this just looks like a browser with major privacy/security flaws.

    At launch, Dia’s core feature is its AI assistant, which you can invoke at any time. It’s not just a chatbot floating on top of your browser, but rather a context-aware assistant that sees your tabs, your open sessions, and your digital patterns. You can use it to summarize web pages, compare info across tabs, draft emails based on your writing style, or even reference past searches.

    Reading into it a bit more:

    Agrawal is also careful to note that all your data is stored and encrypted on your computer. “Whenever stuff is sent up to our service for processing,” he says, “it stays up there for milliseconds and then it’s wiped.” Arc has had a few security issues over time, and Agrawal says repeatedly that privacy and security have been core to Dia’s development from the very beginning. Over time, he hopes almost everything in Dia can happen locally.

    Yeah, the part about sending data on everything appearing in my browser window (passwords, banking, etc.) to some other computer for processing makes the other assurances worthless. At least they have plans to get everything running locally, but this is a hard pass for me.