• 0 Posts
  • 7 Comments
Joined 10 months ago
Cake day: August 24th, 2024


  • He pleaded guilty to 3 felonies and 6 misdemeanors for not paying $1.4 million over 3 years, including making false deductions and dipping into company funds. That’s not “filling out a form wrong”, and if it is, his father should pardon everyone who has been charged under the bad laws that allow people simply “filling out a form wrong” to catch 9 charges. Especially for the people who couldn’t afford accountants and lawyers to file the form correctly for them.

    Pardoning your own son for any possible federal crime, not just the ones he was charged with, especially after saying you wouldn’t, is gross nepotism. And the pardon reaches back to 2014, while the tax and gun charges cover 2016 onwards, which implies there’s more that Joe Biden knows about.



  • While this discovery is very cool, this bothered me:

    “Alphabets revolutionized writing by making it accessible to people beyond royalty and the socially elite. Alphabetic writing changed the way people lived, how they thought, how they communicated,”

    Ancient Chinese scripts seemed to manage just fine, even during their “writing is magical and only the rich are smart enough to know that magic” phase. Is it possible that the alphabet itself didn’t change the way people lived, but perhaps the people who introduced it to the area changed the way the original inhabitants lived? The conclusion that the alphabet was the cause just seems really Western exceptionalist to me.




  • References weren’t paywalled, so I assume this is the paper in question:

    Hofmann, V., Kalluri, P.R., Jurafsky, D. et al. AI generates covertly racist decisions about people based on their dialect. Nature (2024).

    Abstract

    Hundreds of millions of people now interact with language models, with uses ranging from help with writing [1,2] to informing hiring decisions [3]. However, these language models are known to perpetuate systematic racial prejudices, making their judgements biased in problematic ways about groups such as African Americans [4–7]. Although previous research has focused on overt racism in language models, social scientists have argued that racism with a more subtle character has developed over time, particularly in the United States after the civil rights movement [8,9]. It is unknown whether this covert racism manifests in language models. Here, we demonstrate that language models embody covert racism in the form of dialect prejudice, exhibiting raciolinguistic stereotypes about speakers of African American English (AAE) that are more negative than any human stereotypes about African Americans ever experimentally recorded. By contrast, the language models’ overt stereotypes about African Americans are more positive. Dialect prejudice has the potential for harmful consequences: language models are more likely to suggest that speakers of AAE be assigned less-prestigious jobs, be convicted of crimes and be sentenced to death. Finally, we show that current practices of alleviating racial bias in language models, such as human preference alignment, exacerbate the discrepancy between covert and overt stereotypes, by superficially obscuring the racism that language models maintain on a deeper level. Our findings have far-reaching implications for the fair and safe use of language technology.