• 1 Post
  • 29 Comments
Joined 1 year ago
Cake day: January 25th, 2024



  • you’ve just raised all boats by the same amount. There’s no relative difference, it won’t have any impact on the economy

    Technically, that’s not exactly true, specifically because of wealth disparities. If you give everyone $100, someone who only had $100 before gets a 100% increase to their net worth, whereas a billionaire gets a 0.00001% increase. It’s effectively a wealth redistribution. If you gave everyone a billion dollars, assuming they had nothing to start with, they’d now have 50% of what a billionaire (now with $2B after gaining $1B from this change) would have, whereas before they’d only have a tiny fraction.
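
    To make that concrete with some quick arithmetic (toy numbers, purely illustrative):

    ```python
    # Toy illustration: a flat transfer shrinks *relative* wealth gaps,
    # even though it raises everyone by the same absolute amount.
    def relative_gain(net_worth: float, transfer: float) -> float:
        """Percentage increase in net worth from a flat transfer."""
        return transfer / net_worth * 100

    poor, billionaire = 100, 1_000_000_000

    print(relative_gain(poor, 100))         # 100.0 -> net worth doubles
    print(relative_gain(billionaire, 100))  # 1e-05 (0.00001%) -> essentially nothing

    # Give everyone $1B instead: the wealth ratio collapses from
    # 10,000,000:1 to roughly 2:1, though the absolute gap is unchanged.
    print((billionaire + 1e9) / (poor + 1e9))  # ~2.0
    ```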

    The problem is that it’s just not a very effective method of wealth redistribution. A much more effective one is to print new currency, issue it like you would UBI, and tax billionaires a similar amount to offset the inflation caused. Releasing all this gold would just devalue gold similarly, and gold is owned, directly or by proxy, not only by the wealthy but also by poorer members of society, so it would effectively be like taxing billionaires while also adding a little tax on certain working people for the hell of it, which isn’t ideal.


  • It would have a similar effect to printing new USD and issuing it evenly to members of the population, since our gold reserve is largely a stockpile not expected to be sold on the market.

    Gold gets released into circulation; the value of gold decreases; the value people individually receive is similar to the amount lost by those holding gold.

    That effectively means it would likely be a transfer of value from gold hoarders, some of whom are relatively wealthy compared to the rest of the population, to everyone else, rather than some magical new source of value to give to people. (Not exactly, obviously, but this is generally what I’d expect based on how the currency dynamic works with our existing USD reserves/printing capabilities, and how the supply rush would be similar with gold compared to USD.)
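
    As a rough sketch of that transfer, here’s a toy model that assumes price moves inversely with circulating supply; real markets only loosely approximate this, and every number is made up:

    ```python
    # Toy model: releasing a stockpile dilutes the value of existing
    # holdings, and that lost value is roughly what the public receives.
    # Assumes price scales with 1 / circulating supply -- a big
    # simplification, but it shows where the "new" value comes from.

    circulating = 100_000  # units of gold already on the market
    released    = 20_000   # units released from the reserve
    price       = 2_000    # price per unit before the release

    new_price = price * circulating / (circulating + released)

    loss_to_holders = circulating * (price - new_price)
    value_to_public = released * new_price

    print(round(new_price))        # 1667
    print(round(loss_to_holders))  # 33333333
    print(round(value_to_public))  # 33333333 -- the same value, just moved
    ```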

    Should we do it? I don’t know; it could be beneficial. But I’d rather we simply issue new currency and tax billionaires more to compensate for any inflation caused, so that individuals receive actually functional currency in the form of USD, instead of the government spending money on the manpower and negotiations needed to sell off all the gold we have.


  • In absolutely no way did I even mention black people.

    How did you not understand that it was an analogy? I was testing your logic by demonstrating that your exact argument can be applied identically to racist arguments, yet you would probably not see it as valid in that context, and thus your own logic in this situation falls short.

    People can have differences in opinion, but sometimes those opinions are harmful, and there’s a reason why people are so angry at you, beyond simply disagreeing on logic.


  • I must thank you for proving my point.

    In what way? You just dismissed everything I said by not responding to it, then acted like I’d proved you correct.

    But you simply can’t accept that people disagree with gender ideology and must try and push your beliefs onto others, in this case, me.

    “But you simply can’t accept that black people are inherently less intelligent, and must try to push your beliefs on others”

    Do you see how this argument fails? Sometimes, people are just wrong, and hold opinions that cause societal harm. You haven’t been capable of refuting the evidence I provide, instead choosing to ignore it and continue perpetuating the exact justification used every time trans people are oppressed in any way.


  • Anything else is a birth defect

    Any exception disproves your rule. If you say there are only 1 and 2, and I show you 3, then the statement that only 1 and 2 exist is false, because it’s only true if no other numbers ever exist. Show me a binary, I show you numbers outside that binary, it’s not a binary.

    Sex and gender are fundamentally the same thing

    What genetic code determines things like:

    1. Women wearing skirts (…while not being socially accepted for men, except for kilts in Scotland and Ireland, where it is, because this is cultural, not biological)
    2. Men being louder and more aggressive (on average)
    3. Women being better cooks (on average)
    4. Socially accepted hobbies/personality traits of men/women
    5. Your preference for “pink”/“blue” toys (e.g. toys usually promoted to only girls/boys, like dolls, which we have no evidence kids naturally pick along a binary line unless taught to by parents/guardians)

    Oh wait, what’s that? None of that is biological, but it’s all traditionally gendered traits? Interesting, maybe biological characteristics and social ones aren’t the same.

    What about someone with Androgen Insensitivity Syndrome (AIS), who can have XY (usually male) chromosomes but go through female sexual development? Or someone with mosaicism, who has a mix of XX and XY chromosomes in their body, could have the genitalia of either group or ambiguous genitalia, and whose split of chromosomes across their cells could be as even as 50/50, or shift in one direction or the other over time? Or someone whose chromosome pattern doesn’t fit into XX or XY, like XXXXX? (Yes, that’s a real combination of chromosomes that humans can have.)

    You cannot easily classify these people into sex categories, and no definition that treats sex and gender as the same thing will be capable of properly resolving which group they fall into. You’ll end up putting ambiguous people into categories that don’t align with how they internally feel about themselves, and you’ll accidentally lump cis people into categories they don’t fit while trying to force everyone into male or female. It’s impossible to write a definition that covers every single one of these people and neatly fits them into a rigid binary, and by extension, any attempt to assign them to man/woman categories only demonstrates how subjective the entire thing is in the first place.

    Even the fact that various traits traditionally assigned to men or women (e.g. high heels originally being worn by men) have shifted between categories over time, and that different forms of self-expression and experience have developed over time, disproves the notion that there is some simplistic binary of human experience that cannot be decoupled from your sex, or that certain traits are tied to sex rather than to entirely social expectations.

    And they absolutely do not have the right to start throwing abuse and words like transphobe around simply because beliefs don’t match.

    Your position is categorically hostile to their existence. The definition of transphobia includes “fear or dislike of transgender and non-binary people.” If you dislike what they believe, and by extension what they are, then you are categorically transphobic. You can say that you believe being transphobic is correct, but you still definitionally dislike trans people, and thus fit the definition.


  • If I were to say “there are two genders (male and female) and you can not change what you were born as” the red mist descends and because my views don’t align I get called a phobe or ist or a bigot. They simply can’t accept that not everyone shares their ideology.

    Because that statement is not just fundamentally wrong (male and female aren’t genders, they’re sexes, and even sex is a spectrum of characteristics that can’t be cleanly defined in 100% of cases, so a blanket statement that only 1 and 2 exist when 3, 4, 5, etc. do as well fails even for sex, let alone for social identity and expression), but it is used to justify erasing trans people from existence, and it is the core statement that allows anti-trans policies to exist.

    That statement is directly used to justify and further policies that directly harm trans people, and thus it isn’t just a difference in opinion, but a clear and obvious case of intolerance that we know leads to real harm.

    If you’d like any further explanation of why exactly that statement is incorrect, I’d be happy to provide it.

    As for the right starting the abuse just look at the Reform member conference in Cornwall last week.

    Apologies, but since I’m American, I don’t have much personal social context for those events, so do take my opinions here with the understanding that I don’t follow UK politics much. I agree that any violence there likely went too far, at least based on my very limited understanding of the party’s politics, but that, of course, seems to be an isolated incident.

    As I don’t think we share much direct societal context, I’m fine with dropping this point against your argument if you don’t wish to continue it, especially considering it’s a little subjective in terms of, say, statistically determining which group is more likely to be aggressive, since I haven’t seen many actual studies or meta-analyses on that particular topic.


  • they preach tolerance but are ALWAYS the ones to start the abuse, insults, name calling and threats when they are disagreed with.

    First off, the group that I’ve always experienced starting with outright hate and name-calling has been the right. Look at two protests on the same issue, one by leftists, one by the right wing, and you will almost always find the most aggressive, slur-using, name-calling people on the right making themselves known far before anyone on the left starts doing anything even remotely similar.

    And secondly, tolerance doesn’t work when dealing with the intolerant. Consider this: Hitler is a brand new figure, comes into the public square, and starts preaching his views. Do we tolerate him, or do we not tolerate him? We should tolerate him, because after all, tolerance is good, right? Well, of course not, because his ideology is intolerant, and directly attacks the tolerant, extinguishing them from society.

    The only way you maintain tolerance is by being intolerant of intolerance.

    If a conservative states that trans people shouldn’t be allowed to exist in public spaces, and the left shuns and ostracizes that person, the left is being intolerant, but so is the conservative, who, given the chance, would have eliminated far more presumably tolerant trans people from public life.

    However, conservatives will then frame this as the left being intolerant, and act as if it’s some kind of hypocrisy to try and preserve tolerance by being intolerant of intolerant ideologies.

    On a place like Reddit most subs will just ban you for showing any right leaning opinions.

    Because many subs have moderators who respect marginalized groups, which are often the ones attacked by conservatives.

    If someone comes into your community, and begins spouting off an ideology that’s explicitly harmful to members of that group, the most tolerant thing a moderator can do when given two choices:

    1. Tolerate the conservative and let them spout hate
    2. Don’t tolerate the conservative and prevent them from spouting hate

    is the second, because otherwise your community is now persistently allowing in someone who is intolerant of the others in the community.


  • The recipient doesn’t get any identifying data about you, because the data that shows the link was clicked does not identify you as an individual, since it’s passed through privacy-preserving protocols.

    To further clarify the exact data available to any party:

    • The ad marketplace only knows that someone, somewhere clicked the link.
    • Mozilla knows that roughly x users have clicked sponsored links overall.
    • The company you went to from that sponsored link knows that your IP/browser visited at X time, and that you clicked through a sponsored link from the ad marketplace.

    From a practical privacy perspective, there isn’t much of a technical difference between this and someone seeing an ad in person and typing in the link themselves.

    Their implementation is completely different from traditional profile/tracking-based methods of advertising.


  • Citation needed. How did you calculate that statistical probability, my friend?

    I don’t, because I don’t spend all my time calculating the exact probability of every technology in existence harming or not harming people. You also did not provide any direct mathematical evidence when arguing the contrary: that these things actually do cause more harm than benefit even when they’re created to do good. We’re arguing on concepts here.

    That said, if you really think that things made to be bad, with only a chance of doing something good later, have the same or a larger chance of doing bad things as things created to be good, with only a chance of doing something bad later, then I don’t see how it’s even possible to continue this conversation. You’re presupposing that any technology you view as harmful has automatically done more harm than good, without any reason whatsoever for doing so. My reasoning is simply that harm is more likely to come from something created to do harm from the start than from something with only a chance of becoming bad.

    Something with a near-100% chance of doing harm, because it was made for that purpose, generally speaking won’t do less harm than something for which harm was only ever a possibility rather than a near-guarantee.
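
    To put that reasoning in expected-value terms (the numbers are entirely made up, just to show the structure of the argument, not real risk estimates):

    ```python
    # Expected harm = probability of causing harm * magnitude of harm.
    # The argument is about the probability term: something built to harm
    # starts near p = 1; something built to help starts much lower.

    def expected_harm(p_harm: float, magnitude: float) -> float:
        return p_harm * magnitude

    weapon = expected_harm(p_harm=0.95, magnitude=100)  # designed to harm
    tool   = expected_harm(p_harm=0.10, magnitude=100)  # designed to help

    print(f"{weapon:.1f} vs {tool:.1f}")  # 95.0 vs 10.0 -- same magnitude,
                                          # very different expectation
    ```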

    So you are open to the possibility that nukes are less dangerous than spears, but more dangerous than AI? Huh.

    I’m open to the idea that they’ve caused more deaths, historically, since that’s the measure you seemed to be going with when you referenced the death toll of nukes, then used other things explicitly created as weapons (guns, spears, swords) as additional arguments.

    I don’t, however, see any reason to think AI is more likely to cause significant harm, death or otherwise, than, say, spears. And I don’t think nukes are less harmful than spears, because they’re highly likely to cause drastically larger amounts of future death and environmental devastation. I back that up with the fact that countries continue to expand their stockpiles, increasingly threatening nuclear attacks as a “deterrent,” while organizations such as the Bulletin of the Atomic Scientists continue to state that the risk of future nuclear war is only growing. If we talk about current death tolls, sure, nukes have probably done less, but today is not the only point in time by which we can judge possible risk.

    According to whom? How are you defining harm and benefit? You’re attempting to quantify the unquantifiable.

    Yes, you’ve discovered moral subjectivity. Good job. I define harm and benefit based on what causes/prevents the ability of humans to experience the largest amount of happiness and overall well-being, as I’m a Utilitarian.

    Ah of course, because human beings famously never use or do anything that makes them less happy. Human societies have famously never implemented anything that makes people less happy. Do we live on the same planet?

    Your argument was based on entirely personal, self-driven choices, such as finding AI to be a better partner. If people didn’t enjoy that more, they wouldn’t be seeking out AI partners when specifically trying to find someone who will provide them with the most overall happiness. Of course people can do things that make them less happy; all I’m saying is that you’re not providing any evidence for why people would do so in the scenarios you describe. You’re simply assuming not only that AI will develop into something that can harm humans, but that humans will also choose to use those harmful things, without explaining why.

    Again, apologies if my wording was unclear, but I’m not saying humans are never self-destructive, just that you’ve provided no evidence as to why they would choose to be that way, given the circumstances you provided.

    I’m utilizing my intelligence and my knowledge about human nature and human history to make an educated guess about future possible outcomes.

    I would expect you to intuitively understand the reasons why I might believe these things, because I believe they should be fairly obvious to most people who are well educated and intelligent.

    No, I don’t intuitively understand, because your own internal intuitive understanding of the world is not the same as mine. We are different people. This answer is not based in anything other than “I feel like it will turn out bad, because humans have used technology badly before.” You haven’t shown it’s even possible for AI to become that capable in the first place, let alone shown the likelihood of it being developed to do those bad things and then actually implemented.

    This is like arguing that our current weapons will necessarily lead to the development of the Death Star, because we know what a Death Star could be, weapons are improving, and humans sometimes use technology in bad ways. I don’t just want your “intelligence and knowledge about human nature and human history” to back up why our weapons will necessarily create the Death Star; I want you to show that it’s even possible, and demonstrate why you think it’s likely we’d choose to develop it to that specific point. I hope that analogy makes sense.

    Hence why I suspected you of using AI, because you repeatedly post walls of text that are based on incredibly faulty and idiotic premises.

    Sorry for trying to explain myself with more nuance than most people on the internet. Sometimes I type a lot, too bad I guess.

    Cheers mate, have a good one.

    You as well.


  • I’m sorry, but you seem to have misinterpreted what I was saying. I never claimed that AI would get so good it replaces all jobs. I stated that the potential consequences were extremely concerning, without necessarily specifying what those consequences would be. One consequence is the automation of various forms of labor, but there are many other social and psychological consequences that are arguably more worrying.

    My apologies, I’m simply quite used to people arguing against AI with the automation of jobs as their primary concern, and I assumed it was a larger concern of yours when it came to the “consequences” of AI as a concept.

    If you actually understood my point, you wouldn’t be saying this. The intended purpose of the creation of a technology often turns out to be completely different from the actual consequences.

    Obviously, but the statistical probability of a thing being used for bad purposes, especially in a way that outweighs the benefit of the technology itself, is always higher for a thing designed to be harmful from the start, as opposed to something started with good intentions. That doesn’t mean a thing created to be harmful can’t do or cause a good thing later on, but it’s much less likely to than something designed to help people as its original goal.

    We intended to create fire to keep warm and cook food, but it eventually came to be used to create weapons and explosives.

    Had we not invented our uses of fire, would we have any of the comforts, standard of living, and capabilities that we do now? Would we be able to feed as many people as we do, keep our food safe and prevent it from spoiling, keep ourselves from dying in the winter, etc? Fire has brought a larger benefit than it has harms.

    We intended to use the printing press to spread knowledge and understanding, but it ultimately came to spread hatred and fear.

    While some media is used to spread hatred and fear, a much worse scenario is one in which no media can be spread at the same scale, and information dissemination is instead entirely reliant on word of mouth. This means extremely delayed knowledge of current events, an overall less informed population, and all the issues that come along with disseminating knowledge through a literal game of telephone. Things get lost, mixed up, falsified, and so on, and the ability to disseminate knowledge quickly can make those things much less likely.

    Will they still happen? Sure. But I’d prefer a well-informed world that is sometimes subjected to misinformation, fear, and hate, to a world where all information is spread via ever-changing word of mouth, where information can’t be easily fact-checked, shared, or researched, and where rumors can very frequently hold the same validity as fact for extended periods of time without anyone even being capable of checking if they’re real.

    The printing press has brought a larger benefit than it has harms. Do you see the pattern here?

    And again, nuclear weapons have been used twice in wartime. Guns, swords, spears, automobiles, man made famines, aeroplanes, literally hundreds of other technologies have killed more human beings than nuclear weapons have.

    Just because nuclear weapons make a big boom doesn’t make them more destructive than other technologies.

    Cool, I never once stated that nukes were more deadly than any of these other examples provided. I only stated that I don’t believe AI is more dangerous than nukes, in contrast to your original statement.

    Nuclear fission has also provided one of the cleanest sources of energy we possess,

    Nuclear fission research was taking place before the idea of using it for a deadly bomb was even a thing. The development of nuclear bombs came afterwards.

    What if AI was statistically proven to be better at raising children than human parents? What if AI was a better romantic partner than a human one? Can you see how this could be catastrophic for the fabric of human society and happiness? I agree that jobs don’t give human lives meaning, but I would contend that a crucial part of human happiness is feeling that one is a valued, contributing member of a community or family unit.

    A few points on this one. Firstly, just because a technology can be used doesn’t necessarily mean it should be. If a tool is better than humans at something (let’s say AI becomes good enough to automate all woodworking with physical robots adapted for any task), I’ll still support letting humans do that thing if it brings them joy. (People could simply still do woodworking, and I could get a table from one of them instead of from the AI, just because I feel like it.) The use of any technology after it’s developed is not an inevitability, even if it’s an option.

    Secondly, I personally believe in doing what I can to maximize overall human happiness. If AI were better at raising children, but people still wanted to enjoy raising children, and we didn’t see any demonstrable negative outcomes from having humans raise children instead of AI, then I would support whichever option the parents preferred, based on what they think would make them happier: raising a child, or not.

    If AI was a better romantic partner, in the sense that people broadly preferred AI to real people, and there wasn’t evidence that such a trend increasing would make people broadly more unhappy, or unsatisfied with life, then I’d support it, because it wouldn’t be doing any harm.

    Ask yourself why you consider such things to be bad in the first place. Is it because you personally wouldn’t enjoy them? Cool, you wouldn’t have to use them. And if society broadly didn’t enjoy those things, then nobody would use them in the first place. You’re presupposing both that society would develop and use AI for those purposes and that it wouldn’t actually prefer using it, in which case it wouldn’t be a replacement, because no society would choose to implement it.

    This is like saying “what if we gave everyone IV drips that gave them dopamine all the time, but this actually destroyed the fabric of society and everyone was less happy with it?” Great, then nobody will use the IVs because they make them less happy than not using the IVs.

    This entire argument assumes two contradictory things: That society will implement a thing to replace people because it’s better, and they’d prefer to use it, but also that society will not prefer to use it because it will make them less happy. You can’t have both.

    As far as I can tell, all three of your initial retorts about the relative danger of nuclear weapons are basically incoherent word salads. Even if I were to concede your arguments regarding the relative dangers of AI (which I am absolutely not going to do, although you did make some good points), you would still be wrong about your initial statement because you clearly overestimated the relative danger of nuclear weapons.

    Your only argument here for why AI would be relatively more dangerous is… “it could be.” You’re simply stating that in the future it may get good enough to do X or Y, and that because those outcomes are undesirable to you, the technology as it exists now will obviously do those things if allowed to progress.

    Do you have any actual evidence or reason to believe that AI will do these things? That it will ever even be possible for it to do X or Y, that society would simultaneously willingly implement it while also not wanting it to be implemented because it harms them, or that the current trajectory of the industry even has a chance of driving the development of technologies that would ever be capable of those things?

    Right now, the primary developments in “AI” are just better LLMs, which are just word probability predictors. Sure, they’re getting better at predicting the probability of words, but how would that practically lend itself to, say, raising a child?
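
    For anyone unfamiliar, “word probability predictor” means something roughly like this toy bigram model. Real LLMs are neural networks conditioning on far more context, but the core loop of sampling a likely next word is the same idea:

    ```python
    import random
    from collections import Counter, defaultdict

    # Toy next-word predictor: count which word follows which in a sample
    # text, then generate by repeatedly sampling a likely next word.
    text = "the cat sat on the mat and the dog slept on the mat".split()

    follows = defaultdict(Counter)
    for current, nxt in zip(text, text[1:]):
        follows[current][nxt] += 1

    word, output = "the", ["the"]
    for _ in range(8):
        options = follows[word]
        if not options:
            break  # no observed continuation for this word
        # Sample the next word proportionally to how often it followed.
        word = random.choices(list(options), weights=list(options.values()))[0]
        output.append(word)

    print(" ".join(output))  # e.g. "the cat sat on the mat and the dog"
    ```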

    I essentially dismantled your position from both sides, and yet you refuse to concede even a single inch of ground, even on the more obvious issue of nuclear weapons only being responsible for a relatively paltry number of deaths.

    And how many people has AI killed today? Oh wait, fewer than nuclear bombs? So because nukes haven’t yet been responsible for a large number of deaths, but AI might be in the future, stating that AI is possibly more dangerous than nuclear bombs must be correct!

    You’re making arguments from two completely different points in time. You’re saying that because nukes haven’t yet killed as many people as you think AI will in the future, they are therefore less dangerous (even while nukes still pose a constant threat that could cause a chain reaction of deaths, given the right circumstances). Unless you can substantiate, with some form of evidence, your claim that AI is likely to do any of these dangerous things on our current trajectory, you’re pitting current statistics against a wholly unsubstantiated, imagined future, and then declaring yourself correct because in the future you imagine, AI will be doing all these bad things that make it worse than nukes.

    Substantiate why you think AI will ever even get to that point, and why it would be implemented in a way that damages society, instead of just assuming the worst-case scenario and treating it as likely.


  • Ask the lawmakers who wrote the laws with vague language, because according to them, that kind of activity could be considered a sale.

    As a more specific example that is more one-sided, but still not technically a “sale,” Mozilla has sponsored links on the New Tab page. (they can be disabled of course)

    These links are provided by a third-party, relatively privacy-preserving ad marketplace. Your browser downloads a list of links from them if you have sponsored links turned on, and no data about you is actually sent to their service. If you click a sponsored link, a request telling them the link was clicked is sent using a protocol that anonymizes your identity. That’s it; no other data about your identity, browser, etc.
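
    Conceptually, the click report amounts to something like this. To be clear, this is a hypothetical sketch with made-up endpoint names, not Mozilla’s actual code or protocol:

    ```python
    import json
    import urllib.request

    # Hypothetical sketch: the only payload is "sponsored link N was
    # clicked" -- no cookies, no user ID, no browsing history. Routing it
    # through an anonymizing relay means the marketplace never even sees
    # the user's IP address. (The URL below is made up for illustration.)

    def report_click(link_id: str) -> None:
        payload = json.dumps({"event": "click", "link_id": link_id}).encode()
        request = urllib.request.Request(
            "https://relay.example/report",  # relay strips sender identity
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(request)

    # Contrast with conventional ad tracking, which would also attach a
    # persistent user identifier, fingerprint, and browsing context.
    ```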

    This generates revenue for Mozilla that isn’t reliant on Google’s subsidies and that doesn’t actually involve selling user data. Under these laws, though, it could be classified as a sale of user data, since Mozilla technically transferred data from your device (that you clicked the sponsored link) for a benefit (financial compensation).

    However, I doubt anyone would call that feature “selling user data.” But because the law could classify it that way, they have to clarify it in their terms; otherwise someone could sue them, claiming “you sold my data,” when all they did was send a small packet to a server saying that some user, somewhere, clicked a sponsored link.



  • Plus, they don’t support GrapheneOS. (Or rather, GrapheneOS doesn’t support them, since it’s too expensive to support more than one model when those phones also lack the hardware integrity measures that Pixels have.) It’s the only thing stopping me from getting one for my next phone: while I don’t necessarily need the fastest processor, highest-resolution screen, etc., I do need a phone that won’t degrade into uselessness within a few years.



  • I’m genuinely not sure if I’m being too sensitive or if this is genuinely behavior that shouldn’t be supported.

    There’s nothing inherently wrong with that content existing and being something people can pay for, but you’re also not being too sensitive for not wanting to pay that artist personally, if your surrounding circumstances would make access to explicit content seem a little unsavory in your particular case.

    Ideally, that artist would let you pay for just the non-NSFW content, or accept a tip/donation directly, instead of bundling the NSFW content with any attempt at payment. But that doesn’t mean offering NSFW content itself “shouldn’t be supported,” even if it’s not desirable in your case.