Big tech has made some big claims about greenhouse gas emissions in recent years. But as the rise of artificial intelligence creates ever bigger energy demands, it’s getting hard for the industry to hide the true costs of the data centers powering the tech revolution.

According to a Guardian analysis, from 2020 to 2022 the real emissions from the “in-house” or company-owned data centers of Google, Microsoft, Meta and Apple were likely about 662% higher than officially reported – that is, roughly 7.62 times the reported figure.
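The percentage-to-multiple conversion above can be checked with quick arithmetic: emissions that are 662% *higher* than reported mean actual = reported × (1 + 6.62) = reported × 7.62. A minimal sketch, using an arbitrary illustrative baseline rather than any company's actual figures:

```python
# Check the article's conversion: "662% higher" equals a 7.62x multiple.
# The reported value here is illustrative, not a real emissions figure.
reported = 100.0        # hypothetical reported emissions (arbitrary units)
percent_higher = 662    # Guardian estimate: real emissions ~662% higher

actual = reported * (1 + percent_higher / 100)
multiple = round(actual / reported, 2)

print(multiple)  # 7.62
```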

Amazon is the largest emitter of the big five tech companies by a mile – the emissions of the second-largest emitter, Apple, were less than half of Amazon’s in 2022. However, Amazon has been kept out of the calculation above because its differing business model makes it difficult to isolate data center-specific emissions figures for the company.

As energy demands for these data centers grow, many are worried that carbon emissions will, too. The International Energy Agency stated that data centers already accounted for 1% to 1.5% of global electricity consumption in 2022 – and that was before the AI boom began with ChatGPT’s launch at the end of that year.

AI places far greater energy demands on data centers than typical cloud-based applications. According to Goldman Sachs, a ChatGPT query needs nearly 10 times as much electricity to process as a Google search, and data center power demand will grow 160% by 2030. Research from Goldman’s competitor Morgan Stanley has reached similar conclusions, projecting that global data center emissions will accumulate to 2.5bn metric tons of CO2 equivalent by 2030.

  • Snot Flickerman · 2 months ago

    The better question is whether anyone actually fucking cares. It sure seems like all those climate pledges are just out the window in favor of more money now. They’re not worried about keeping up the ruse. The people who run this world seem to have given up on caring about the future at all and want to strip-mine the entire planet of value before they die.

    Like those fucking Effective Altruism people who act like altruists but are hiding sinister intentions, saying “it doesn’t matter how many people we kill now as long as we achieve this great thing of an AI God who will save us from ourselves.”

    Like, believing humans are so special that we can create an AI God that is smarter than us is somehow dumber than believing in the Gods of the ancient world. At least the ancient Gods didn’t owe their existence entirely to human hubris.

    The rest of humanity isn’t asking for that, or being asked for its opinion on it; it’s just full steam ahead for the fucktards who think they’ve already figured out the future. They’re about as clever as the people who claimed human flight wasn’t possible; swinging to the opposite extreme, from skepticism to belief, doesn’t make them smart.

    • @tee9000@lemmy.world · 2 months ago

      Approaches in good faith*

      Why do you think it’s so crazy to create an AI god, as you put it (AGI)? Can I ask your background on that topic? I’m a dumb new software engineer and I use LLMs. They fascinate me with their potential for mass-accessible education and for distilling huge amounts of information into new insight.

      I know some tech companies are investing a lot into energy solutions because the energy problem seems very real. I agree the acceleration of resource usage seems pretty crazy. Are there any opinions from top minds in the industry that raise similar concerns? Between uneducated memeing doomers on here and self-interested companies using it, it’s hard to talk about AI objectively. Isn’t it kind of like… it’s here and not going away, and we should be on the cutting edge so bad actors can’t capitalize on AI unchecked (fraud has been crazy rampant since AI)?

      • @PoopingCough@lemmy.world · 2 months ago

        For me it’s because I’m not convinced LLMs are really a stepping stone to any actual AI. They don’t have educational applications imo because there isn’t any way they can separate truth from fiction. They don’t understand the words that they output; they’re just predictive text generators on a huge scale. This isn’t something that can change with better tech, either; it’s baked into the very concept of an LLM. And worse, when they are wrong there’s no way to tell without already knowing the answer to the questions you’re asking. They’re literally just monkeys with typewriters. This is an extremely good article about the kinds of problems I’m talking about.

        • @tee9000@lemmy.world · 2 months ago

          Thanks. I speed-read it and hope it’s okay if I raise some quick thoughts.

          I thought it was interesting how it mentioned that LLMs aren’t a mind formed in nature. I would offer a dumb conjecture that AGI, while a mind, might still need an LLM as a component to actually handle the amount of data a society produces. Like you said, LLMs are useful if you know the answer, or at least suspect when to revisit a result. Maybe we are missing the biggest piece of AGI, but handling data is really important, and this still benefits us, right? I think we will need more than a mind from our local nature to create a god.

          I’m a pretty skeptical person. When I used ChatGPT I was pretty blown away, and I wouldn’t say I was leaning into the idea that it was sentient. I just saw an incredible new tool, and through using it, I now understand the pitfalls and can get awesome results that would never have been achieved by googling in the amount of time I spent. Almost all of the heavy lifting I have it do, I immediately verify through testing, and it’s correct often enough to realize huge gains over googling, my local library, etc.

          I think the criticisms of LLMs and their capabilities aren’t inaccurate, but maybe short-sighted? I think criticisms should currently focus on their performance in how we are actually using them, not on how laymen might imagine using something they don’t understand. Ultimately any use cases should be heavily tested and should perform more accurately than their human counterparts (where we are talking about replacing humans, anyway). If we don’t find that the gains from those applications justify the power use… or whatever… then we are always capable of recognizing that.

          I think it’s 100% valid to push back against idealized predictions, but I also think shit’s gonna get crazy. I think there’s a lot to be gained, and I question why LLMs can’t be a stepping stone to greater computing milestones even if LLMs themselves aren’t a component of AGI in the end.

          What I’m trying to be convinced of is that the criticisms aren’t as overblown as the hype.