Every time they announce "new datacenter to bring 4,000 jobs", a large proportion of those are in construction, with only a small fraction being operational staff.
Globally (roughly), there are around 2 million datacenter staff and 8,000 datacenters. An average of 250 people per huge physical site isn't a good start for a city.
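For what it's worth, the arithmetic behind that 250, as a sketch using the two rough figures above:

    # Rough averaging of the two order-of-magnitude figures quoted above.
    global_dc_staff = 2_000_000   # ~2 million datacenter staff worldwide (rough)
    global_dc_count = 8_000       # ~8,000 datacenters worldwide (rough)

    staff_per_site = global_dc_staff / global_dc_count
    print(f"Average staff per datacenter: {staff_per_site:.0f}")  # -> 250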
> Many data centres use water-based systems to cool the plant with towers evaporating the heat, like a huge perspiration system, which means that the water is lost.
Someone doesn't know what "evaporation" or "lost" means.
Total datacenter energy use is on the order of 1 PWh/yr. Total global solar surface irradiance is on the order of 400,000 PWh/yr.
The direct heat contribution to global temperature is negligible.
There is an argument that water vapour in the atmosphere is a greenhouse gas, but also an argument that clouds reflect solar energy, and that this water vapour is emitted at ground level. I don't think it's at all obvious that water evaporation from datacenter cooling systems is directly either good or bad for global heating.
There certainly are negative environmental and social effects of this water usage in some places and implementations, and some of those could certainly carry severe heating potential, but the claim as stated is an over-reduction without further context.
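To make the "negligible" point concrete, here's the ratio of those two order-of-magnitude figures, as a sketch rather than a climate model:

    # Direct heat from datacenters vs. solar energy reaching the surface,
    # using the rough figures quoted above.
    datacenter_energy_pwh_per_yr = 1.0       # ~1 PWh/yr
    solar_irradiance_pwh_per_yr = 400_000.0  # ~400,000 PWh/yr

    ratio = datacenter_energy_pwh_per_yr / solar_irradiance_pwh_per_yr
    print(f"Datacenter heat / incoming solar: {ratio:.1e}")  # -> 2.5e-06
    # About 2.5 parts per million of direct heating: negligible next to
    # greenhouse effects, which is the point being made above.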
Yeah, but a group of people likes golf courses more than a larger group likes artichokes. We have a civilized arrangement of sharing that doesn't make those groups fight in a cage. But in places where this matters, golf courses will always come up before your point.
Still, in places like, say, the Arab states, where you could put a lot of solar in the desert and there's a need for desalination, it could work in principle. Though I think they still tend to use oil.
> Many data centres use water-based systems to cool the plant with towers evaporating the heat, like a huge perspiration system, which means that the water is lost.
> All of these companies have schemes to put water back into nature using projects that help river flow, capture rainwater, recharge aquifers and modify dams. They have all pledged to become “water positive” by 2030: returning more than they consume.
> But where are they putting back the water? Quite often it’s not in the same place it was taken out, which can be in areas of “water stress”.
It's not about whether or not the water is gone for good after cooling. It's about allocation of resources. The water cooling ChatGPT servers right now is water not delivered to households or the larger ecosystem. Data centers are, in effect, in competition with the rest of us for who gets to use the water right now.
Please don’t be that person quoting the rules when you don’t understand the arguments.
This hit piece from someone whose job is directly endangered by LLMs is in the same absurd category as the similar articles about how AirPods contribute to e-waste. Just look at them! They're tiny! Or the breathless editorials blaming Elon for destroying the environment with the damage done to the launchpad by his rocket: a one-time incident that threw a few chunks of concrete on a beach and killed not a single animal.
In a way, the more absurd and outlandish the claim, the more it is circulated by people who want to stop something they're scared of, usually in the sense of "this is bad for my career".
“Windmills kill birds, didn’t you know?!” — someone working for the petrochemical industry.
Write down two numbers before replying: the gigalitres of water used industrially in a similar manner — don’t forget to include literally every skyscraper’s HVAC system that uses evaporative cooling — and then the liters used by ChatGPT.
Divide the two numbers.
Then, and only then, try to make a coherent argument about why ChatGPT specifically is a worse water-usage problem than, say, the rest of the Azure data center where it's hosted, which the NY Times hack who's about to be replaced by a bot strangely had no problem with.
"Industrial activity requires inputs" is not news, not even remotely. We don't have Star Trek replicators and free energy.
"Data centre uses water for cooling" is also not news, not in the slightest, in any way, shape or form. It's how it's done. It's how it's been done for decades, at a huge scale, but that scale is absolutely dwarfed by... checks notes... everything else. Literally just the leaks in municipal water pipes use more water than a data centre.
"ChatGPT uses a lot of water" is either a hit-piece, or a failed journalist desperately trying to make a non-story into something that'll get eyeballs.
I guess they succeeded.
PS: That, or they're simply innumerate. The general population is! Teraliters, gigaliters, megaliters, they all sound big, ya know?
PPS: Some numbers! Sydney Water lost 25,700 megaliters to leaks in that one city per year. A "hyperscaler" data centre uses up to 750 megaliters per year for cooling. Azure uses maybe 10% of their cooling capacity for ChatGPT, but I suspect it's a lot less. That's... what.. 75 megaliters per year per data centre? Water pipe leaks are wasting 340x as much as AI compute, which is a useful software product. Water leaks are pure waste. https://www.smh.com.au/national/nsw/nine-per-cent-of-sydney-... and https://dgtlinfra.com/data-center-water-usage/
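Spelling out the arithmetic behind the "340x", as a sketch; note that the 10% ChatGPT share is this comment's own guess:

    # Numbers from the PPS above; all are rough, sourced estimates or guesses.
    sydney_leaks_ml = 25_700      # megaliters/yr lost to pipe leaks in Sydney
    hyperscaler_cooling_ml = 750  # megaliters/yr, upper bound for one datacenter
    chatgpt_share = 0.10          # guessed share of one datacenter's cooling

    chatgpt_water_ml = hyperscaler_cooling_ml * chatgpt_share  # -> 75 ML/yr
    print(f"Leaks vs. AI cooling: {sydney_leaks_ml / chatgpt_water_ml:.0f}x")
    # -> 343x, i.e. roughly the 340x claimed above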
Totally agree. Both water and electricity have fugitive-loss problems. Sydney has an interesting possibility: capturing an iceberg and towing it to one part of the harbor. At that point, waste heat from a data center could be useful in melting it into the water system.
Water has better energy transfer characteristics than air, especially during phase changes like evaporation and condensation. So it’s less about scale per se unless you mean scale of energy transfer per unit of time.
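A sketch of the magnitude involved, assuming a latent heat of vaporization of roughly 2.4 MJ/kg at typical cooling-tower temperatures:

    # Why evaporative cooling is so effective: the latent heat of vaporization.
    latent_heat_mj_per_kg = 2.4   # assumed; ~2.26 MJ/kg at 100 C, higher when cooler
    kg_per_liter = 1.0            # water

    heat_per_liter_kwh = latent_heat_mj_per_kg * kg_per_liter / 3.6  # 1 kWh = 3.6 MJ
    print(f"Heat removed per evaporated liter: {heat_per_liter_kwh:.2f} kWh")  # ~0.67

    # Rejecting 1 MWh of heat purely by evaporation would then consume roughly:
    print(f"{1000 / heat_per_liter_kwh:.0f} L per MWh")  # ~1500 L, i.e. ~1.5 L/kWh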
All that to get answers with some percentage of hallucination, whose logic can't be traced? It's just another tech dead-end that leaves us all worse off, and the haves having more. It's just more waste on the level of bitcoin. It's the nature of selfishness: shortsighted, with no global benefit, only global waste.
Compassion for the Earth, herself, dictates we forge a different path, where folks like Google, Facebook, and elmo are not deciding our collective fate.
Not sure if that's more than a person sitting in the office writing this, when you take into account all the electricity used by the building, maintenance, other people (catering, cleaning, etc.).
And I think this energy-shaming, whether it's aimed at AI, crypto, or something else, is just the wrong approach. If somebody buys energy, he can use it for mindless entertainment if he so pleases; why focus on datacenters and not Disneyland?
I think the idea behind energy shaming is that the price of energy rarely includes its externalities (resource depletion and pollution, mostly). "If you buy it, you do what you want with it" is of course legally correct. But I think the argument they're trying to make, perhaps subconsciously, is that there's someone else who will have to pay part of the price for it.
Can be true, but it applies to Disneyland as well.
But if you're a conservative cynic about Disneyland, people will just label you as some old sad geezer with outdated opinions yelling at kids playing in the garden. You have to hate on something new and trendy to keep up your consumer focus group engagement metrics these days.
Which makes you seem insincere.
Energy shaming isn’t a conservative schtick. The OP is right: why are we picking and choosing? Shame them all. Disney, Six Flags, Universal Studios, SeaWorld, Busch Gardens.
Water use is not an externality. All water used is paid for.
[citation needed] Nuclear power plants that use river water in their cooling cycle and pump the (now heated) water back into the river don't pay for that.
That's not the kind of water use the article is complaining about.
The cooling system illustrated in the article is very very similar to a typical nuclear power station cooling system. From the heat exchanger onwards, the secondary loop is functionally identical, the main difference is the primary loop working fluid (and that loop operates at a far lower temperature and pressure).
All water use includes that one, right?
The argument, I think, is that all unnecessary energy expense should be shamed equally, or not at all.
If "energy-shaming" is a bad practice, why start by mentioning energy consumed by the author of the article?
I think we can leave Disneyland aside and talk about the content of the article, and whether AI companies should receive the gentle treatment they have received: premium access to water, power, and money. Sure, they can buy it, but what about the people who have to absorb the externalities?
So how do you suggest we act in a world where energy usage ultimately has to be limited? Just let the market regulate that?
Then you end up with luxury villas with swimming pools in the desert while farmland dries out and people die of thirst. If you grow a system against stiff bounds, you will get hard saturation effects. And that means you can't have it all in such an environment.
Now, using that drought example, I think it is reasonable to say your pools have to stay empty: priority 1 is drinking water, and priority 2 is farmland. And if you are the type that would like to fill a pool in that hypothetical situation, you probably deserve to be shamed.
Claude estimates the average American consumes about 3800 liters of water per day counting all the food inputs (Claude only drank half a cup for this answer).
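Taking that estimate at face value against the article's figure of two litres per 10-50 queries (quoted further down the thread), a rough sketch of the comparison:

    # Both inputs are unverified estimates quoted in this thread.
    person_liters_per_day = 3800        # Claude's estimate, food inputs included
    liters_per_query_hi = 2 / 10        # article: 2 L per 10-50 queries
    liters_per_query_lo = 2 / 50

    print(f"{person_liters_per_day / liters_per_query_hi:,.0f} queries")  # 19,000
    print(f"{person_liters_per_day / liters_per_query_lo:,.0f} queries")  # 95,000
    # One day of one person's water footprint ~ 19,000-95,000 queries.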
You did not count the gathering of data and then training the LLM.
Inference consumes far more total compute than training the model.
By 'inference', do you mean using the model to compute a specific series of results?
Yes, training is a tiny amount of usage; this is well known.
Well, if that's the case, then the situation is worse than I imagined. I thought that training the model would be far more expensive computationally than using it to compute an 'answer'.
Thanks.
Training is much more computationally intensive than computing a _single_ answer. But not compared to running the model over time at scale for millions of users.
Thanks. That is precisely how I intuited the situation. 40+ years of programming FTW!
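The standard back-of-envelope for that intuition, using the common approximations of 6*N*D FLOPs for training and 2*N FLOPs per generated token for inference (rules of thumb, not measurements):

    # Rule-of-thumb FLOP accounting for a transformer with N parameters
    # trained on D tokens; the numbers below are hypothetical, GPT-3-scale.
    N = 175e9   # parameters (hypothetical)
    D = 300e9   # training tokens (hypothetical)

    training_flops = 6 * N * D          # ~3.2e23 FLOPs
    flops_per_token = 2 * N             # inference cost per generated token

    breakeven_tokens = training_flops / flops_per_token  # = 3 * D, for any N
    print(f"Inference overtakes training after ~{breakeven_tokens:.1e} tokens")
    # -> 9.0e+11 tokens, which millions of daily users exceed over time.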
Agreed. If it's useful, why not scale up the electricity and water supply, and make the latter sustainable?
Sometimes the reactionary rules make sense to try scaling down: electronics on standby can’t use more than 1 W, by law, in the EU and UK. Consumers were right to wonder: what was my TV doing that needed more than 1 W?
That's a great argument! Assuming every word generated by ChatGPT is a word which would otherwise have been typed by a human writer.
Is every word generated by ChatGPT a word which would otherwise have been typed by a human writer?
Why would that matter? Every system has inefficiencies. For example: Person sitting in office, trained for years, kept alive and breathing, being paid for by their employer — that is not writing that mail their boss is waiting for, but instead arguing about the resources used by ChatGPT.
Why wouldn't it matter? If ChatGPT produces words at roughly the same energy efficiency as human writers, but most of those words are unnecessary and would not have been produced if it wasn't for ChatGPT, it's worthwhile to discuss the energy cost.
ChatGPT is not housing anyone, nor is it providing office space. Even Disneyland is at least providing joy for children and adults.
You know what’s also not housing someone? Office space. This feels like the ultimate waste: massive, largely empty buildings in prime central locations with plentiful public transit, designed to house humans during the day in addition to their own houses. Those houses get pushed further and further away from the central locations because of the office space, and people are forced to commute long distances against their will simply because management wants them to. I’d rather have a data center.
I’ve used ChatGPT to teach myself several programming languages, and it has guided me through learning how to design circuits for my own projects, among many other uses. Professionally, I use LLMs to scale processes for detecting fraud and financial crimes such as money laundering and sanctions evasion, among other things, at much greater precision and recall than other models and human investigators, and at much larger scale (we still use those other vectors as well, but the major breakthrough for quality and scale was hierarchical LLM agents). We’ve done the analysis, and it has demonstrably improved our business and made the financial system in our corner safer and more stable. So, I dunno, my friend. I think you’ve perhaps got a bias based on your inexperience?
I think the question is how efficient ChatGPT is at providing economic value. I don't want to get into a game of anecdotes, because I don't want to deny that there are use cases well suited to LLMs, and you might be in such a field. The efficiency question is this: of all the queries LLMs serve, how many are hallucinations (which then need to be reprompted), how many are simple queries that could've been solved with more traditional search methods, and how many are queries that only LLMs could've solved?
I don't really have an answer to my own efficiency question and likely never will. But for every company like yours, there are companies that blindly introduce LLMs into their product for no other reason than hype. Spotify, your recommendations ("DJ") are no better because you used LLMs; collaborative filtering worked just fine for music. Meta/FB, your value prop to me as a social network hits rock bottom if my friends are not even sharing real-life experiences. Google, you're already on thin ice with all the SEO spam, but your value decreases even more if you give me hallucinations as summaries. Hence, sorry, but my outlook isn't rosy.
The thing is, at a certain point, economics is a bitch. Either efficiency improves or those uses stop. I’d note that inference is much cheaper than training and has rapidly become radically more efficient. There’s good reason to believe the current wild-eyed training boom will collapse on itself in terms of cost to benefit, or will be curtailed to match the economic value.
However, you need to understand that what you said is wrong. Many of these things can’t be solved with a simple query, because a query returns a blob of data and the human has to search for the information in a set of documents possibly containing it.
The UX of “I have a question” followed by “OK, here’s a likely direct answer that’s comprehensive and comprehensible, to which you can ask any follow-up” is unbeatable. It’s what AskJeeves.com, as one of the first search engines, tried and couldn’t achieve because NLP was so shitty. The fact that it might hallucinate is not even that relevant: on a pedestrian question about common knowledge, hallucination is very unlikely in a frontier model, since its training set likely contains a lot of examples of the answer and it can supplement with a basic RAG against a traditional search engine. And unless you haven’t used modern search engines recently, you know it’s almost impossible to get a straight answer from search due to SEO.
So the realistic answer to your rhetorical question is “every single one of those questions only an LLM could have solved.” Because they’re the first real NLP system we’ve invented that can give coherent answers to the question.
Very few search engine queries are seeking documents. Most are seeking an answer to a question. A search engine has been the best we could do until a few years ago to answer questions, and the user experience is fairly shit and rapidly deteriorating due to economics. It’s like asking a professor a question and being answered with a bibliography, then over time realizing that the professor is being paid under the table to stuff the bibliography with advertisements, that the rest of the bibliography has been replaced by other ads pretending to be source material, and that the professor only looked at the titles to select them. Not only is the answer not an answer - it’s a list of stuff to read which might contain the answer if you read it all - it’s increasingly unlikely the answer is even there at all. You can argue “but the professor answering with a bibliography ensures you learn a lot!” Which is fair, but people usually just want a direct answer and would sooner go to the cool professor down the hall who answers the question and gives the bibliography as citations (as modern frontier LLMs like ChatGPT and Claude do).
The fact that the cool professor dabbles in mushrooms too much during office hours is unfortunate, and the school administration is working on that. But it’s useful enough that people literally don’t care.
For use cases where the cost >> value, like collaborative filtering - well, fine. LLMs won’t be used there because the value doesn’t hold. We are only two or three years into this, and there’s a lot of just dumb stuff because no one knows what will stick. And LLMs will not be the answer to everything. But I think they’re going to become increasingly more powerful, simply because we will defer to the right algorithm for the problem, and the LLMs will be the glue of language and abductive reasoning that makes something useful, such as information retrieval, usable. And that alone is more valuable than Google’s search engine. And that is not the only use - as I’ve seen directly. So yes, LLMs won’t be grandmasters at chess beating Deep Blue. They don’t have to be. They can be the user interface to Deep Blue, and Deep Blue immediately becomes more usable and much more valuable.
The hardest problem in CS isn’t cache coherency, naming things, or off-by-one errors; it’s making useful things usable. And among all the other useful things LLMs have done, they have absolutely solved the hardest problem of all: making all the useful things usable.
I agree - it has no direct use for the wellbeing of humans. I'd say the indirect use (e.g. humans saving time [human time itself being probably the most energy-intensive thing: food, housing, entertainment] and gaining knowledge) has to surpass the costs in order for the investment to make sense.
No one is 'gaining knowledge'. LLMs are just black-box tools that compute pseudo-answers based on a somewhat arbitrary set of training data. It's a crapshoot that looks good the way ELIZA looks good.
Me: Please tell me an interesting fact I don't know.
GPT: Did you know that octopuses have three hearts and their blood is blue? Two of their hearts pump blood to the gills, while the third pumps it to the rest of the body. Interestingly, when an octopus swims, the heart that supplies blood to the body actually stops beating, which is one reason they prefer crawling to swimming—it’s less stressful on their system! Their blue blood is due to hemocyanin, a copper-based molecule that is more efficient than hemoglobin in cold, low-oxygen environments.
Me, after researching and corroborating each claim, which you should always do for any source: Wow, I didn't know some of that! Thanks for sharing!
---
This was no different than googling "interesting facts about octopuses" and skimming the first few links. And if I'd done that, you wouldn't be claiming that I didn't "gain knowledge". But by all means, commence the mental gymnastics that "prove" I haven't gained knowledge. Perhaps because I had to corroborate the information? No, you should always do that for anything. But I'm eager to hear your explanation for what really happened.
That's a lot of energy used to save you going through the octopus section of Wikipedia.
If only there was a test to find out if the person using it was a moron. Then it might be useful. Of course, Dunning-Kruger dictates that no one using such a system would ask such an important question. Also, the world is obviously well short of the necessary training data.
I've got yer real intelligence right here, dude. Ask away. I even know how to tell you that your question is a waste of time.
Amazing, you chose to move the goalposts. No longer are you bothering to defend your original claim that no knowledge was gained. Now your claim is "you could have just used another source"! That's great, thank you.
I can't tell if you're deliberately pretending I didn't give you a toy example or if you truly think I sit around querying animal facts all day.
> I even know how to tell you that your question is a waste of time.
The only waste of time has been attempting to engage with you intellectually, when all you're really interested in doing is "proving" that you're right. But if you're just going to keep moving goalposts we can just end this discussion here.
But this article wants to talk about unnecessary office cooling without talking about unnecessary offices. If this is such a big deal, then why aren’t office parks springing up in the Arctic?
My entire life trajectory has changed because of how LLMs have expanded my ability to write, think and reason. Have fun at Disneyland, though.
If you've experienced this life altering change and it's reproducible, you could show up a lot of AI critics with specifics. How have LLMs expanded your ability to reason, for example?
At the risk of sounding negative... the thing is, I have tried interfacing with skeptics, and most of them are just run-of-the-mill closed-minded: they have their minds made up before you even begin and will just argue in circles until it simply isn't worth it.
Meanwhile, a decent portion of my own peers echo my sentiment, mainly programmers. I don't have to argue with them. Instead I can analyze and improve methodologies with them, collaborate, and generally have positive, stimulating and thought-provoking conversations. You can imagine which of the two kinds of people I typically choose to engage with. Especially on a forum like HN, where meandering circular arguments are discouraged. Maybe the happy medium is just blogging about it. I really do want to blog and I'm trying to learn about it and make more time to write, but HN does already scratch a lot of that itch for me.
All in all... the only self-proclaimed AI critics left at this point are people who have either chosen to be deliberately ignorant about the benefits of this technology, or who lack the depth of skill or interest to find useful things for these models to do for them.
I've taken on increasingly ambitious projects because I now have little programmable assistants embedded into the UX of various tools that I work with. They're not perfect, sometimes we argue, sometimes I just have to turn them off for a few hours, roll up my sleeves and work through the muck. But at a high level, the benefits are huge. Modern adequately-trained LLMs are amazing at project scaffolding, architecture, design, documentation and generally being a backboard for my thoughts when I have no one else around to converse with. My typical day is highly varied and while I have specialist friends, none of them match the generalist abilities of some LLMs.
I am generally skilled enough with informatics to know when I need to corroborate or discard information (basic research skills learned independently of LLMs), and so "what if it's wrong / how will you know?" has always been an irrelevant, false paradox parroted by people lacking those basic skills, which should have been taught in school. These same people already struggle with finding accurate information via search engines or libraries; it's no wonder they struggle with something like an LLM, which can be very confidently wrong. With programming, it's easier. You can keep the model focused, break down tasks into tiny pieces, and solve and write tests for them one by one. It encourages you to write clean, modular, self-documenting code.
With other topics like science, art, and mathematics, I'm still consistently impressed by 4o and o1's capabilities, despite glaring shortcomings. I know you want very specific instances, but I really do have ADHD, so I am not exaggerating when I say my conversations with LLMs are highly multidisciplinary, and until just a couple of years ago, I had a hard time organizing some of these more esoteric and complex systems in my head. My typical conversations would probably be of little use to anyone else.
I'm now working in parallel on several scientific and mathematical inquiries, as well as a few ambitious engineering projects. My time spent researching has been greatly reduced, as I can get up to speed very quickly by dumping some articles into an LLM conversation and asking it detailed questions, asking it to provide thought models, etc., getting maximal value out of the information by immediately dialing into areas of interest.
Essentially... I've always felt naked without a phone or computer around to google the random questions and ideas I have throughout the day, and I consider the internet and search engines integral to who I am and the knowledge and skill I've attained. Now, instead of a search engine, I feel naked if I don't have a well-trained chat model around. I have developed a similar dependence as I have to search engines (I panic when I have a question, slap my leg and realize my phone isn't on my person) but the tradeoff is that I feel my thinking and doing has been augmented.
And even if transformers/LLMs hit a dead end in our lifetime... after all, many of these techniques are rooted in ideas from the 60s-90s which simply "didn't work" at the time, only to become relevant again after sufficient compute is accessible... I am eager to see what the next 40-odd years of technology brings us.
Regarding reproducibility, I think it would be extremely beneficial for people to be shown how AI can help them with day-to-day things, but I have a hard time recommending LLMs to people who I know aren't equipped with enough research skill to avoid harming or hindering themselves with incorrect advice. And my experience with these models is mainly academic and related to engineering, so I just don't have much to offer normies, but I'm sure other people have made a lot of headway on that.
Personally I think chat models in their current form just aren't natural or functional enough to appeal to the average person. I think the average person will see the biggest benefits from tooling built around modern models. I'm working on one right now, a grid-based component system which essentially lets you create tiny little tools that can speak with each other, and orchestrate them into one or more domain-specific UIs. Users can share and adapt apps or individual components, or create production-ready in-house creative or business tooling. Social productivity programs like this, as well as calendars, sheets, etc (all buildable within my app in minutes with a handful of stock components, with built-in multi-user support) will unlock the full power of advanced natural language models. Hoping to debut it sometime next year.
Disneyland also involves a lot of plane flights and driving to get to it. If you add all that up it's a lot more.
Yeah, I'm personally not a huge fan, but it's not nearly as bad as any gambling destination. That is nearly pure waste, with the possibility that people are seriously damaging themselves with addiction and alcohol.
There is a force inside us that seeks to convince us to undermine our own selves.
ChatGPT saved 3 hours of my time today.
You are far more energetically efficient than a data center.
Wetware >>> software
I really am not. I just drank 1 liter of water. ChatGPT consumed several orders of magnitude less doing much more than me. I also just heated up my meal, with meat and fries. ChatGPT didn't need that. All of that while walking around my apartment full of various "useless" stuff. ChatGPT doesn't care.
You're not counting the training of the LLM.
I am. That's amortized over millions of users and billions of invocations.
But who cares? I'd gladly pay for something to do in minutes what would've taken me hours. It doesn't need to be as efficient as my body in the same time frame.
ChatGPT doesn't enjoy anything for itself; it provides value to people. I don't use it much, but on occasion it's been helpful to me.
So do coal-fired power plants. So does our endless production of plastic waste and our ever-growing landfills. So does all our fossil-fuel use.
Where is it all taking us, friend? Most people are not honest enough with themselves to understand the bigger picture, nor are they selfless enough to give a darn and sacrifice some of their own benefit to help the whole.
Yes, let's all kill ourselves and save resources!
Yeah, having fewer data centers will cost lives; it won't be the lack of drinking water or living in an overheated environment!
My mistake is mistaking high karma for intelligence. Wrong again!
This anti-compute crusade you're on is very strange. Data centers are not why the planet is burning up. Maybe you should be less patronizing of others' intelligence and do a little research of your own so that your arguments will be more rooted in fact and not feelings.
~1-3% of global energy consumption is a paltry price to pay for what digital technology can offer us. Do we waste a lot of it clicking ads and watching Netflix? Yeah, a lot of people do, and I take issue with that, not... data centers.
...You know there's some options between "capitalism optimising for profit margins leads to unchecked consumption of resources" and "everyone has to die", right?
Yes, in the way that there are differences in pollution between using coal-fired plants and using LLMs.
do you live in the forest in peace with nature?
The question is how much value. Sure, learning about entropy & unblocking students is huge value - producing clickbait content does way less, imo.
You don't have to click on clickbait. If everyone voted with their clicks, it wouldn't make them a profit.
1. ChatGPT is in the title purely for click-related reasons. This is about data center water usage, and applies to any application that uses a data center.
2. Seems like water should be more expensive in water-stressed areas and that data centers should pay a rate that includes the cost of the externality. Is it, and do they?
1. Yeah, let’s talk Microsoft Teams instead. I suspect that’s worse than AI training…
2. The article I’d write starts with incumbent uses of water. Residential use, especially in dry areas, has been pressured to progressively increase efficiency. It does take energy to treat and pump and treat water again. But some people always wonder if they’re saving water just so another part of town can add residents or sell water to a new data center that runs Microsoft Teams.
Evaporative cooling shouldn’t pay the (embedded) full price of treating sewage. For example, in California, that’s charged at a fraction. But it still covers the cost of treatment, the administration of water rights, and pumping. And that’s enough for corporations to choose to put AI training in other places.
> talk Microsoft Teams instead. I suspect that’s worse than AI training…
If we assume that 25% of daily compute of 200 million people is spent on Electron and similar bloated software, and they're all using 50W computers (splitting the difference between laptops and desktops), that's about 7TWh per year: the annual electricity usage of Mongolia, or roughly 1% of Japan.
Which is actually less than I had thought, but it also doesn't include the server-side resources.
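For what it's worth, that figure only works out if you assume well under 24 hours of daily use. A rough sketch of the arithmetic, where the ~8 hours of active use per day is an added assumption and the other inputs are as stated above:

```python
# Back-of-envelope check of the bloated-software energy estimate above.
users = 200e6            # people
watts = 50.0             # average machine draw, W
bloat_fraction = 0.25    # share of compute attributed to bloated software
hours_per_day = 8        # assumed active use per day (my assumption)

twh_per_year = users * watts * bloat_fraction * hours_per_day * 365 / 1e12
print(f"{twh_per_year:.1f} TWh/year")  # ~7.3 TWh/year, about Mongolia's annual use
```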
One avocado takes about 320 liters of water; I'd rather give up avocados than Claude.
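Setting that against the article's estimate of two litres per 10-50 queries gives a rough exchange rate; a sketch with both inputs taken at face value, purely illustrative:

```python
# Rough exchange rate: one avocado's water vs. chatbot queries, using the
# article's figure of ~2 litres per 10-50 queries.
avocado_litres = 320.0
litres_per_query = (2.0 / 50, 2.0 / 10)   # 0.04-0.20 L per query

queries_low = avocado_litres / litres_per_query[1]   # ~1,600 queries
queries_high = avocado_litres / litres_per_query[0]  # ~8,000 queries
print(f"one avocado ~= {queries_low:.0f} to {queries_high:.0f} queries")
```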
How about T-shirts? Websites, ads and “fun fact” news articles tell me it takes on the order of 10,000 liters for each item... which seems a bit high, but I don't have the means to verify the number.
Surely not! I could drown a whole load of avocados in 320 L of water, couldn’t I?
In California, you could replace all your lawn with avocado trees. Might only double your water bill.
Most people eat avocados not for their nutrition but because they are delicious. I would definitely suggest not buying and eating them if you don't find them delicious! There are much cheaper options.
Does Claude offer much beyond its utility as a tool? I mean, it's ok to like tools too. But I'm not sure I'd want to pay an ongoing cost for a tool just because I enjoy using it. It has to deliver utility too. And if it does then you shouldn't have to give up avocados, in fact you should be able to buy more!
If/when we get to the Her level of AI then things will be interesting...
From the article:
> Using the chatbot for between ten to 50 queries consumes about two litres of water, according to experts from the University of California, Riverside.
From ChatGPT:
> A single BTC transaction consumes approximately 1,000–1,500 kWh of electricity, with cooling processes for fossil fuel and nuclear power plants using about 2–3 liters of water per kWh. This translates to an indirect water consumption of around 2,500–3,750 liters (660–990 gallons) per transaction, depending on the energy mix.
If accurate, I still prefer ChatGPT.
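The quoted figure is at least internally consistent, being just the product of the two claimed ranges. A quick check of the multiplication only; the kWh and litres-per-kWh inputs are themselves contested:

```python
# Check only the arithmetic in the quoted answer, not its inputs.
kwh_low, kwh_high = 1000, 1500   # claimed kWh per BTC transaction
l_per_kwh = 2.5                  # midpoint of the claimed 2-3 L/kWh

print(kwh_low * l_per_kwh)       # 2500.0 L, the quoted low end
print(kwh_high * l_per_kwh)      # 3750.0 L, the quoted high end
```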
> If accurate, I still prefer ChatGPT.
It looks like you are using ChatGPT too much and feeding yourself untrustworthy information for no reason.
Internet search still exists, there is no need to use ChatGPT for this. Simply typing "how much water per btc transaction" into Google will give you this top response much faster than ChatGPT would:
> Buying or selling bitcoin uses 16,000 litres of clean water for every single transaction.
This is reported by New Scientist, whose reporting is based on actual calculations (using real data) from a PhD candidate at the Amsterdam School of Business and Economics.
Whereas the number given by ChatGPT is based on nothing but the statistical probability of what would sound correct in this context, and has no guarantee of accuracy at all. In this case it was off by a factor of four to six compared to the number based on real data.
The internet search probably used a lot less water as well (besides being both faster and more trustworthy).
How is it “consumption” (in either case) if the water basically just runs through a pipe and is released a couple of miles downstream, or evaporates into the atmosphere, without being polluted?
We're having a global heating problem, so the water use is just a symptom of our systemic abuse of the planet's resources.
It’s irresponsibly common for journalists to use the term “consumption” for once-through cooling.
But evaporative cooling consumes water just like the human body.
Here's the reality: just because a lot of people are doing something, it doesn't mean they're not sunk-cost fools.
More data-center demand means more energy and water demand. Can this in turn incentivize in-situ water reclamation/desalination and energy generation (solar, nuclear), driving down prices for both? Maybe one day, data centers will crystallize cities around them, like the mono-cities of the USSR, but far more robust. You could even think of these as oases of a sort!
> data centers will crystalize cities around them
Most datacenters run with pretty small staffs.
Every time they announce "new datacenter to bring 4000 jobs", a large proportion of those jobs are in construction, with only a small fraction being operational staff.
Globally, there are roughly 2 million datacenter staff across about 8,000 datacenters, which averages out to about 250 people per site. 250 people at a huge physical site isn't a good start for a city.
> Many data centres use water-based systems to cool the plant with towers evaporating the heat, like a huge perspiration system, which means that the water is lost.
Someone doesn't know what "evaporation" or "lost" means.
Someone doesn't believe in global heating.
Total datacenter energy use is on the order of 1PWh/yr. Total global solar surface irradiance is on the order of 400,000PWh/yr.
The direct heat contribution is negligible to global temperature.
There is an argument that water vapour in the atmosphere is a greenhouse gas, but also an argument that clouds reflect solar energy, and that this water vapour is emitted at ground level. I don't think it's so obvious that water evaporation from datacenter cooling systems is directly either good or bad for global heating.
There certainly are negative environmental and social effects of this water usage in some places and implementations, and some could carry real heating potential, but blaming global heating on it is an over-reduction without further context.
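A minimal sketch of the scale argument, using the two order-of-magnitude figures above:

```python
# Direct datacenter waste heat vs. solar energy reaching the surface,
# using the order-of-magnitude figures above.
datacenter_pwh = 1.0       # ~1 PWh/yr total datacenter energy use
solar_pwh = 400_000.0      # ~400,000 PWh/yr surface solar irradiance

print(f"{datacenter_pwh / solar_pwh:.1e}")  # 2.5e-06, i.e. ~0.00025%
```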
You can simultaneously believe that the energy demand for cooling is unethical and that evaporation is the best method of cooling.
apparently, when water is used for purposes that annoying people dislike for ideological reasons, it disappears into another dimension, lost forever.
Yeah, but a group of people like golf courses more than a larger group of people like artichokes. We have a civilized arrangement of sharing that doesn’t make those groups fight in a cage. But in places where this matters, golf courses will always come up before your point.
Using 40 ml of water to ask ChatGPT how many grains of rice 40 ml of water would grow, and the answer is ~1.
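That ~1 figure roughly checks out under commonly cited assumptions (around 2,500 litres of water per kilogram of paddy rice, ~25 mg per grain); a sketch using those assumed values, not measured data:

```python
# Rough check of the ~1 grain claim, under assumed values.
litres_per_kg = 2500.0     # commonly cited water footprint of paddy rice
grams_per_grain = 0.025    # ~25 mg per grain of rice

rice_grams = 0.040 / litres_per_kg * 1000   # grams grown by 40 ml of water
print(rice_grams / grams_per_grain)         # ~0.64, i.e. on the order of 1 grain
```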
If you build the datacenter by the sea, or floating, you could combine the cooling with desalination.
"If my aunt had a d_ck, she'd be my uncle." --Louisiana saying
I like your idea, but there's no patience or monetary motivation to postpone the energy use until more sane facilities are built.
Still, in places like, say, the Arab states, where you could put a lot of solar in the desert and there's a real need for desalination, it could work in principle. Though I think they still tend to use oil.
Used water doesn't go to a wastewater treatment plant.
The water doesn’t disappear.
Most data centres use water in their chiller units, as do many large industrial facilities.
ChatGPT is a minority of the energy, heat, and cooling budget of Azure, which is a minority of the world’s data centre footprint.
ChatGPT is useful.
This article is not.
TFA:
> Many data centres use water-based systems to cool the plant with towers evaporating the heat, like a huge perspiration system, which means that the water is lost.
> All of these companies have schemes to put water back into nature using projects that help river flow, capture rainwater, recharge aquifers and modify dams. They have all pledged to become “water positive” by 2030: returning more than they consume.
> But where are they putting back the water? Quite often it’s not in the same place it was taken out, which can be in areas of “water stress”.
It's not about whether or not the water is gone for good after cooling. It's about allocation of resources. The water cooling ChatGPT servers right now is water not delivered to households or the larger ecosystem. Data centers are, in effect, in competition with the rest of us for who gets to use the water right now.
>> ChatGPT is useful. This article is not.
https://news.ycombinator.com/newsguidelines.html please:
> Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something.
Please don’t be that person quoting the rules when you don’t understand the arguments.
This hit piece, from someone whose job is directly endangered by LLMs, is in the same absurd category as the articles about how AirPods contribute to e-waste. Just look at them! They’re tiny! Or the breathless editorials blaming Elon for destroying the environment with the damage his rocket did to the launchpad: a one-time incident that threw a few chunks of concrete onto a beach and killed not a single animal.
In a way, the more absurd and outlandish the claim, the more it is circulated by people who want to stop something they’re scared of, usually in the sense of “this is bad for my career”.
“Windmills kill birds, didn’t you know?!” — someone working for the petrochemical industry.
Write down two numbers before replying: the gigalitres of water used industrially in a similar manner — don’t forget to include literally every skyscraper’s HVAC system that uses evaporative cooling — and then the liters used by ChatGPT.
Divide the two numbers.
Then, and only then, try to make a coherent argument about why ChatGPT is specifically a worse water-usage problem than, say, the rest of the Azure data center where it’s hosted that the NY Times hack who’s about to be replaced by a bot strangely had no problem with.
What’s wrong with this perspective? Why are the scale of incumbents not part of these stories?
"Industrial activity requires inputs" is not news, not even remotely. We don't have Star Trek replicators and free energy.
"Data centre uses water for cooling" is also not news, not in the slightest, in any way, shape or form. It's how it's done. It's how it's been done for decades, at a huge scale, but that scale is absolutely dwarfed by... checks notes... everything else. Literally just the leaks in municipal water pipes use more water than a data centre.
"ChatGPT uses a lot of water" is either a hit-piece, or a failed journalist desperately trying to make a non-story into something that'll get eyeballs.
I guess they succeeded.
PS: That, or they're simply innumerate. The general population is! Teraliters, gigaliters, megaliters, they all sound big, ya know?
PPS: Some numbers! Sydney Water loses 25,700 megaliters per year to leaks in that one city. A "hyperscaler" data centre uses up to 750 megaliters per year for cooling. Azure uses maybe 10% of its cooling capacity for ChatGPT, but I suspect it's a lot less. That's... what... 75 megaliters per year per data centre? Water pipe leaks are wasting 340x as much as AI compute, which is a useful software product. Water leaks are pure waste. https://www.smh.com.au/national/nsw/nine-per-cent-of-sydney-... and https://dgtlinfra.com/data-center-water-usage/
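The 340x follows directly from those numbers; a quick reproduction, where the 10% share is, again, just my guess rather than a published figure:

```python
# Reproduce the leak-vs-ChatGPT comparison from the figures above.
sydney_leaks_ml = 25_700   # megaliters lost to leaks per year (SMH figure)
hyperscaler_ml = 750       # upper-bound cooling water per datacenter per year
chatgpt_share = 0.10       # guessed share of that cooling used by ChatGPT

chatgpt_ml = hyperscaler_ml * chatgpt_share  # 75 ML/yr
print(sydney_leaks_ml / chatgpt_ml)          # ~343x
```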
Totally agree. Both water and electricity have fugitive problems. Sydney has an interesting proposal: capturing an iceberg and towing it to one part of the harbor. At that point, waste heat from a data center could be useful in melting it into the water system.
Further, it could run on air cooling; it's just that water (or another liquid medium) is easier to handle at scale.
Water has better energy transfer characteristics than air, especially during phase changes like evaporation and condensation. So it’s less about scale per se unless you mean scale of energy transfer per unit of time.
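The phase-change advantage is easy to put numbers on: evaporating a kilogram of water absorbs over two orders of magnitude more energy than warming the same mass of air by 10 K. A sketch using standard physical constants:

```python
# Energy absorbed per kilogram: evaporation vs. sensible heating.
LATENT_HEAT_VAPORIZATION = 2.26e6   # J/kg, water (standard value)
C_WATER = 4186                      # J/(kg*K), liquid water
C_AIR = 1005                        # J/(kg*K), air

dt = 10  # a 10 K temperature rise for the non-evaporative cases
print(LATENT_HEAT_VAPORIZATION)     # ~2.26 MJ: evaporate 1 kg of water
print(C_WATER * dt)                 # ~42 kJ: warm 1 kg of water by 10 K
print(C_AIR * dt)                   # ~10 kJ: warm 1 kg of air by 10 K
```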
All that to get output with some percentage of hallucinations, whose logic can't be traced? It's just another tech dead end that leaves us all worse off, and the haves having more. It's just more waste on the level of bitcoin. It's the nature of selfishness: shortsighted, with no global benefit, only global waste.
Compassion for the Earth herself dictates that we forge a different path, one where folks like Google, Facebook, and elmo are not deciding our collective fate.
Unfortunately we're in a global prisoner's dilemma. Nobody wants the future we're all desperately racing towards building.
Somebody wants it, or it wouldn't be getting built. No, the correct framing is that this is a case of the ignorant leading the powerless.