Here's what I tell people at work: It's OK to use AI, but you must say it's AI. If you post something and say "Here's what GPT-5 says" - great. Love the efficiency. If someone asks you to do something and you respond with clearly AI-generated crap masquerading as your own, you will be getting a piece of my mind.
> "Here's what GPT-5 says"
This drives me nuts. It's often wrong, but then I have to do the research to prove it before the conversation can get back on track.
I use AI mostly for writing docs and always make sure the documents have an "AI generated content" notice as the first thing readers see.
In the codebase itself I add in-line comments pointing to precisely where AI was used.
AI has also proven very useful for writing extensive in-line comments, as my employer is pushing hard for our Ops guys to learn IaC even though the vast majority have little to no development experience.
Contextual comments explaining the _what, why & how_ of loops, conditionals, etc. have (so far, anyway) proven quite successful.
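A sketch of what that commenting style might look like in practice. The attribution marker and the _what/why/how_ labels are just one hypothetical convention, not a standard, and the retry-backoff helper is an invented example:

```python
# AI-GENERATED: the function below was drafted with an LLM and reviewed
# by a human. (This marker format is an example convention, not a standard.)

def backoff_delays(retries, base=1.0):
    """Return the wait times for a retry loop with exponential backoff."""
    delays = []
    # WHAT: compute one delay per retry attempt.
    # WHY:  spacing retries out reduces load on a struggling service.
    # HOW:  each delay doubles the previous one (base * 2**attempt).
    for attempt in range(retries):
        delays.append(base * 2 ** attempt)
    return delays

print(backoff_delays(4))  # [1.0, 2.0, 4.0, 8.0]
```

The point isn't the code itself but that a reader with no development background can follow each construct from the comments alone.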
It's funny how people can't help themselves, despite realizing that the question just around the corner is "if all you are is an interface to ChatGPT, what are we paying you for, exactly?"
It worries me how many people prefer using AI over doing their own thinking. How much of your life will you "live" on autopilot? You hollow out your own soul little by little when you do things like that.
This is 100 times scarier, and more likely, than the "ChatGPT will become Skynet and nuke the world" and "AI will replace every job in 5 years" pipe dreams.
> How much of your life will you "live" on autopilot?
If you start doing it in school, presumably the rest of your life, since you'll have no skills or ability to learn.