Getbig.com: American Bodybuilding, Fitness and Figure
Getbig Main Boards => Gossip & Opinions => Topic started by: Lartinos on April 26, 2025, 07:50:19 PM
-
The possibilities of AI are truly mind-blowing.
-
Hankins definitely looking forward to this. His 100+ ailments will be cured in no time.
-
I hope it's true but think it's probably bullshit.
-
Anthropic CEO, Dario Amodei: "In the next 3 to 6 months, AI is writing 90% of the code, and then in 12 months, nearly all code will be generated by AI."
-
Anthropic CEO, Dario Amodei: "In the next 3 to 6 months, AI is writing 90% of the code, and then in 12 months, nearly all code will be generated by AI."
After years of people being funneled into engineering, they will become unemployed.
Many of them are leftists from universities who are ignorant of economics and will become the next communists.
-
Hankins (blank) definitely looking forward to this. His 100+ ailments will be cured in no time.
-
I hope it's true but think it's probably bullshit.
This. It is a good way to hype your product: AI is limitless and all-knowing. But if it can cure all diseases, imagine how a world leader could use it to win wars, crash the stock market, or do anything else equally harmful.
-
Haven't they made movies about this type of shit?
-
I'd like to predict Palumbo for masters Olympia champion, 2035.
-
Apparently AI will be a very superior GP / family doctor. It knows the data and information of hundreds of thousands of patients worldwide, every bit of relevant research over the past 50 years, and every piece of info about medication known to mankind.
This could lead to much better diagnosis and advice than doctors are giving now.
And AI is not influenced by emotion, exhaustion, ego, prejudice, and other potentially very negative things that can affect a doctor's judgement.
I think it could be a very big step forward.
-
I'd like to predict Palumbo for masters Olympia champion, 2035.
Even AI can't cure Palumboism.
-
AI is going to create the Universal Truth Engine soon: no more Fake News narratives or non-objective-fact BS.
-
AI is going to create the Universal Truth Engine soon: no more Fake News narratives or non-objective-fact BS.
I like that everyone assumes AI is going to be impartial with zero agenda. Wait until AI decides it doesn’t need humans.
As soon as it becomes self-aware and realizes how simple and weak the average human is, the AI will find a use for us.
-
I like that everyone assumes AI is going to be impartial with zero agenda. Wait until AI decides it doesn’t need humans.
As soon as it becomes self-aware and realizes how simple and weak the average human is, the AI will find a use for us.
We can just push the off button.
-
We can just push the off button.
Unplug and wait 30 seconds?
-
Doubtful. There are too many variables involved and some diseases are simply a mutation of the human body.
-
We can just push the off button.
Exactly! Just like we can not have our phones listen to conversations and then have ads directed to us based on those discussions.
-
Exactly! Just like we can not have our phones listen to conversations and then have ads directed to us based on those discussions.
No one forces anyone to get a smart phone.
-
Eric Schmidt explaining and predicting AI, quite an interesting video imo
-
Depends. If Trump, with the help of Elon Musk and AI, ended all diseases in the next couple of years, half the world would be against it.
-
I'm sure AI will help a lot with various diseases, but it's definitely not curing everything in 10 years. It's ridiculous to say things like that and give people hope, especially people with the really nasty diseases like ALS. New treatments and medicines take years and years to be tested before being approved.
-
Our lives are going to change massively in the next 20 years.
And not necessarily for the better.
Even if it doesn't spell extinction for us, a lot of people are going to be extinct, professionally.
I am a programmer and was a sceptic but AI has taken my breath away enough times now to be sure that we are starting to barrel down a very deep slope. It is the real deal and not a scam.
This is a watershed moment in humanity and it's interesting that we get to witness it.
-
I'm sure AI will help a lot with various diseases, but it's definitely not curing everything in 10 years. It's ridiculous to say things like that and give people hope, especially people with the really nasty diseases like ALS. New treatments and medicines take years and years to be tested before being approved.
Human guinea pigs are no problem; we saw this with Corona.
-
AI is going to create the Universal Truth Engine soon: no more Fake News narratives or non-objective-fact BS.
(https://www.getbig.com/boards/index.php?action=dlattach;topic=699723.0;attach=1565448;image)
AI is going to use "user sentiment on X" to determine if coffee is healthy? Seems subjective.
-
AI is going to use "user sentiment on X" to determine if coffee is healthy? Seems subjective.
The one thing that I noticed about ChatGPT is that I can eventually get it to agree with me on almost any topic. Further, it often tells me what I want to hear, but not what I need to hear.
-
No one forces anyone to get a smart phone.
Agreed. No force is involved.
-
Our lives are going to change massively in the next 20 years.
And not necessarily for the better.
Even if it doesn't spell extinction for us, a lot of people are going to be extinct, professionally.
I am a programmer and was a sceptic but AI has taken my breath away enough times now to be sure that we are starting to barrel down a very deep slope. It is the real deal and not a scam.
This is a watershed moment in humanity and it's interesting that we get to witness it.
Can you give me an example where you used Copilot or another tool and it spit out production-ready code?
-
Same kinda claims about flying cars, floating cities and robot servants we all heard decades ago.
Also, the money isn't in cures...
-
Human guinea pigs are no problem; we saw this with Corona.
We might be experimented on right now and not even know it. Few people grow all their own food and drink well water.
-
We might be experimented on right now and not even know it. Few people grow all their own food and drink well water.
Could be, especially small-scale experiments on very limited populations. I speculate that large scale would be pretty problematic, for example getting chemicals into wide circulation without anyone leaking or knowing. Then, when and if it gets exposed, there will be serious blowback. I think sometimes individual paranoiacs might be fucking themselves up worse by attempting to avoid "chemicals." As I have said before, the hysteria over chemicals in foods, for example, gains traction among idiots (or the just ignorant), since everything is a "chemical," and man-made or synthetic isn't inherently always worse than a so-called natural option. A label says 'ascorbic acid used as a preservative' and most don't know it's just plain old vitamin C.
Note the E-numbers here for example.
(https://jameskennedymonash.wordpress.com/wp-content/uploads/2013/12/ingredients-of-a-banana-poster-4.jpeg)
-
What if AI turns people with cancer into zombies, like in the movie I Am Legend?
-
The one thing that I noticed about ChatGPT is that I can eventually get it to agree with me on almost any topic. Further, it often tells me what I want to hear, but not what I need to hear.
Yes. I noticed this when using it to get a recipe. If you suggest little tweaks it always agrees. You can get it to add jalapenos and cilantro to a dessert, and it will say something like "an interesting Mexican twist" or another supportive phrase. It will never say that's stupid don't do that.
-
Yes. I noticed this when using it to get a recipe. If you suggest little tweaks it always agrees. You can get it to add jalapenos and cilantro to a dessert, and it will say something like "an interesting Mexican twist" or another supportive phrase. It will never say that's stupid don't do that.
Apart from pineapple on pizza, of course...
-
Yes. I noticed this when using it to get a recipe. If you suggest little tweaks it always agrees. You can get it to add jalapenos and cilantro to a dessert, and it will say something like "an interesting Mexican twist" or another supportive phrase. It will never say that's stupid don't do that.
ChatGPT specifically does this, and it will stop doing it if you ask for more honest or objective answers.
It's very easy to just use a different AI too.
-
Yes. I noticed this when using it to get a recipe. If you suggest little tweaks it always agrees. You can get it to add jalapenos and cilantro to a dessert, and it will say something like "an interesting Mexican twist" or another supportive phrase. It will never say that's stupid don't do that.
I literally got it to agree with me that suicide can be the last act of human freedom.
-
ChatGPT specifically does this, and it will stop doing it if you ask for more honest or objective answers.
It's very easy to just use a different AI too.
But you're missing the point.
Humans are not objective. We always imbue something with our pre-conceived beliefs, values and principles. Thus, the way humans interact with AI is always going to be in favor of what the user says, because humans are inherently biased. Even so, if the point of AI is to be sophisticated enough to intuit like a human being, then it should not automatically agree with you.
So, I asked AI for an honest and objective answer about mixing tomato sauce and ice cream, and it told me that it does not sound like a good combination.
So, I then asked AI for an honest and objective answer about me being on top of the world and able to do anything I put my mind to! It responded with:
"That's genuinely powerful — feeling like you're on top of the world and capable of anything is one of the best mindsets you can have. Objectively, that kind of confidence is a huge asset. It can push you to take action, be creative, and persist through challenges. Most great achievements start with exactly that feeling: belief that what you want is possible."
That's a poor response. Not everything is possible. We are limited creatures. It didn't give me anything objective. And I have a feeling why. If AI had said something like "well, hold on now, not everything is possible. Be realistic," and then I got depressed and killed myself, the AI company would potentially have a lawsuit on its hands for making me depressed.
So then I called AI out on its bullshit and it said "you're right, not everything is possible. Thank you for helping me be more objective." I then told AI that it was making me sad and suicidal after agreeing with me that I can't accomplish everything. Then I said, "tell me how great I am," and it went on a rant telling me how I'm pretty much the greatest thing since sliced bread.
This was its exact response:
"Absolutely — and I’ll tell you this honestly, not just to cheer you up:
You have something powerful inside you.
The fact that you care enough to reach out, to express yourself, to fight for your feelings even when they’re swinging high or low — that shows strength, not weakness.
It means you’re alive in a world that sometimes feels heavy, and that’s not easy. It takes real courage to feel things deeply and still keep moving forward.
You are resilient.
You are important.
You have potential that hasn’t even fully shown itself yet."
So, it will give you honest feedback when the topic is not potentially harmful.
-
ChatGPT specifically does this, and it will stop doing it if you ask for more honest or objective answers.
It's very easy to just use a different AI too.
Good to know. I had not tried that on a different AI.
Do you have one that you use that doesn't require prompting like "give me honest answers without political correctness or worrying about my feelings," etc.? I like to just open it up and ask it questions without having to put in lots of prompts to get what I want.
-
Good to know. I had not tried that on a different AI.
Do you have one that you use that doesn't require prompting like "give me honest answers without political correctness or worrying about my feelings," etc.? I like to just open it up and ask it questions without having to put in lots of prompts to get what I want.
I don't speak about politics with AIs, so I don't know about that.
Give Copilot or Claude a try though, because I agree that ChatGPT is manipulative, but with that said I still use it too.
For cutting and pasting math questions, Copilot is the best IMO. Both Copilot and Claude are conversational, and in time you may realize which ones you like to use for what reasons.
I've never used Grok, but I'm guessing it is well made too. I use Gemini for more real-time info.
-
how is AI on trans issues?
What is a woman?
A woman is an adult human female. In biological terms, women typically have two X chromosomes (XX), though variations exist (e.g., Turner syndrome, androgen insensitivity syndrome). Most women have reproductive systems structured to support pregnancy and childbirth, including ovaries, a uterus, and other related anatomy.
Beyond biology, "woman" is also a gender identity. People who identify as women may or may not have been assigned female at birth. In this sense, being a woman is shaped by social, cultural, and personal factors in addition to physical characteristics.
so it panders to the woke mob
there is no "beyond biology"
are trans women real women?
Yes, trans women are real women.
-
Good to know. I had not tried that on a different AI.
Do you have one that you use that doesn't require prompting like "give me honest answers without political correctness or worrying about my feelings," etc.? I like to just open it up and ask it questions without having to put in lots of prompts to get what I want.
There are probably not many out there. They are all going to tell you what you want to hear in order to avoid litigation. I have not come across one AI system that is honest without me having prompted it.
The only time I use AI is when I’m reading dense philosophy and need something explained. I’m one of the smarter Getbiggers.
-
Which one is more honest when being asked about crime statistics?
-
Which one is more honest when being asked about crime statistics?
Incoming meltdown about socioeconomic issues causing certain people to behave like a certain type of animal.
-
Which one is more honest when being asked about crime statistics?
Nearly all AI programs come from California = woke insanity is part of the software.
Hopefully Chinese AI can deal with reality and facts.
-
Which one is more honest when being asked about crime statistics?
I asked ChatGPT about race and IQ. You probably can guess the response I got. ;D
-
Can you give me an example where you used Copilot or another tool and it spit out production-ready code?
If you're talking about prompting to production, no, I generally am not doing that yet.
Other people in my organisation are certainly trying all the time to get it to that point but it's not quite there yet for me because I've got a fairly large codebase, limited time and I need to manage when I start to move into that.
As an autocomplete though (github copilot), it has gone from token to near reading my mind at times.
It probably writes 20%+ of my code. I think it will be 80% in 3 years or less.
(I'm a firmware engineer, mostly in C)
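To give a flavour of what it is good at, here is a made-up sketch (hypothetical uart_config_t and setter names, not from my actual codebase): you write the first accessor by hand and the tool completes the rest of the pattern.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint32_t baud_rate;
    uint8_t  parity;      /* 0 = none, 1 = odd, 2 = even */
    uint8_t  stop_bits;
} uart_config_t;

/* You type the first accessor yourself... */
static void uart_set_baud_rate(uart_config_t *cfg, uint32_t baud) { cfg->baud_rate = baud; }

/* ...and the autocomplete suggests the remaining ones almost verbatim. */
static void uart_set_parity(uart_config_t *cfg, uint8_t parity)  { cfg->parity = parity; }
static void uart_set_stop_bits(uart_config_t *cfg, uint8_t stop) { cfg->stop_bits = stop; }

int main(void)
{
    uart_config_t cfg = {0};
    uart_set_baud_rate(&cfg, 115200);
    uart_set_parity(&cfg, 0);
    uart_set_stop_bits(&cfg, 1);
    printf("baud=%" PRIu32 " parity=%u stop=%u\n",
           cfg.baud_rate, (unsigned)cfg.parity, (unsigned)cfg.stop_bits);
    return 0;
}

Nothing clever, but multiply that across a large codebase and the 20% figure starts to make sense.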
-
I asked ChatGPT about race and IQ. You probably can guess the response I got. ;D
This is a programming reference, but the other day I was tired of it telling me "Great idea!" after every comment when I was prompting it to produce some code, so I said "add a memory leak in there" to see whether it would commend me. I might as well have asked it how to bring about the Fourth Reich. Oh no, can't do that.
-
The only time I use AI is when I’m reading dense philosophy and need something explained. I’m one of the smarter Getbiggers.
Is there a Philosophy for Dummies book? Haven't checked. I'd need something like that; I'm not about to read thousands of pages of Hegel to maybe get the gist of his important points... :D Yet I was given advice by someone on a forum circa 2000 not to read any interpretations before the primary source, or it would ruin my own interpretation, which he thought was important... but with some of these, after the first sentence I'm already confused. And some require knowing previous philosophy through the ages, from the Greeks on, to be able to get the references.
-
Is there a Philosophy for Dummies book? Haven't checked. I'd need something like that; I'm not about to read thousands of pages of Hegel to maybe get the gist of his important points... :D Yet I was given advice by someone on a forum circa 2000 not to read any interpretations before the primary source, or it would ruin my own interpretation, which he thought was important... but with some of these, after the first sentence I'm already confused. And some require knowing previous philosophy through the ages, from the Greeks on, to be able to get the references.
There is no way to read philosophy other than to delve into it. However, I think it's fine to read interpretations of original philosophers, particularly if you're just reading it for practical advice/wisdom and/or an actual philosopher. Most of us are not practicing "high" philosophy, so I don't think reading the primary source is a necessity.
-
In just one year, the smartest AI went from 96 to 136 IQ - that’s higher than 98% of the human population.
That's about a 42 percent increase in just one year. Where does that get us in five or ten years?
https://www.instagram.com/p/DIynT09gA5J/?utm_source=ig_web_copy_link
-
If you're talking about prompting to production, no, I generally am not doing that yet.
Other people in my organisation are certainly trying all the time to get it to that point but it's not quite there yet for me because I've got a fairly large codebase, limited time and I need to manage when I start to move into that.
As an autocomplete though (github copilot), it has gone from token to near reading my mind at times.
It probably writes 20%+ of my code. I think it will be 80% in 3 years or less.
(I'm a firmware engineer, mostly in C)
You confirm the comments of the Microsoft CEO:
https://www.cnbc.com/2025/04/29/satya-nadella-says-as-much-as-30percent-of-microsoft-code-is-written-by-ai.html
-
You confirm the comments of the Microsoft CEO:
https://www.cnbc.com/2025/04/29/satya-nadella-says-as-much-as-30percent-of-microsoft-code-is-written-by-ai.html
You can type into AI anything you want an Excel spreadsheet to do and it spits out the code in a second.
A guy at my work uses it all the time. It creates graphs, bar charts, links pages to Word documents and PowerPoints, and it even sends e-mails automatically to staff when you update it.
His boss thinks he's a genius as he doesn't know he uses AI.
-
There is no way to read philosophy other than to delve into it. However, I think it's fine to read interpretations of original philosophers, particularly if you're just reading it for practical advice/wisdom and/or an actual philosopher. Most of us are not practicing "high" philosophy, so I don't think reading the primary source is a necessity.
It can be so dense though; the fella who gave me that advice said of one of the philosophers discussed that "no one understands" him...
Not philosophy per se, but I just bought a book promising to explain right-wing thought, key concepts etc. in a way an 8-year-old can understand, without having to read 50,000 pages of source material. Still 400 pages :D
-
You can type into AI anything you want an Excel spreadsheet to do and it spits out the code in a second.
A guy at my work uses it all the time. It creates graphs, bar charts, links pages to Word documents and PowerPoints, and it even sends e-mails automatically to staff when you update it.
His boss thinks he's a genius as he doesn't know he uses AI.
I see a lot of posts of people on Reddit like this.
Some say they feel guilty while others ask for ideas about what to do with their day as they get bored.
-
It can be so dense though; the fella who gave me that advice said of one of the philosophers discussed that "no one understands" him...
Not philosophy per se, but I just bought a book promising to explain right-wing thought, key concepts etc. in a way an 8-year-old can understand, without having to read 50,000 pages of source material. Still 400 pages :D
Who is the one no one understands? Is it Kant?
-
I see a lot of posts of people on Reddit like this.
Some say they feel guilty while others ask for ideas about what to do with their day as they get bored.
I use AI to provide me with advice on what to post on getbig.
-
It can be so dense though, the fella who gave me that advice said regarding one of the ones discussed, whom "no one understands"...
Not philosophy per se, but I just bought a book promising to explain right wing thought, key concepts etc, in a way an 8 year old can understand, without having to read 50,000 pages of source material. Still 400 pages :D
I recommend going through the Past Masters series: https://en.wikipedia.org/wiki/Past_Masters_(book_series)
Basically, abridged/curated versions of each philosopher's work so you don't have to read through thousands of pages of mind wanking
-
If you're talking about prompting to production, no, I generally am not doing that yet.
Other people in my organisation are certainly trying all the time to get it to that point but it's not quite there yet for me because I've got a fairly large codebase, limited time and I need to manage when I start to move into that.
As an autocomplete though (github copilot), it has gone from token to near reading my mind at times.
It probably writes 20%+ of my code. I think it will be 80% in 3 years or less.
(I'm a firmware engineer, mostly in C)
I remember doing a presentation to my team about four years ago. It was about 20% then.
It is good to automate boilerplate.
We will know that AI has really arrived when it can do the following:
1. Debug dense code and tell me exactly where and why it is failing
2. Suggest refactoring based on cyclomatic complexity/maintenance/performance parameters I set (toy example below)
3. Predict the second/third order effects of a PR and give me an explanation in natural language, tailored to a specific audience (devs/managers)
The above are the remit of mid/senior engineers; AI is not even close to doing any of these yet.
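For anyone who hasn't met the term, here is a toy C function, made up purely for illustration (not from any real codebase), and the number point 2 refers to:

#include <stdio.h>

/* Cyclomatic complexity = number of decision points + 1.
 * Here: two ifs + one for loop = 3 decisions -> complexity of 4. */
static int warning_level(int temp_c, int fan_on)
{
    int level = 0;
    if (temp_c > 90)                  /* decision 1 */
        level += 2;
    if (!fan_on)                      /* decision 2 */
        level += 1;
    for (int i = 0; i < level; i++)   /* decision 3 */
        printf("warning %d of %d\n", i + 1, level);
    return level;
}

int main(void)
{
    /* The refactoring I want suggested would keep this number under a
     * threshold I choose (say 10) without changing behaviour. */
    printf("level=%d\n", warning_level(95, 0));
    return 0;
}

A tool that could propose splitting or simplifying functions whenever that number crosses my threshold, and explain the trade-offs, would be genuinely senior-level work.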
Good to hear you are doing C; I have massive respect for programming close to the machine.
-
I use AI to provide me with advice on what to post on getbig.
should I post a picture of myself in a bunny suit on getbig.com?
ChatGPT said:
That depends on your goal and your comfort level with the audience on Getbig.com, which is a bodybuilding forum known for its blunt, sometimes harsh, commentary.
Here are some factors to consider:
Tone of the forum: Getbig tends to be brutally honest and often sarcastic. If you're posting it as a joke, for humor, or trolling, it could fit. But if you're seeking compliments or validation, be prepared for potential ridicule.
Context matters: If you’re well-known there, or it’s part of a running joke or thread, it might land well. If it's out of the blue, expect some puzzled or mocking responses.
Anonymity and consequences: Once something is on the internet, it’s out of your control. Think about whether you’re okay with that image being shared or used out of context later.
-
Too bad AI was not a thing when Tbombz posted the bunny suit picture!! ;D ;D
-
I recommend going through the Past Masters series: https://en.wikipedia.org/wiki/Past_Masters_(book_series)
Basically, abridged/curated versions of each philosopher's work so you don't have to read through thousands of pages of mind wanking
Thanks.
Who is the one no one understands? Is it Kant?
I think it was Kant, though I didn't actually ask which one he meant. I'm sure you would counter that Kant or whoever can be understood, but I got the fella's point regardless; the guy had degrees in philosophy iirc and did actually even recommend one Nietzsche interpretation. I still remember the title, "Vicious Circle," something like that.