Getbig.com: American Bodybuilding, Fitness and Figure
Getbig Main Boards => Gossip & Opinions => Topic started by: Kwon on October 09, 2025, 09:14:32 AM
-
Soon no actors needed
-
Awesome.
-
Soon no actors needed
Kwon, check out the parts where the AI actors are standing in the rain — there are no raindrops bouncing off their bodies. It looks more like a rain filter or video layer composited over the footage in DaVinci or Premiere, rather than real interaction. I’m still waiting for AI to truly match real-world physics. The lighting, shadows, and reflections all look generic, based on whatever the model was trained on. I wonder if AI models will eventually be able to factor in ray tracing to make it look physically accurate.
-
Soon no actors needed
This was done in 1993, and it's better than that AI slop.
-
Kwon, check out the parts where the AI actors are standing in the rain — there are no raindrops bouncing off their bodies. It looks more like a rain filter or video layer composited over the footage in DaVinci or Premiere, rather than real interaction. I’m still waiting for AI to truly match real-world physics. The lighting, shadows, and reflections all look generic, based on whatever the model was trained on. I wonder if AI models will eventually be able to factor in ray tracing to make it look physically accurate.
Give it 4-6 months Obsi
-
Compare with the AI of one year ago: "Will Smith eating spaghetti".
-
Give it 4-6 months Obsi
I just don't know if I can ever trust the shadows, reflections and general lighting.
How would you do this with AI?
(https://i.postimg.cc/RVJRLVzB/KWON.gif)
-
Soon no actors needed
Weird Al did it better. Built different.
-
Here are a few images. I used this 360-degree HDRI for the scene:
https://polyhaven.com/a/derelict_airfield_02
I created several cubes of different sizes — 72", 96", and 24", based on my preference. The spheres have specific dimensions as well. Then I added the road bike to the scene. The torus partially sinks into the ground. Two of the cubes use mirror materials, the gold sphere has a blurry reflection, and the chrome sphere a perfect mirror reflection. The 3D “KWON” lettering sits on one of the cubes.
In the animation, the camera starts with a 15 mm wide-angle lens, then zooms in to 30 mm while I slightly rotate the view.
There’s no way current AI generators could create a scene like this — with accurate double reflections, realistic shadows, and consistent 3D spatial relationships. I’m sure they’ll get there eventually, but it’s hard to say when.
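If anyone wants to try a similar setup in Blender, here's a rough sketch using its Python API (bpy). The HDRI path, positions, and materials are placeholders, not my actual scene:

import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'  # ray-traced renderer

# World lighting from the Poly Haven HDRI (placeholder path; download it first)
world = bpy.data.worlds.new("HDRI World")
world.use_nodes = True
env = world.node_tree.nodes.new('ShaderNodeTexEnvironment')
env.image = bpy.data.images.load("/path/to/derelict_airfield_02_4k.hdr")
world.node_tree.links.new(env.outputs['Color'],
                          world.node_tree.nodes['Background'].inputs['Color'])
scene.world = world

# Cubes at 72", 96", and 24" (Blender units are meters; 1 inch = 0.0254 m)
for size_in, x in [(72, -3.0), (96, 0.0), (24, 3.0)]:
    s = size_in * 0.0254
    bpy.ops.mesh.primitive_cube_add(size=s, location=(x, 0.0, s / 2))
    # two of these would get mirror materials, as described above

# Gold sphere with a blurry reflection, chrome sphere with a perfect one
for name, rough, loc in [("Gold", 0.2, (-1.5, -2.0, 0.5)),
                         ("Chrome", 0.0, (1.5, -2.0, 0.5))]:
    bpy.ops.mesh.primitive_uv_sphere_add(radius=0.5, location=loc)
    mat = bpy.data.materials.new(name)
    mat.use_nodes = True
    bsdf = mat.node_tree.nodes['Principled BSDF']
    bsdf.inputs['Metallic'].default_value = 1.0
    bsdf.inputs['Roughness'].default_value = rough  # 0 = perfect mirror
    bpy.context.object.data.materials.append(mat)

# Camera: 15 mm wide angle zooming to 30 mm over the animation
cam_data = bpy.data.cameras.new("Cam")
cam = bpy.data.objects.new("Cam", cam_data)
scene.collection.objects.link(cam)
scene.camera = cam
cam.location = (6.0, -6.0, 2.0)
cam_data.lens = 15.0
cam_data.keyframe_insert(data_path="lens", frame=1)
cam_data.lens = 30.0
cam_data.keyframe_insert(data_path="lens", frame=120)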
(https://i.postimg.cc/Ns7PKx6v/KWON00.jpg)
(https://i.postimg.cc/tqtmsz3j/KWON18.jpg)
(https://i.postimg.cc/ChH6R4GT/KWON-V5.jpg)
(https://i.postimg.cc/Dv2PM3G2/KWON-V2.jpg)
The image below has DOF (Depth of Field) enabled — the handlebars are in sharp focus while the background is softly blurred. This is the level of precision I want when creating 3D imagery: I can set the exact focal distance of the DOF to get the look I'm after.
(https://i.postimg.cc/HY4PrtwC/KWON-V3.jpg)
In the image below, Depth of Field (DOF) is disabled, resulting in a fully sharp image where all elements remain in focus.
(https://i.postimg.cc/rVCYD9Sb/KWON-V3-NO-DOF.jpg)
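For the Blender crowd, the same DOF control is a couple of properties on the camera data. A minimal sketch, with the focus distance made up:

import bpy

# DOF toggle and focal distance on the active camera (values are placeholders)
cam = bpy.context.scene.camera.data
cam.dof.use_dof = True          # set False for the fully sharp version
cam.dof.focus_distance = 4.2    # meters from camera to the handlebars
cam.dof.aperture_fstop = 2.8    # lower f-stop = blurrier background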
-
Nice bike Obsidian!
(https://i.postimg.cc/6prNfWYM/image.png)
-
There were no Humvees in 1975.
-
Very impressive. It has come far.
-
In 1 or 2 years AI will look more real than real.
-
Nice bike Obsidian!
(https://i.postimg.cc/6prNfWYM/image.png)
Haha thanks. ;D 8)
-
AI and ray tracing need to merge. That's the end goal: faster renderings for 3D content creators, and physically accurate reflections and shadows for AI content creators.
-
Weird Al did it better. Built different.
Ha! UHF was the first thing I thought of when I saw the OP's post.
-
AI can’t render video longer than 4 seconds without artifacts/hallucinations reaching unacceptable levels. Hardware demands increase quadratically as video length increases, IIRC. I don’t think AI will be replacing Hollywood anytime soon.
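If the quadratic part refers to self-attention over video tokens, here's the back-of-the-envelope version (every number below is made up for illustration, not from any real model):

# Why doubling clip length roughly quadruples attention cost
def attention_pairs(seconds, fps=12, tokens_per_frame=256):
    n = seconds * fps * tokens_per_frame  # total video tokens
    return n * n                          # pairwise interactions: O(n^2)

for s in (4, 8, 16):
    print(f"{s:2d}s clip -> {attention_pairs(s):.2e} token pairs")
# 8s costs ~4x a 4s clip; 16s costs ~16x.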
-
AI can’t render video longer than 4 seconds without artifacts/hallucinations reaching unacceptable levels. Hardware demands increase quadratically as video length increases, IIRC. I don’t think AI will be replacing Hollywood anytime soon.
Is one year soon?
-
Soon no actors needed
TERMINATOR 2 WAS TRYING TO WARN US!!!! SKYNET IS BASED ON AI AND TESLA!!!!!!
-
AI Conan
-
I asked Grok and ChatGPT to create:
Can you create a zoomed out wide angle landscape 16:9 image of a Ferrari on a sunny beach, with a 100% reflective cube behind it at a 45 degree angle. There should also be a reflective sphere behind the car. Space the cube and sphere away from each other. A man should stand between the sphere and the cube. Breaking waves should be in the distance. Shadows and reflections should be accurate.
The reflections are jacked up. It will be a while before these become as accurate as 3D ray-tracing engines.
Grok was faster than ChatGPT, but ChatGPT gave more options to tweak the setting before generating it.
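If you'd rather script the test than click through the web UIs, something like this should work with the OpenAI Python SDK (the model name and size are just what I'd try; the Grok side would need xAI's API instead):

import base64
from openai import OpenAI  # pip install openai; needs OPENAI_API_KEY set

client = OpenAI()
prompt = ("Zoomed out wide angle 16:9 landscape of a Ferrari on a sunny beach, "
          "a 100% reflective cube behind it at a 45 degree angle, a reflective "
          "sphere behind the car spaced away from the cube, a man standing "
          "between the sphere and the cube, breaking waves in the distance, "
          "accurate shadows and reflections.")

result = client.images.generate(model="gpt-image-1", prompt=prompt, size="1536x1024")
with open("ferrari_test.png", "wb") as f:
    f.write(base64.b64decode(result.data[0].b64_json))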
ChatGPT Below:
(https://i.postimg.cc/y6mG5dXp/FERRARI-CHATGPT.jpg)
Grok created these two below:
(https://i.postimg.cc/4460MdQT/FERRARI-GROK-01.jpg)
(https://i.postimg.cc/fTMPhNBq/FERRARI-GROK-02.jpg)
-
Here’s a 3D scene rendered with ray tracing. Notice the lighting - the bike’s shadows falling on the first car, and the first car’s shadows cast onto the second. Also take a look at the reflections in the spheres. These are ray-traced reflections and shadows — something AI-generated “reflections” can only fake for now and can’t accurately reproduce anytime soon.
For these renderings, I used an infinite sphere without a ground texture. Because of that, the tire tracks shift between views. It’s possible to render with a finite sphere and a defined ground plane instead, which can then be scaled so the footprints appear correctly sized and remain consistent across views. However, that approach introduces visible distortion where the finite sphere meets the ground plane. To avoid that issue, I rendered this scene using just the infinite sphere.
(https://i.postimg.cc/TR1LdN16/CARS-1.jpg)
-
Here’s a 3D scene rendered with ray tracing. Notice the lighting - the bike’s shadows falling on the first car, and the first car’s shadows cast onto the second. Also take a look at the reflections in the spheres. These are ray-traced reflections and shadows — something AI-generated “reflections” can only fake for now and can’t accurately reproduce anytime soon.
For these renderings, I used an infinite sphere without a ground texture. Because of that, the tire tracks shift between views. It’s possible to render with a finite sphere and a defined ground plane instead, which can then be scaled so the footprints appear correctly sized and remain consistent across views. However, that approach introduces visible distortion where the finite sphere meets the ground plane. To avoid that issue, I rendered this scene using just the infinite sphere.
(https://i.postimg.cc/TR1LdN16/CARS-1.jpg)
(https://i.postimg.cc/69q2Wbq8/CARS-2.jpg)
(https://i.postimg.cc/ZTQ9CPvZ/CARS-3.jpg)
(https://i.postimg.cc/zJQLVwHz/CARS-4.jpg)
(https://i.postimg.cc/KZWKR73j/CARS-5.jpg)
(https://i.postimg.cc/knHVBNRF/CARS-6.jpg)
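For Blender users curious about the finite-sphere-plus-ground-plane approach: the closest Cycles equivalent I know of is an HDRI world with a shadow-catcher plane. A minimal sketch (not how the renders above were made):

import bpy

# HDRI world lighting plus a shadow-catcher ground plane (Blender 3.0+).
# The plane receives ray-traced shadows but is otherwise invisible, so the
# ground texture can be scaled to keep tire tracks consistent across views.
scene = bpy.context.scene
scene.render.engine = 'CYCLES'

bpy.ops.mesh.primitive_plane_add(size=50.0, location=(0.0, 0.0, 0.0))
ground = bpy.context.object
ground.is_shadow_catcher = True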
-
Same scene with a different HDRI and zoomed out a bit more.
https://polyhaven.com/a/freight_station
The tires are obviously too clean. With some effort, sand and debris could be added so they look more integrated with the scene. These were just quick studies to illustrate the ray-traced shadows and reflections.
(https://i.postimg.cc/R4t28vL8/CARS-7.jpg)
(https://i.postimg.cc/TR1LdN16/CARS-1.jpg)
-
Thanks for establishing my greatness with that logo, Obsie!
(https://i.postimg.cc/JnjKPvrK/image.png)
(https://i.postimg.cc/KzhDkfPq/image.png)
(https://i.postimg.cc/CxYSFB6B/image.png)
(https://i.postimg.cc/sXzpmJn3/image.png)
(https://i.postimg.cc/D0pnfnmS/image.png)
-
Amazing.
-
Amazing.
(https://media1.tenor.com/m/6LlMbfnAXMkAAAAd/jesse-lee-peterson-amazin.gif)
-
;D 8)
(https://i.postimg.cc/YSy0t650/TWERKINGA.gif)
-
This is AI ;D
(https://image.cdn2.seaart.me/static/upload/20250627/c0237f88-66a7-4fea-a9a4-3f582fefe9b2.gif)
-
3× slow motion. These GIFs look jumpy and are heavily downscaled — the original 4K animations at 120 FPS look much smoother.
(https://i.postimg.cc/J040cTCV/TWERKINGA-SLOWMO.gif)
-
Thanks for establishing my greatness with that logo, Obsie!
(https://i.postimg.cc/JnjKPvrK/image.png)
(https://i.postimg.cc/KzhDkfPq/image.png)
(https://i.postimg.cc/CxYSFB6B/image.png)
(https://i.postimg.cc/sXzpmJn3/image.png)
(https://i.postimg.cc/D0pnfnmS/image.png)
Haha glad I could help!
-
The AI videos of chiropractors throwing old ladies through walls are fantastic.
*”Deep breath in” then proceeds to belly to back suplex grandma through some drywall*
-
THE LANDLORD!!!
-
The AI videos of chiropractors throwing old ladies through walls are fantastic.
*”Deep breath in” then proceeds to belly to back suplex grandma through some drywall*
Haha the AI you always wanted!
-
(https://i.postimg.cc/fTnsw-4dN/image.png)
-
This looks interesting! Animating in 3D is hard. AI to the rescue?!
-
Haha the AI you always wanted!
OOOOOO YEEEEEEAH
-
Kwon, check out the parts where the AI actors are standing in the rain — there are no raindrops bouncing off their bodies. It looks more like a rain filter or video layer composited over the footage in DaVinci or Premiere, rather than real interaction. I’m still waiting for AI to truly match real-world physics. The lighting, shadows, and reflections all look generic, based on whatever the model was trained on. I wonder if AI models will eventually be able to factor in ray tracing to make it look physically accurate.
Spot on
-
Kwon, check out the parts where the AI actors are standing in the rain — there are no raindrops bouncing off their bodies. It looks more like a rain filter or video layer composited over the footage in DaVinci or Premiere, rather than real interaction. I’m still waiting for AI to truly match real-world physics. The lighting, shadows, and reflections all look generic, based on whatever the model was trained on. I wonder if AI models will eventually be able to factor in ray tracing to make it look physically accurate.
Duly noted. AI CAN handle rain physics today, it just takes much longer.
I will show you some good rain physics and bodily interaction soon.
-
AI making hits from the 80s
-
ChatGPT, Grok, Perplexity, Gemini, and OpenArt.ai.
Try them out!
(https://i.postimg.cc/QtTW0M3t/image.png)
-
ChatGPT, Grok, Perplexity, Gemini, and OpenArt.ai.
Try them out!
(https://i.postimg.cc/QtTW0M3t/image.png)
Now make her twerk!
I have a tool that can make her face come alive - I'll try it when I get a chance. There are services that can use that image as a reference and then create a video from it.
-
Now make her twerk!
I have a tool that can make her face come alive - I'll try it when I get a chance. There are services that can use that image as a reference and then create a video from it.
Yes, you can also get rich on OF with those tools.
Images and vids (of the imaginary model you created) of her dancing and whatnot.
Broke Getbiggers use this guide
-
Here's a video I saw on Sora. You can click on it and edit the prompt to change it. Crazy shit!
(https://i.postimg.cc/fT8z6h5z/ANGRY-WOMAN.gif)
https://sora.chatgpt.com/g/gen_01k1xpkys5e5dvz4k87yta6kkd
-
Yes, you can also get rich on OF with those tools.
Images and vids (of the imaginary model you created) of her dancing and whatnot.
Broke Getbiggers use this guide
Nice! My models will all have big 70s bushes. There's a market for that, I'm sure! ;D
BIG KAHUNA 70S BUSH!!
-
Here's a video I saw on Sora. You can click on it and edit the prompt to change it. Crazy shit!
(https://i.postimg.cc/fT8z6h5z/ANGRY-WOMAN.gif)
https://sora.chatgpt.com/g/gen_01k1xpkys5e5dvz4k87yta6kkd
I have made several vids, but I don't think Getbig is ready for them yet :D
-
I have made several vids, but I don't think Getbig is ready for them yet :D
Getbig is ready! Post it! ;D
-
I have made several vids, but I don't think Getbig is ready for them yet :D
We are ready for anything.
-
Here’s another AI tool: LivePortrait. It lets you drive a source video using a driving video. In this example, I use a 3D animation as the source and drive its head movement, eyes, and mouth with a video of Ashley Judd. Both videos have the same number of frames. The 3D video’s angle isn’t identical by design, so it’s not a perfect match — but it effectively conveys the concept.
Edit: I just noticed the hair shifts slightly due to the AI adjustment. That can be fixed with masking — I’ll check it out later, just out of curiosity. ;D
https://liveportrait.org/
(https://i.postimg.cc/3xYB1VSg/LP2-3D.gif)
(https://i.postimg.cc/sDVm9b0c/LP2-AJ.gif)
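If you'd rather run it locally than through the site, the open-source repo (https://github.com/KwaiVGI/LivePortrait) does the same thing from the command line. The flags below are from its README; I used the hosted version, so treat this as an unverified sketch with placeholder file names:

import subprocess

# Drive a source clip with a driving clip
subprocess.run([
    "python", "inference.py",
    "-s", "my_3d_animation.mp4",  # source: the clip whose face gets driven
    "-d", "driving_video.mp4",    # driving: provides head, eye, and mouth motion
], check=True)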
-
Duly noted. AI CAN handle rain physics today, it just takes much longer.
I will show you some good rain physics and bodily interaction soon.
Yes, I’ve seen some AI videos that mimic fluid physics quite convincingly. It’s going to get wild once raytracing-level fidelity is integrated into AI generative models.
-
Here’s another AI tool: LivePortrait. It lets you drive a source video using a driving video. In this example, I use a 3D animation as the source and drive its head movement, eyes, and mouth with a video of Ashley Judd. Both videos have the same number of frames. The 3D video’s angle isn’t identical by design, so it’s not a perfect match — but it effectively conveys the concept.
Edit: I just noticed the hair shifts slightly due to the AI adjustment. That can be fixed with masking — I’ll check it out later, just out of curiosity. ;D
https://liveportrait.org/
(https://i.postimg.cc/3xYB1VSg/LP2-3D.gif)
(https://i.postimg.cc/sDVm9b0c/LP2-AJ.gif)
Do a similar one, but with Bhanks or Goodrum instead of Ashley Judd :D
-
Here’s another AI tool: LivePortrait. It lets you drive a source video using a driving video. In this example, I use a 3D animation as the source and drive its head movement, eyes, and mouth with a video of Ashley Judd. Both videos have the same number of frames. The 3D video’s angle isn’t identical by design, so it’s not a perfect match — but it effectively conveys the concept.
Edit: I just noticed the hair shifts slightly due to the AI adjustment. That can be fixed with masking — I’ll check it out later, just out of curiosity. ;D
https://liveportrait.org/
(https://i.postimg.cc/3xYB1VSg/LP2-3D.gif)
(https://i.postimg.cc/sDVm9b0c/LP2-AJ.gif)
I fixed the hair with DaVinci Resolve via the Fusion Page. I noticed Markus Ruhl once had DaVinci on his computers when he was showing his editing setup. At least I think it was him. ;D
(https://i.postimg.cc/RF1n6DM7/LP2-3D-HAIR-FIXED.gif)
Below is a screenshot of DaVinci Resolve. The clip with the wrong hair is on the left side, and the correction is on the right. I used Adobe Premiere and Adobe After Effects 8-10 years ago. After Effects is still very capable and has editing features that DaVinci lacks, for example the Liquify tool that's also in Photoshop; only in After Effects can you liquify a video. But DaVinci has closed the gap, and I prefer its node interface. Most people work in DaVinci's Color Page; I prefer the Fusion Page.
(https://i.postimg.cc/fwynCdQx/DAVINCI-HAIR-FIX.jpg)
-
Do a similar one, but with Bhanks or Goodrum instead of Ashley Judd :D
;D
(https://i.postimg.cc/k4ys9Ng8/LP3-WW-GORILLA.gif)
(https://i.postimg.cc/gJKsdy2Q/LP3-WW-VG.gif)
-
https://www.bitchute.com/video/yQkNi4oDXEgO
-
Great stuff!