Game graphics are toast (Gaming)
by Cody Miller
, Music of the Spheres - Never Forgot, Monday, March 16, 2026, 23:38 (28 days ago)
What is this absolute dogshit
Game graphics are toast
by Vortech
, A Fourth Wheel, Tuesday, March 17, 2026, 09:36 (28 days ago) @ Cody Miller
![[image]](https://media.discordapp.net/attachments/724575486570790962/1483208572816064663/DLSS_5_Off.png?ex=69ba69e0&is=69b91860&hm=f80eb0e5aa002a3789bf53883eac64bd5ae41b1880601a006338e0851a9533d3&=&format=webp&quality=lossless&width=1994&height=1494)
Game graphics are toast
by CyberKN
, Oh no, Destiny 2 is bad, Tuesday, March 17, 2026, 14:09 (27 days ago) @ Cody Miller
That is both awful and astounding
by ZackDark
, Not behind you. NO! Don't look., Tuesday, March 17, 2026, 17:00 (27 days ago) @ Cody Miller
Technically? Running that in real-time is downright unbelievable
Artistically? Terrible, should have never been a thing. Everyone involved should be ashamed of themselves. This is why universities usually enforce humanities studies even in very technical courses.
That is both awful and astounding
by stabbim
, Des Moines, IA, USA, Thursday, March 26, 2026, 23:50 (18 days ago) @ ZackDark
Technically? Running that in real-time is downright unbelievable
That I will agree with. Every other form of genAI I've seen takes quite a bit of time to run. Whatever they're doing here must be dumbing down the process to some degree, but it's still getting results. I suppose the fact that it isn't having to generate the whole scene probably helps a lot, but still.
I don't get the negativity. This looks fantastic.
by Kermit
, Raleigh, NC, Friday, March 20, 2026, 12:00 (24 days ago) @ Cody Miller
- No text -
I agree. This looks fantastic.
by Coaxkez
, got that plasma/BR55 hit, Friday, March 20, 2026, 12:48 (24 days ago) @ Kermit
Fantastically bad, that is.
I agree. This looks fantastic.
by Kermit
, Raleigh, NC, Friday, March 20, 2026, 13:37 (24 days ago) @ Coaxkez
Fantastically bad, that is.
Shows what I know. I'm not a graphics supremacist. The whole package matters when it comes to games. My reaction seems in line with Digital Foundry's. Is the issue that it supposedly changes the artwork? It looks like it does in the first comparison more than the others. For the most part I feel like it changes the lighting such that it's more realistic.
I wish you all would stop talking about it as if it's so obvious that it sucks. Maybe I'm a dummy, but you don't have to be obnoxious. I'm genuinely perplexed and interested in knowing why you think it is so bad.
I agree. This looks fantastic.
by Avateur
, Friday, March 20, 2026, 14:47 (24 days ago) @ Kermit
This might help explain it a bit more: https://www.ign.com/articles/it-turns-out-game-artists-dont-love-dlss-5-despite-nvidias-claims
I agree. This looks fantastic.
by Cody Miller
, Music of the Spheres - Never Forgot, Saturday, March 21, 2026, 09:17 (24 days ago) @ Kermit
It hallucinates.
It looks like ass.
It doesn’t get the lighting right.
It murders the concept of “art style” in a video game. And just art in general.
It’s generative AI.
I agree. This looks fantastic.
by Kermit
, Raleigh, NC, Sunday, March 22, 2026, 07:54 (23 days ago) @ Cody Miller
It hallucinates.
I know what those words mean. I don’t know how it manifests itself here in this context.
It looks like ass.
Opinion
It doesn’t get the lighting right.
Opinion
It murders the concept of “art style” in a video game. And just art in general.
Does it? I don’t assume that developers have no control over the output.
It’s generative AI.
I guess it’s self-evident that whatever you mean by those words is super plus bad.
Preference cascades are a helluva thing to witness.
It hallucinates.
I know what those words mean. I don’t know how it manifests itself here in this context.
Look at the Starfield miner comparison. It literally makes his nose twice as wide as it actually is, because it misinterprets a shadow as part of his nose.
It doesn’t get the lighting right.
Opinion
I don’t think that’s an opinion either. It literally changes the mood of the lighting. I guess we can argue “right” all day long, but it’s like the lighting changes in 343 Guilty Spark in CE Anniversary. Sure, someone is certainly allowed to think that version looks better, but I would question that person’s opinion on pretty much everything at that point.
There’s a scene from Oblivion in a town where it just erases all the shadows. It’s like it turned a bright sunny day into an overcast one. The same thing happens in a wide shot of Assassin’s Creed Shadows. All the character lighting is fucked—it ignores the actual lighting conditions in the scene and it looks like each character is lit by a separate source light instead of existing in the surrounding environment. There’s a shot from Starfield of a dude in a hat. It completely erases the shadows cast by the hat, like he has a light directly on his face instead of being lit by the actual environment.
It murders the concept of “art style” in a video game. And just art in general.
Does it? I don’t assume that developers have no control over the output.
I’m sure they will, but there are artists and devs from pretty much every game that was shown saying it looks like shit, and they certainly didn’t do anything to make it look like that. It’s clear they had no hand in whatever was going on in those demos.
I agree. This looks fantastic.
by Kermit
, Raleigh, NC, Sunday, March 22, 2026, 09:08 (23 days ago) @ cheapLEY
It hallucinates.
I know what those words mean. I don’t know how it manifests itself here in this context.
Look at the Starfield miner comparison. It literally makes his nose twice as wide as it actually is, because it misinterprets a shadow as part of his nose.
It doesn’t get the lighting right.
Opinion
I don’t think that’s an opinion either. It literally changes the mood of the lighting. I guess we can argue “right” all day long, but it’s like the lighting changes in 343 Guilty Spark in CE Anniversary. Sure, someone is certainly allowed to think that version looks better, but I would question that person’s opinion on pretty much everything at that point. There’s a scene from Oblivion in a town where it just erases all the shadows. It’s like it turned a bright sunny day into an overcast one. The same thing happens in a wide shot of Assassin’s Creed Shadows. All the character lighting is fucked—it ignores the actual lighting conditions in the scene and it looks like each character is lit by a separate source light instead of existing in the surrounding environment. There’s a shot from Starfield of a dude in a hat. It completely erases the shadows cast by the hat, like he has a light directly on his face instead of being lit by the actual environment.
It murders the concept of “art style” in a video game. And just art in general.
Does it? I don’t assume that developers have no control over the output.
I’m sure they will, but there are artists and devs from pretty much every game that was shown saying it looks like shit, and they certainly didn’t do anything to make it look like that. It’s clear they had no hand in whatever was going on in those demos.
The studios involved signed off on the demos. They had to. Cody has strong opinions about the overemphasis on photorealism in games (does the man have weak opinions about anything?). I mostly agree with him on that. One positive outcome of this is that maybe the resources involved in making games photorealistic can be spent on other aspects of the game.
Digital Foundry put out a newer video on this. I haven’t finished watching it, but so far I’m getting a better idea of the concerns than I got from the discussion here.
Exactly. The studios did. Not the actual developers. The CEOs of these companies said yeah, sure, that looks great. I’d be surprised if any actual artists looked at any of this and said the same thing.
I agree. This looks fantastic.
by Cody Miller
, Music of the Spheres - Never Forgot, Sunday, March 22, 2026, 18:18 (22 days ago) @ Kermit
It doesn’t get the lighting right.
Opinion
Objective fact.
It is adding lighting completely disconnected from the environment. An overcast rainy day in the Resident Evil demo has the AI doing three-point portrait lighting coming from a totally different direction than the motivated light sources. It is an incredible mismatch and kills the scene visually.
I agree. This looks fantastic.
by Kermit
, Raleigh, NC, Sunday, March 22, 2026, 18:54 (22 days ago) @ Cody Miller
It doesn’t get the lighting right.
Opinion
Objective fact. It is adding lighting completely disconnected from the environment. An overcast rainy day in the Resident Evil demo has the AI doing three-point portrait lighting coming from a totally different direction than the motivated light sources. It is an incredible mismatch and kills the scene visually.
I'll concede that, but I think it was more noticeable on the old woman. Other things were definitely improved, though. Skin texture and hair, for instance.
I don't think any of them look like ass, but you're using technical terms, I guess. As I've read more I better understand the issues.
I agree. This looks fantastic.
by Vortech
, A Fourth Wheel, Sunday, March 22, 2026, 09:00 (23 days ago) @ Kermit
If you go into a Best Buy or some other place that sells TVs, the TVs will be set to showroom mode. Showroom mode, also known as store mode or demo mode, is a setting on TVs used in stores to showcase the television's features like maximum brightness and contrast. This mode is not suitable for regular home viewing as it can distort picture quality and is designed primarily for attracting customers in stores. But stores do it anyway, because people don't buy TVs based on how realistic they look, or how closely they reproduce the visuals in the source image. In market testing there is a HUGE preference toward TVs that were in showroom mode. Many respondents said they liked it because it seemed more natural or true to life. It makes the TV look "better" because people are initially drawn to bright lights and highly saturated colors.
A while back — what feels like a decade ago — someone came up with Facetune, an app that bundled a suite of the most common portrait photo retouching tools together into a single focused app. This didn't break any ground; it just made tools that were buried in photo editing apps easier to access and use. The runaway success of the app led to many clones and competitors, all of which were trying to win marketshare, and so we had a reinforcement spiral of ever more powerful features. The problem is that powerful also means greater alterations from the original photo. Soon, within a certain cohort, the de rigueur photo was of an uncanny person, edited closer and closer to a fairly narrow ideal of beauty, with the minimum level of acceptability well above any living person. When people saw photos that were not edited, they thought people looked far "worse" than they do, because their sense of what was natural had been formed in an environment where everything - both professional and commercial images - was crafted to create a new, different reality, and thus that became their concept of reality. Not only was this warping people's understanding of the actual world, individual problems were heightened. People were dehumanized and othered because they were normal, but outside the ideal.
Once TV makers had fulfilled the desires of customers, they started looking for "improvements" to TVs that people never asked for. Motion smoothing was invented. I think it's fairly well known, and I'm running out of time, so here is some background if you need it. Computers, in broad brush, changed the work of filmmakers, because the people adding these features were looking for ways to prompt sales in a market where the product was mature and the feature level was already high.
I see this new feature as another in this line of un-requested features that indiscriminately replaces the intention of artists and champions the idea of "realism" (which is rarely the purpose of art) while simultaneously shifting the very conception of reality in a direction that is harmful to society. They are doing this not because it is intrinsically better, but because it's a new thing they can shout about to sell something that isn't much better at doing what people had actually been asking it to do. It will probably work — as these moves have often worked in the past — for the same reason kids make themselves sick on Halloween night and ERs are full of injuries on July 4th. Salt and sugar make food "better," but it's also fairly easy to develop a tolerance that grows ever higher until you lose perspective. Bright lights and colors biologically draw attention, but they're not the only beauty in the world.
I agree. This looks fantastic.
by Kermit
, Raleigh, NC, Sunday, March 22, 2026, 11:28 (22 days ago) @ Vortech
If you go into a Best Buy or some other place that sells TVs, the TVs will be set to showroom mode. Showroom mode, also known as store mode or demo mode, is a setting on TVs used in stores to showcase the television's features like maximum brightness and contrast. This mode is not suitable for regular home viewing as it can distort picture quality and is designed primarily for attracting customers in stores. But stores do it anyway, because people don't buy TVs based on how realistic they look, or how closely they reproduce the visuals in the source image. In market testing there is a HUGE preference toward TVs that were in showroom mode. Many respondents said they liked it because it seemed more natural or true to life. It makes the TV look "better" because people are initially drawn to bright lights and highly saturated colors.
A while back — what feels like a decade ago — someone came up with Facetune, an app that bundled a suite of the most common portrait photo retouching tools together into a single focused app. This didn't break any ground; it just made tools that were buried in photo editing apps easier to access and use. The runaway success of the app led to many clones and competitors, all of which were trying to win marketshare, and so we had a reinforcement spiral of ever more powerful features. The problem is that powerful also means greater alterations from the original photo. Soon, within a certain cohort, the de rigueur photo was of an uncanny person, edited closer and closer to a fairly narrow ideal of beauty, with the minimum level of acceptability well above any living person. When people saw photos that were not edited, they thought people looked far "worse" than they do, because their sense of what was natural had been formed in an environment where everything - both professional and commercial images - was crafted to create a new, different reality, and thus that became their concept of reality. Not only was this warping people's understanding of the actual world, individual problems were heightened. People were dehumanized and othered because they were normal, but outside the ideal.
Once TV makers had fulfilled the desires of customers, they started looking for "improvements" to TVs that people never asked for. Motion smoothing was invented. I think it's fairly well known, and I'm running out of time, so here is some background if you need it. Computers, in broad brush, changed the work of filmmakers, because the people adding these features were looking for ways to prompt sales in a market where the product was mature and the feature level was already high.
I see this new feature as another in this line of un-requested features that indiscriminately replaces the intention of artists and champions the idea of "realism" (which is rarely the purpose of art) while simultaneously shifting the very conception of reality in a direction that is harmful to society. They are doing this not because it is intrinsically better, but because it's a new thing they can shout about to sell something that isn't much better at doing what people had actually been asking it to do. It will probably work — as these moves have often worked in the past — for the same reason kids make themselves sick on Halloween night and ERs are full of injuries on July 4th. Salt and sugar make food "better," but it's also fairly easy to develop a tolerance that grows ever higher until you lose perspective. Bright lights and colors biologically draw attention, but they're not the only beauty in the world.
Thanks for your thoughtful response. For the record, you're talking to someone who had a Betamax machine because it was superior to VHS, chose vinyl over tapes, CDs over mp3s, and 4K discs over streaming. I care about how art is presented. I care about how reality is presented. I am someone who immediately turns off motion smoothing, and I try to calibrate my TV to more closely give me what the filmmaker intended. I don't want to overstate it. I don't pay to have my TVs calibrated, but the fact that I know you can, I hope, speaks to my sensitivity about these issues; I'm not some yeehaw impressed by something shiny.
My reaction to this demo was based on whether the improvements make the image look more photorealistic, and the answer in most cases was yes. That's not everything I care about. Far from it. But I had zero emotional attachment to the content I saw, nor did I have a strong opinion about what it should look like, artistically. I think about Levi's critique of 343's addition of detail to Forerunner structures--what a smart critique it was, because it depended on knowledge of what he was looking at.
I also don't spend that much time on social media, so my more limited experience with filters and AI slop did not come to mind as a comparison. I still don't think that's a completely fair comparison, and I think some of the rhetoric about this is over the top. In the new Digital Foundry video, they showed some images where someone applied DLSS 5 but adjusted the tone, and the results were good but less flashy. One of them suggested that those who produced the demo punched things up aggressively for effect. Similarly, I see YouTubers with hyperbolic thumbnails about this, and THEY'VE punched up the images dishonestly to make the difference even more dramatic.
I think reasonable people can disagree about how well DLSS 5 worked in each case, and consider pluses and minuses. I don't see it as clear cut. I think it's easy to get emotional about AI because it is a threat. I'm scared of it, and yet it routinely impresses me. I do think we need to fight for what is human to remain, and my hope/prediction is that what is human-generated will become more valuable to people.
You keep talking about photorealism, and I kinda think that's missing the point.
Maybe it makes things look more realistic, but it also changes fundamental things about how the game is presented. Look at the comparison of Grace. Maybe she looks more realistic (I think that's debatable), but she doesn't even look like the same person. That's the issue.
I agree. This looks fantastic.
by Kermit
, Raleigh, NC, Sunday, March 22, 2026, 19:09 (22 days ago) @ cheapLEY
You keep talking about photorealism, and I kinda think that's missing the point.
Maybe it makes things look more realistic, but it also changes fundamental things about how the game is presented. Look at the comparison of Grace. Maybe she looks more realistic (I think that's debatable), but she doesn't even look like the same person. That's the issue.
I get that. The first instance looks more like a video game character to me before DLSS. And I do think it's one of the most dramatic examples of actual changes, in that her lips seem bigger and you can tell she's wearing makeup. One of the guys on Digital Foundry talked about this being a famous moment: the character is on her way to a murder scene and would not likely be wearing makeup. That's all information that was unavailable to me and probably always will be, in that I've never played and probably will not play Resident Evil. Still, I don't think the changes are more significant than the changes to Lara Croft from game to game.
I was also influenced by the Digital Foundry guys, who had made a video just as they came out of the demo, where they were hands-on with the tech and saw things that weren't as translatable to the YouTube video. Their emphasis was on the photorealism, and that's what I judged too. Again, that's not all that matters, but that's all I was talking about.
I agree. This looks fantastic.
by Cody Miller
, Music of the Spheres - Never Forgot, Sunday, March 22, 2026, 18:24 (22 days ago) @ Vortech
I see this new feature as another in this line of un-requested features that indiscriminately replaces the intention of artists and champions the idea of "realism" (which is rarely the purpose of art) while simultaneously shifting the very conception of reality in a direction that is harmful to society. They are doing this not because it is intrinsically better, but because it's a new thing they can shout about to sell something that isn't much better at doing what people had actually been asking it to do.
So much silicon is being wasted on this shit when they could be putting more of it to use rendering actual pixels. Especially with the premature push to 4K gaming, you want as much of the GPU as possible being used to render the scene.
They could also work to try to solve issues with shader compilation. Being a PC gamer just looks so miserable, with stutters and uneven performance everywhere.
But no. We are gonna waste R&D and die space on this pile of horse shit.
Perhaps this will help.
by Bones
, The Last City, Earth, Sol System, Sunday, March 22, 2026, 06:07 (23 days ago) @ Kermit
![[image]](https://i.imgur.com/5h3b4dO.png)
Perhaps this will help.
by Kermit
, Raleigh, NC, Sunday, March 22, 2026, 08:26 (23 days ago) @ Bones
If this means devs can’t control the output, that’s one thing. I don’t know that to be the case. I didn’t like 343’s lighting choices in Halo Anniversary and we didn’t have the bogeyman of AI to blame.
One thing is, I don’t have a platonic ideal of what any of the characters in these videos should look like. I’m sure that affects perceptions. I’m sure people have their favorite version of Lara Croft. Technology changes what’s possible. Games don’t need to look photorealistic. Everything is contextual. I’m not like, “oh crap! Look what they did to Grace!” I have no idea who Grace is. I thought the faces in Starfield were bad. I think this technology makes Starfield’s graphics better. And in general, simply judging by the standard of photorealism, all of them are improvements.
Bruh.
by Bones
, The Last City, Earth, Sol System, Sunday, March 22, 2026, 10:30 (22 days ago) @ Kermit
And in general, simply judging by the standard of photorealism, all of them are improvements.
The crux of the argument is that many gamers do not believe that to be true, and that this version of 'photorealism' isn't the one we should be chasing.
And in general, simply judging by the standard of photorealism, all of them are improvements.
The crux of the argument is that many gamers do not believe that to be true, and that this version of 'photorealism' isn't the one we should be chasing.
I don't think photorealism is the end-all. Many gamers do. Some, like Cody, don't. From what I can tell, what upsets people is perceived changes to what was intended or what they expect. That's a different thing than photorealism. So is the AI issue the ethics around that? I think anything AI has become a hot button, and like many other issues, people act like there is no middle ground.
I think anything AI has become a hot button, and like many other issues, people act like there is no middle ground.
When it comes to generative AI, there is no middle ground.
I think anything AI has become a hot button, and like many other issues, people act like there is no middle ground.
When it comes to generative AI, there is no middle ground.
It's a tool, which can be used for good or ill. I'm not denying the ill. I've got some serious concerns, but I'm also hopeful about some potentially life-saving benefits. You all are too young to be such Luddites.
Bruh.
by Cody Miller
, Music of the Spheres - Never Forgot, Wednesday, March 25, 2026, 12:32 (19 days ago) @ Kermit
I think anything AI has become a hot button, and like many other issues, people act like there is no middle ground.
When it comes to generative AI, there is no middle ground.
It's a tool, which can be used for good or ill.
There is no good use for generative AI. By its very nature it is actively destructive to human culture, connection, creativity, and self actualization.
Double Bruh.
by Bones
, The Last City, Earth, Sol System, Thursday, March 26, 2026, 19:58 (18 days ago) @ Cody Miller
There is no good use for generative AI. By its very nature it is actively destructive to human culture, connection, creativity, and self actualization.
It's been pretty good at taking a texture I've fed it and outputting an infinitely-tiling version; that shit is tedious and time-consuming.
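For anyone curious what the manual version of that chore looks like, here's a minimal sketch of the classic offset-and-inspect step, assuming Pillow is installed and "texture.png" is a stand-in path; the genAI pass is replacing the hand-retouching at the end:

```python
# A minimal sketch of the manual tiling workflow, using Pillow.
# "texture.png" is a placeholder path, not a real asset.
from PIL import Image, ImageChops

tex = Image.open("texture.png")
w, h = tex.size

# Offset the image by half its size; Pillow wraps the pixel data around
# the edges, so any seams now run in a visible cross through the center.
shifted = ImageChops.offset(tex, w // 2, h // 2)
shifted.save("texture_seams_visible.png")

# The tedious, time-consuming part is what comes next: hand-painting or
# clone-stamping those center seams away without breaking the pattern,
# then re-offsetting to check your work. That's the step being automated.
```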
Double Bruh.
by stabbim
, Des Moines, IA, USA, Thursday, March 26, 2026, 23:46 (18 days ago) @ Bones
It's been pretty good at taking a texture I've fed it and outputting an infinitely-tiling version; that shit is tedious and time-consuming.
I'm aware that one can get the desired output, but for me that still doesn't negate the ethical problem with generative AI. Nearly all of this stuff is built on what I see as theft. One could train it on opted-in data, in theory, and that would be fine IMO (at least as far as the ethics of the training data goes - there's also the loss of the creative process which I don't love, tedium aside). But not many are doing that, and it's usually unknown to the potential user anyway, so how are they even supposed to make a judgement call? I guess what I'm saying is, for those of us on this side of the debate, saying there's no good use doesn't literally mean you can't ever get what you wanted out of it - just that there's no use case that's a net positive. Even when it does what you wanted, it's still bad, from that point of view.
Double Bruh.
by Bones
, The Last City, Earth, Sol System, Friday, March 27, 2026, 09:26 (18 days ago) @ stabbim
I'm aware that one can get the desired output, but for me that still doesn't negate the ethical problem with generative AI. Nearly all of this stuff is built on what I see as theft. One could train it on opted-in data, in theory, and that would be fine IMO (at least as far as the ethics of the training data goes - there's also the loss of the creative process which I don't love, tedium aside). But not many are doing that, and it's usually unknown to the potential user anyway, so how are they even supposed to make a judgement call? I guess what I'm saying is, for those of us on this side of the debate, saying there's no good use doesn't literally mean you can't ever get what you wanted out of it - just that there's no use case that's a net positive. Even when it does what you wanted, it's still bad, from that point of view.
Oh, for sure - ethical and existential crises abound with GenAI's implementation. I'm on your side of the debate. I'm on the other side, too. I've also become a troll-ish old man who laughs at absolute declarations and has found a perverse humor in (gently) goading Cody on his statements.
I... I think I'm becoming Korny. Is this the human equivalent of carcinization?
I... I think I'm becoming Korny. Is this the human equivalent of carcinization?
LMAO
Life-saving benefits? I think maybe you don't mean the same thing by generative AI that I would.
Life-saving benefits? I think maybe you don't mean the same thing by generative AI that I would.
I bet you're right.
Bruh.
by stabbim
, Des Moines, IA, USA, Friday, March 27, 2026, 10:34 (17 days ago) @ Kermit
edited by stabbim, Friday, March 27, 2026, 10:38
Life-saving benefits? I think maybe you don't mean the same thing by generative AI that I would.
I bet you're right.
So, when I say generative AI, I mean AI which generates content. Image, video, and audio generators, and basically any chat bot or similar - the text output being the generated content. As an example of the other case, I wouldn't consider a model trained on X-Rays to identify spots on lungs as generative AI. Flagging X-Rays as potential cancer isn't generating content, to me.
The reason that I default to being negative about them is that, for the most part, the models were trained on just... whatever content they could find on the internet. As much text, image, audio or video as they could get their hands on. Whether it's copyrighted, public domain, or somewhere in between, probably wasn't taken into account and in my estimation that's theft. It might not be intended to directly duplicate the content, but it's still using it without the creator's consent, usually as part of a product to make money for people who are not those creators (worth considering, at least in the early days basically all of the training data would have been created before generative AI existed or was widely known, and thus the authors couldn't possibly have consented). And sometimes it does duplicate the content. Despite the name, AI is fundamentally dumb and lacks the judgement or awareness to realize when it has made a copy, so occasionally it'll happen.
Edit/addition: I don't tend to have these concerns about the medical diagnostic stuff, for example. I would assume most of that work is being done by academic institutions and/or medical technology companies and they probably have authorization to use the medical data for research. That seems completely fair, and I would think any concerns about that content would be more about privacy than copyright/intellectual property.
Now, if every company would say what their models were trained on, we as consumers could at least make a judgement call, although IMO very few companies would choose to limit their training data because more is almost always better (unless the data in question is poor quality). But they almost never say anything, so for those of us who have these concerns, we just have to assume it's all poison by default most of the time.
So is the AI issue the ethics around that? I think anything AI has become a hot button, and like many other issues, people act like there is no middle ground.
When it comes to *generative* AI, unless someone can prove to me that their model was trained ENTIRELY on content whose creators opted in to such use, then I don't see how there can be a middle ground. It's theft that the law mostly hasn't caught up to yet because the people making the law are either too old to understand what's happening, or they've got financial interests in allowing it to continue.
On top of that, it's theft that uses enormous resources and generates soulless "content" that's inevitably going to be a middle-ground amalgam of things the model has previously seen, but gets passed off as "art" by people who have no appreciation for creativity and think art is a product that you purchase by the hundredweight. That's why people don't like DLSS 5, by the way - it's got nothing to do with how realistic the graphics do or don't look, and everything to do with it not being what the artists, actual humans with ideas and points of view and things they wanted to communicate, worked hard to design in a specific way (the on/off comparison images are humor, and they signal to others that you're on that side of the debate, but graphics quality was never the actual point), only for some clueless tech evangelist to stumble drunkenly into the studio, spill a bucket of paint on the canvas, and declare that it looks better that way before hiccuping and passing out.
You will sometimes hear an argument that it's "just like a person learning," by the way. First of all, no it isn't - while I will grant you that what a person learns informs their future cognition in a big way, it is possible, however rare, for us to come up with new applications and occasionally entirely new ideas. The old nothing new under the sun argument is a useful lens to look at the world through and understand that you're always building on what came before, but it's not entirely true. Sometimes people do have thoughts that haven't existed before. These neural networks can do no such thing. Second of all, none of us humans were built by a corporation for the express purpose of ripping off ideas as a business model. We exist, and we learn, and perhaps we even plagiarize things sometimes! But we are not a commercial product which was designed, invested in, built (and shoved into people's lives who didn't ask for it), solely for that purpose. And what exactly does all this theft, and sapping lazy people of the creative experience, accomplish? A bunch of tech bros get to drive another investment bubble. Cool. Cool cool cool cool cool.
* There are other forms of machine learning which can be ok ethically. I'm looking forward to developments around medical diagnostics, protein folding, all kinds of stuff, and I wish more of the investment money was going in that direction. But those things aren't usually what's being debated. I only mention them to be clear on where I draw the line and why *generative* was in italics.
So is the AI issue the ethics around that? I think anything AI has become a hot button, and like many other issues, people act like there is no middle ground.
When it comes to *generative* AI, unless someone can prove to me that their model was trained ENTIRELY on content whose creators opted in to such use, then I don't see how there can be a middle ground. It's theft that the law mostly hasn't caught up to yet because the people making the law are either too old to understand what's happening, or they've got financial interests in allowing it to continue.
In many cases they can. You're focusing on the artistic realm. There is much happening beyond that.
On top of that, it's theft that uses enormous resources and generates soulless "content" that's inevitably going to be a middle-ground amalgam of things the model has previously seen, but gets passed off as "art" by people who have no appreciation for creativity and think art is a product that you purchase by the hundredweight. That's why people don't like DLSS 5, by the way - it's got nothing to do with how realistic the graphics do or don't look,
Then why do people say things like "It looks like ass"? If the goal is photorealism, that's not a solely artistic or subjective judgment.
and everything to do with it not being what the artists, actual humans with ideas and points of view and things they wanted to communicate, worked hard to design in a specific way (the on/off comparison images are humor, and they signal to others that you're on that side of the debate, but graphics quality was never the actual point), only for some clueless tech evangelist to stumble drunkenly into the studio, spill a bucket of paint on the canvas, and declare that it looks better that way before hiccuping and passing out.
How do you really feel, stabbim? :)
You will sometimes hear an argument that it's "just like a person learning," by the way. First of all, no it isn't - while I will grant you that what a person learns informs their future cognition in a big way, it is possible, however rare, for us to come up with new applications and occasionally entirely new ideas. The old nothing new under the sun argument is a useful lens to look at the world through and understand that you're always building on what came before, but it's not entirely true. Sometimes people do have thoughts that haven't existed before. These neural networks can do no such thing. Second of all, none of us humans were built by a corporation for the express purpose of ripping off ideas as a business model. We exist, and we learn, and perhaps we even plagiarize things sometimes! But we are not a commercial product which was designed, invested in, built (and shoved into people's lives who didn't ask for it), solely for that purpose. And what exactly does all this theft, and sapping lazy people of the creative experience, accomplish? A bunch of tech bros get to drive another investment bubble. Cool. Cool cool cool cool cool.
You don't have to convince me on the humans first debate. I have my concerns, too, as a creator and consumer.
* There are other forms of machine learning which can be ok ethically. I'm looking forward to developments around medical diagnostics, protein folding, all kinds of stuff, and I wish more of the investment money was going in that direction. But those things aren't usually what's being debated. I only mention them to be clear on where I draw the line and why *generative* was in italics.
A lot is. This is a fan forum for an artistic endeavor at the end of the day, and I appreciate and share many of the concerns expressed. I don't think AI is useless for artists. For example, it can be helpful for organizing, for giving feedback, and for generating ideas, but my feeling about most AI output is that you have to know the subject, and the more you know about the subject, the more you can recognize bad output. The ethical piece is a big part of this which needs to be worked out. My company, like many tech companies, is all about AI now, at least rhetorically. Some of that is marketing, but I do know that much effort is being put into making sure AI is used ethically. Not every activity humans do is about an artist expressing their soul. I think it's possible that AI can reduce tedium and help humanity. I recognize that it is dangerous, too. I simply reject the categorical statements about it that I've seen here.
Bruh.
by Claude Errera
, Friday, March 27, 2026, 09:35 (18 days ago) @ Kermit
In many cases they can. You're focusing on the artistic realm. There is much happening beyond that.
I don't find this particularly useful in this debate. If you have specific examples, great, give them. ("It can be helpful for organizing" is not what I mean.) The issues with generative AI are (very often) specific to the ethical value of what's being created; if you have examples of generative AI being used in ways that are useful to humans without stealing from them, please, by all means, share.
But the coy responses you've been giving in this thread really add nothing except an "I know something you don't know" vibe.
Bruh.
by Kermit
, Raleigh, NC, Friday, March 27, 2026, 17:49 (17 days ago) @ Claude Errera
edited by Kermit, Friday, March 27, 2026, 17:58
In many cases they can. You're focusing on the artistic realm. There is much happening beyond that.
I don't find this particularly useful in this debate. If you have specific examples, great, give them. ("It can be helpful for organizing" is not what I mean.) The issues with generative AI are (very often) specific to the ethical value of what's being created; if you have examples of generative AI being used in ways that are useful to humans without stealing from them, please, by all means, share.
But the coy responses you've been giving in this thread really add nothing except an "I know something you don't know" vibe.
Claude, you're so good at projecting the vibe of "I'm just a neutral observer here." As an observer, you might also call out the broadsides weaving conspiracies or the sweeping catastrophizing statements. I was trying to say it's not that simple, and I feel a bit singled out for not providing footnotes.
If I came across as coy, let me say this about what I know: I know enough not to be arrogant about what I know. As I've alluded to, I work for a company that provides software (some of which incorporates AI agents) to many companies, academic institutions, and government agencies. From a professional perspective, I know more about the tools than specifics about the deliverables. I'm aware there are ethical pitfalls, and we should support companies that care about such things (Anthropic being a recent example in the news--I like to think that my company falls into that category). "Stealing" is a provocative and slippery concept when you're talking about publicly available data, but in many cases the data used by these AIs are in the public domain or are proprietary to the institutions using them. I'm not an expert, but companies are increasingly relying on synthetic, proprietary, or purpose-built datasets to train AI. These are being used in fields like medical imaging (I think stabbim mentions this in a different post), self-driving cars, fraud detection, and QA. I work in publications, for instance, and we're developing AI that will answer your questions--it will generate content untouched by human hands, yet based on content we produced. My company owns that content. I know people whose job it is to make sure that our software integrates and uses AI ethically. Privacy, for instance, is a BIG concern. (Before someone says I'm biased because I have a professional interest--the bulk of my career is in the rearview mirror. I'm not that attached. I'm kind of glad to be toward the end of my career. Being at the beginning would be cool. Would not like to be in the middle.)
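To make the publications example concrete, here's a toy sketch of the pattern I mean; the snippets and names are hypothetical, and a real system would use embeddings and an LLM rather than word overlap, but the point is that the answer is grounded in content the company owns:

```python
# A toy, hypothetical sketch of answering questions only from content a
# company owns. Real systems use embeddings and an LLM; word overlap here
# just shows the shape: retrieve an owned passage, then ground the answer.
OWNED_DOCS = [
    "Licenses are per-seat and renew annually.",
    "To install, run the installer and then restart the service.",
]

def retrieve(question: str) -> str:
    # Score each owned passage by the words it shares with the question.
    q_words = set(question.lower().split())
    return max(OWNED_DOCS, key=lambda d: len(q_words & set(d.lower().split())))

def answer(question: str) -> str:
    # The "generated" answer is constrained to the retrieved source text,
    # which is the ownership point: no scraped third-party training data.
    return f"Based on our documentation: {retrieve(question)}"

print(answer("How do I install the software?"))
```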
Also, I am allergic to what appear to me as simplistic narratives. I would react the same way if someone said AI was assuredly going to save mankind. AI is definitely going to be disruptive. So was the internet. Many of the best predictions about that didn't come true, but the worst predictions didn't either--we didn't correctly anticipate the good or the bad. I lean towards optimism. Plus, I'm old. I've seen things. You are old enough to have read FUTURE SHOCK in the 70s. My God, it's a wonder we're alive. We were supposed to be replaced by robots 35 years ago.
A friend at work who's a little bit younger than me told me about how he brought up AI to his adult kids, and they all but held up a cross and hissed at him. The reactions on this forum have been educational in that same way. I think there is a bit of hysteria in the mix. I agree with those who say AI is problematic for creatives (but I don't agree it's of no use to them). I also agree there are ethical issues around how LLM models were trained. I don't have the answers regarding how it will or should shake out. (We all care about intellectual property across the board, right?) To get to the issue that started this debate, I think photorealism is a problematic goal, but a proportion of gamers want it and make purchasing decisions based on it. Game developers have a budget and a deadline. It's a tough industry and expectations are through the roof. If there is a tool that gives devs control but helps them deliver photorealism (making their games more marketable) while allowing them more time to focus on story and gameplay, then I'm not against that. (Artists already use such tools—Adobe Firefly is an example trained on licensed content.) I want the companies that make good games to be able to have enough success to keep making them. The demands of art and commerce are often at odds. Finally, in this context, I guess my optimism rests on the belief that human-created art, whether with chalk or digital tools, will always be highly valued. I believe there will always be a demand for and therefore mechanisms to authenticate human-created art.
Bruh.
by Cody Miller
, Music of the Spheres - Never Forgot, Saturday, March 28, 2026, 22:15 (16 days ago) @ Kermit
Finally, in this context, I guess my optimism rests on the belief that human-created art, whether with chalk or digital tools, will always be highly valued. I believe there will always be a demand for and therefore mechanisms to authenticate human-created art.
This is the objection that is most fundamental, free from issues of copyright and stealing. No. No it won't. That's the era Generative AI will usher in if it becomes ubiquitous. Generative AI works, in their ouroboros of artistic cannibalism, will become the new objects of hyperreality, which will influence how we approach and see the real world. The real and hyperreal are not distinct and separate, but bleed together, influencing each other so that you can no longer meaningfully tell which is which. Real art will thus be incompatible and no longer ring true.
As I said. The very death of human culture.
We're already basically seeing it, even without AI. I can't recall any of the details, but it wasn't that long ago that someone who worked on a show for Netflix said they're basically making them as dumb as possible, because they know most people just have Netflix on in the background while they scroll TikTok or whatever. At a certain point, why shouldn't those shows just be made by AI if no one actually gives a fuck anyway? It's sad.
If it's what I'm thinking of, it's not quite as dire as that. The statement was that they were writing under the assumption that people were also using a second screen while watching. That meant working lots of informational recap into the dialog ("As you know, Bob, this is the person who stole the MacGuffin") and having something happen every so often that would bring people's attention back.
Now, I'm not saying that this makes for great art, but it's not so different than the way soap operas were and are written for much the same reasons, or CNN Headline News, which was formatted around the idea that the audience would be transitory, or even the Batman serials, because kids are easily distracted. Some TV is just assumed to be something that is on in the background during life. I think the problem is that Netflix is something people thought of as different/better than that (when in practice we have seen Netflix run all the same plays once they got near-monopoly power in the market).
My point is, it's not really all that new of a development, so at best you could say that the slide continues, in a direction we don't like. At the same time, radio used to be an appointment event for the family and eventually became that thing that makes commutes less boring.
If it's what I'm thinking of, it's not quite as dire as that. The statement was that they were writing under the assumption that people were also using a second screen while watching. That meant working lots of informational recap into the dialog ("As you know, Bob, this is the person who stole the MacGuffin") and having something happen every so often that would bring people's attention back.
Now, I'm not saying that this makes for great art, but it's not so different than the way soap operas were and are written for much the same reasons, or CNN Headline News, which was formatted around the idea that the audience would be transitory, or even the Batman serials, because kids are easily distracted. Some TV is just assumed to be something that is on in the background during life. I think the problem is that Netflix is something people thought of as different/better than that (when in practice we have seen Netflix run all the same plays once they got near-monopoly power in the market).
My point is, it's not really all that new of a development, so at best you could say that the slide continues, in a direction we don't like. At the same time, radio used to be an appointment event for the family and eventually became that thing that makes commutes less boring.
Vortech said it better than I could, but for more clarity:
Ben Affleck and Matt Damon claimed that Netflix producers asked for more exposition in the script such that viewers who weren't paying close attention wouldn't get lost so easily. Netflix has pushed back on this claim. I'm the first to admit that streaming services have made a lot of forgettable stuff, along with some gems. I think this is part of a larger story about the fracturing of culture. It's not the death of culture, though, IMHO.
Finally, in this context, I guess my optimism rests on the belief that human-created art, whether with chalk or digital tools, will always be highly valued. I believe there will always be a demand for and therefore mechanisms to authenticate human-created art.
This is the objection that is most fundamental, free from issues of copyright and stealing. No. No it won't. That's the era Generative AI will usher in if it becomes ubiquitous. Generative AI works, in their ouroboros of artistic cannibalism, will become the new objects of hyperreality, which will influence how we approach and see the real world. The real and hyperreal are not distinct and separate, but bleed together, influencing each other so that you can no longer meaningfully tell which is which. Real art will thus be incompatible and no longer ring true. As I said. The very death of human culture.
Hyperbole, thy name is Cody Miller. Alternatively, in this dystopian Blade Runner world where cloned snakes (speaking of ouroboros) perform with strippers, actual snakes will be extremely valuable.
I acknowledge that you may be right in that there are those that say that AI will destroy humanity. If that’s the case, then yup, no humans, no human culture. Otherwise, I disagree with your prognosis.
Look at history and how many times your prediction has been made, usually after some ground-breaking invention or art form. I’ll list a few: the printing press, the steam engine, the novel, radio, motion pictures, the automobile, airplanes, comic books, TV, computers, video games, and of course, the Internet.
More specifically, look at something like Napster. It is certainly true that it caused the death of the music industry as it was circa 1990. It gave rise to the streaming services that we have today. One could argue that those streaming services do not pay artists enough, and that it's more difficult to make a living as a musician or it is not as lucrative as it once was (or that music isn’t as central to our culture as it once was). Yet people are still making and enjoying music.
I hear the ringing of the death knell for your industry, but maybe the motion picture industry is transforming, not dying. Movies aren’t having as big an impact on the culture as they used to, but in some ways, movie making is more democratic than ever. I see great movies every year (fewer from Hollywood, but thank goodness human culture doesn’t spring solely from Hollywood). There are real pain points. Cassandras are continually identifying new poisons, but these art-killing toxins are met with pushback from creators. For example, those who champion practical effects over CGI. Those who champion film over digital. But even when new technologies are adopted, authenticity is king. Linklater couldn’t get the film stock that Godard used to make BREATHLESS, but he nonetheless could achieve something similar by shooting digital through certain lenses.
You might say that these examples are not analogous because AI is categorically different in that AI can replace the creative act itself. I think AI can only replicate the creative act, and the results might even fool us, but the minute we know a human wasn’t involved, it loses value. It’s a magic trick—not magic. Appreciation of the human involvement in the creation of art is a critical part of the creative circuit. Intrinsic in our appreciation is the knowledge that this is a form of a like being communicating to a like being. We aren’t going to stop wanting or valuing that.
As long as there are people, creatives will find ways to break through with authentic art, even in a field crowded with mediocre knock-offs. Your argument seems to be that once AI slop becomes common, there will be no more bandwidth. I disagree. There may be a fallow period, but identifiably human creation will find expression because the need for it is part of who we are. To paraphrase a fictional mathematician, art created by humans will find a way.
Bruh.
by EffortlessFury
, Monday, March 30, 2026, 02:44 (15 days ago) @ Kermit
You might say that these examples are not analogous because AI is categorically different in that AI can replace the creative act itself. I think AI can only replicate the creative act, and the results might even fool us, but the minute we know a human wasn’t involved, it loses value.
Two issues with this. Firstly, there are an unfortunate number of people who don't actually believe it loses value; they genuinely don't care. Secondly, the insidious bit about genAI is that it grows increasingly difficult to tell whether something was created by it or by a human. If the prompter never discloses that something was made with AI, and it becomes indistinguishable from handmade art, then when does the moment of value loss surface?
You might say that these examples are not analogous because AI is categorically different in that AI can replace the creative act itself. I think AI can only replicate the creative act, and the results might even fool us, but the minute we know a human wasn’t involved, it loses value.
Two issues with this. Firstly, there are an unfortunate number of people who don't actually believe it loses value; they genuinely don't care. Secondly, the insidious bit about genAI is that it grows increasingly difficult to tell whether something was created by it or by a human. If the prompter never discloses that something was made with AI, and it becomes indistinguishable from handmade art, then when does the moment of value loss surface?
I think enough people will care such that verifying human involvement will be its own thing—maybe a cottage industry or technology will help because that verification adds to the value. Remember, I’m arguing against the “very death of human culture.” There’s gonna be a lot not to like. There’s a lot not to like now.
We can dismiss the loss of professional art as "Hollywood" and say art will survive because it is part of human expression, and say human audiences will always prefer human expression…but.
But, the reality is that the people who decide what to fund rarely make that decision based on what the audience will value most, but rather on what is good enough. Nobody prefers narrow seats with less legroom, but that's what we got on planes, because capitalism is not about making what is best; it is about maximizing the value of a minimum viable product, and then milking the uber rich for upgrades. See also furniture that doesn't survive more than a couple of years. See also washing machines that will never get repaired. Will human art become a product only available to the ultra wealthy? We see that dichotomy now with original vs. reproduction, but what will prevent it from becoming true of the source, not just the object?
But, plenty of people don't seem to care, or even prefer the absence of humanity. Self-checkout at the supermarket is less efficient than what we had before. It's not a surprise. Why would you think one person who does this once a week would be more efficient than two people working together all day? Not to mention you have one employee covering a whole set of kiosks when issues come up, adding a delay. But even after it became clear that it was not about efficiency, people still chose it. The lines are literally longer for a worse experience, and the store can happily fire those people and save money. I can only assume people prefer to avoid human contact. We have a generation that reports being literally afraid to talk to someone live on the phone. Hugely popular social media accounts are fabricated. Vocaloid singers are some of the biggest artists in music.
But, nobody is born able to create at a high level. Someone needs to fund failure, because that's where people grow. If all of that stuff gets fed to the LLM, how will the artists we will need in the future eat?
But, for all of the downsides of mixing art and commerce — many of them listed above — someone needs to pay for it. We had a time in Europe where art was not funded. We called it "The Dark Ages." It preceded the Renaissance: a near Cambrian explosion of new ideas and forms of expression, all kicked off by an idea sweeping the land that humanism mattered, and that someone other than The Church could fund art. Not great for access if you're not a Medici, but rocks in a pond make ripples. Art became commerce in itself, but also got folded into all sorts of previously unconnected industries like fashion, architecture, furniture, and pottery, because someone funded the development of those skills. The objects of daily life, which constitute the vast majority of lived existence, could be touched by the thoughtful intention of a person. Separating out functional design — free to be taken over by the soulless — and some other Art with a capital "A" may feel like it's preserving the art that matters, but it's washing away most of the impact art has on our lives.
I agree with you on one thing — this isn't new. But it's not a direction I'm comfortable with, and it is a huge acceleration.
We can dismiss the loss of professional art as "Hollywood" and say art will survive because it is part of human expression, and say human audiences will always prefer human expression…but.
I said Hollywood in part just because that's where the most narcissists live. Great movies come from all over the world. Iran. Utah. I respect art professionals and professionalism in art.
But, the reality is the people who decide what to fund rarely base that decision on what the audience will value most, but rather on what is good enough. Nobody prefers narrow seats with less legroom, but that's what we got on planes, because capitalism is not about making what is best; it is about maximizing the value of a minimum viable product, and then milking the uber-rich for upgrades. See also furniture that doesn't survive more than a couple of years. See also washing machines that will never get repaired. Will human art become a product only available to the ultra-wealthy? We see that dichotomy now with original vs. reproduction, but what will prevent it from becoming true of source, not just object?
It won't. With a decent prompt I can create pretty good "art" in the style of Matisse and have it printed and framed for less than buying a print at Michael's. Better than a blank wall, but not worth much.
(An aside: I don't have your view of capitalism. What you say is true of monopolies and crony capitalism, and it's true that in this century we've become risk averse and don't allow creative destruction to happen as often as it should, but to say that's what capitalism is about doesn't give credit to free markets and what they have provided for the world. It's probably best to say we have different philosophies about this.)
But, plenty of people don't seem to care, or even prefer the absence of humanity. Self-checkout at the supermarket is less efficient than what we had before. That's no surprise: why would one person who does this once a week be more efficient than two people working together all day? Not to mention a single attendant now covers issues at every station, adding delays. But even after it became clear that it was not about efficiency, people still chose it. The lines are literally longer for a worse experience, and the store can happily fire those people and save money. I can only assume people prefer to avoid human contact. We have a generation that reports being literally afraid to talk to someone live on the phone. Hugely popular social media accounts are fabricated. Vocaloid singers are some of the biggest artists in music.
I share many of these concerns. Covid and the resulting lockdowns fucked up the world, and a lot of people have lingering mental illness. I believe social media is poison. I see signs of pushback, and I take the long view. I may not live to see us fully recover, but humans are the same as they've ever been. We're wired to need human contact and to be interested in each other. I believe in cycles. Whatever trend is happening, a countertrend is brewing.
But, nobody is born able to create at a high level. Someone needs to fund failure, because that's where people grow. If all of that stuff gets fed to the LLM, how will the artists we'll need in the future eat?
But, for all of the downsides of mixing art and commerce — many of them listed above — someone needs to pay for it. We had a time in Europe where art was not funded. We called it "The Dark Ages." It preceded the Renaissance: a near-Cambrian explosion of new ideas and forms of expression, all kicked off by an idea sweeping the land that Humanism mattered, and that someone other than The Church could fund art. Not great for access if you're not a Medici, but rocks in a pond make ripples. Art became commerce in itself, but also got folded into all sorts of previously unconnected industries like fashion, architecture, furniture, and pottery, because someone funded the development of those skills. The objects of daily life, which constitute the vast majority of lived existence, could be touched by the thoughtful intention of a person. Separating out functional design — free to be taken over by the soulless — from some other Art with a capital "A" may feel like it's preserving the art that matters, but it's washing away most of the impact art has on our lives.
Patronage is always an issue. Starving artists existed before AI. There is a school of thought that serious artists don't really have a choice in it--they are going to create art regardless. I do worry about how people are educated and develop taste, but that's not a new worry either.
I agree with you on one thing — this isn't new. But it's not a direction I'm comfortable with, and it is a huge acceleration.
You make a lot of good points. One of the problems with arguing against the pessimism is that I can't describe what it is that will keep everything from being terrible. From my perspective, it feels like every era I've lived through has been the best and the worst of times simultaneously. I love the arts, and all I can say is every year someone makes something fresh that blows me away. It can be a movie, a record, a book, a TV show, a game. It felt like it used to happen more regularly, but I'm probably jaded now. The point is, it hasn't stopped happening. If I find out that something that blows me away was created by AI, I might have to rethink my priors, but that hasn't happened yet. I have a hard time conceptually distilling humans out of the equation.
Final word on the "very death of human culture": throughout history all doomsayers have underestimated our ability to adapt.
Bruh.
by Claude Errera
, Friday, April 03, 2026, 13:08 (10 days ago) @ Vortech
Self-checkout at the supermarket is less efficient than what we had before. That's no surprise: why would one person who does this once a week be more efficient than two people working together all day? Not to mention a single attendant now covers issues at every station, adding delays. But even after it became clear that it was not about efficiency, people still chose it. The lines are literally longer for a worse experience, and the store can happily fire those people and save money. I can only assume people prefer to avoid human contact.
I agree with a lot of what you said in this post, but I wanted to point out that this is demonstrably wrong (at least in my experience).
It is quite possible that a cashier/bagger combo can process my groceries faster than I can... but I can do it in about 40 seconds (on one of my average shopping trips), so the savings would be negligible... and completely wiped out by the 10 minutes I need to wait in the line that has a cashier (at my supermarket), compared to the almost-always-immediate availability of a self-checkout station (even in busy times, my wait is under a minute). It's been 13 years since I used a different setup regularly, but my recollection of the last house is similar (if not exact). The circumstances of checkout there were vastly different (the supermarket was a 20-minute drive instead of a 3-block walk, I visited once every week or so instead of every couple of days, there were 10 or so manned checkouts instead of 2, and 5 or so self-checkout stations instead of 20), but the self-vs-manned calculation was actually about the same: it was usually faster to do it myself, if you count the time from entering the checkout line to exiting the store.
Clearly your situation is different - I'm just pointing out that 'people prefer to avoid human contact' hasn't been necessary to explain my choices for at least 27 years, in two substantially different shopping situations.
Bruh.
by Coaxkez, got that plasma/BR55 hit, Tuesday, March 31, 2026, 13:49 (13 days ago) @ Kermit
edited by Coaxkez, Tuesday, March 31, 2026, 13:54
As long as there are people, creatives will find ways to break through with authentic art, even in a field crowded with mediocre knock-offs. Your argument seems to be that once AI slop becomes common, there will be no more bandwidth. I disagree. There may be a fallow period, but identifiably human creation will find expression because the need for it is part of who we are. To paraphrase a fictional mathematician, art created by humans will find a way.
I think the issue is less the death of the human creative impulse and more the death of the ability to earn any money from being creative. In a capitalist society, money drives progress. No money, no progress.
And, as others have said, there is a great concern that genAI will lead to a scenario in which human-crafted art is indistinguishable from machine output. If that scenario were to become a reality, I would imagine that creativity will move increasingly into the live performance space, where the impact of genAI will be felt to a lesser degree. (But it will still be felt.)
There's a very healthy discussion going on here, but I'm too busy right now to get into the weeds of this topic (not to mention too laconic in general). Honestly, it depresses me and I try not to think about it too much either.
As long as there are people, creatives will find ways to break through with authentic art, even in a field crowded with mediocre knock-offs. Your argument seems to be that once AI slop becomes common, there will be no more bandwidth. I disagree. There may be a fallow period, but identifiably human creation will find expression because the need for it is part of who we are. To paraphrase a fictional mathematician, art created by humans will find a way.
I think the issue is less the death of the human creative impulse and more the death of the ability to earn any money from being creative. In a capitalist society, money drives progress. No money, no progress.
Thanks, Coaxkez, but I would tweak that to change the causal direction. In a capitalist society, innovation is rewarded. One could add the disclaimer "in theory" to both of our statements, but that's a broader argument beyond our scope.
About rewards, I've always believed that art could be both great and popular (that is, lucrative)--the Beatles, STAR WARS, Shakespeare, for example. I don't think AI changes that. Yes, AI makes it easier to make derivative art, and the latter has always been good enough for many people, but I believe the really great stuff can rise to the top. Can AI make the really great stuff? I'm not convinced. It's like Steve Martin says, be so good they can't ignore you. I think there will always be people who strive for that. And that output will stand out. Let's bring it back to this forum and subjects of interest here. I think that Marathon's art style is truly great (one employee's much-publicized mistake aside). It's fresh and interesting. It fits the lore in that almost every object is 3-D printed. Maybe Bungie will fail and that would support your thesis, but I bet this game will be talked about for a long time, regardless. What's good is good.
There seems to be a thread in some of this discussion that all-powerful forces are behind everything that happens, and they can decide what becomes popular. There is a long history of people who have wanted to decide what becomes popular, but that doesn't mean they can. Payola might have been able to buy radio airtime, but it couldn't guarantee a hit. Decca rejected the Beatles. Lucas's peers thought a rough cut of STAR WARS was an embarrassment. Andy Weir had to self-publish THE MARTIAN. There is more content than ever and more people to consume it (ever read this? A thousand fans may be all you need). Models for monetization have been busted many times over, and I think new ones will rise to replace them. I concede that I can't describe these models in detail, but crowdfunding is an example.
And, as others have said, there is a great concern that genAI will lead to a scenario in which human-crafted art is indistinguishable from machine output. If that scenario were to become a reality, I would imagine that creativity will move increasingly into the live performance space, where the impact of genAI will be felt to a lesser degree. (But it will still be felt.)
I agree. We value what we know is human. Maybe, like an infinite number of monkeys, AI can come up with Shakespeare, but I'll believe it when I see it. I think it's possible that there is a new equilibrium, that as people become more exposed to AI creations they become more attuned to other signals, and reward accordingly. An example is CGI, which used to be enough to get people into the theater. Now we want more.
There's a very healthy discussion going on here, but I'm too busy right now to get into the weeds of this topic (not to mention too laconic in general). Honestly, it depresses me and I try not to think about it too much either.
I've enjoyed it because I find the subject fascinating. There is plenty to get depressed about, and I feel that, too, especially when I spend too much time online, where hype and hate are the battling gods.
I hate that I feel like I am opening a door when I don't have time to explain what's inside. But I am an expert in copyright law, and
but in many cases the data used by these AI are in the public domain
Needs to be understood in the context of Public Domain being VERY hard to determine even within a single legal system like America's, let alone internationally. Available to access by everyone is not the same thing as being in the Public Domain.
or are proprietary to the institutions using them. I’m not an expert, but companies are increasingly relying on synthetic, proprietary, or purpose-built datasets to train AI.
I'm aware of many products that use proprietary data for context training, but none that use only it (and PD work) for a foundation model.
I hate that I feel like I am opening a door when I don't have time to explain what's inside. But I am an expert in copyright law, and
but in many cases the data used by these AI are in the public domain
Needs to be understood in the context of Public Domain being VERY hard to determine even within a single legal system like America's, let alone internationally. Available to access by everyone is not the same thing as being in the Public Domain.
This is true. I think that in my original comments, when read in full, I acknowledge that this is a difficult area. There are many unsettled questions, and you are the copyright lawyer. The courts will continue to work it out, and I think (and hope) that more obvious copyright abuses will be curtailed over time.
or are proprietary to the institutions using them. I’m not an expert, but companies are increasingly relying on synthetic, proprietary, or purpose-built datasets to train AI.
I'm aware of many products that use proprietary data for context training, but none that use only it (and PD work) for a foundation model.
In specialized domains more and more it’s a mix of public, proprietary, and synthesized data.
The debate about fair use will continue, and should. I think having a purity test regarding sources is an unworkable approach. I think the more practical approach is asking the question: how do we keep this technology from violating others' rights without limiting its ability to help us solve problems? I'm not wholly optimistic or pessimistic, but I'm more worried about privacy in relation to AI than I am about whether the Grace character looks like Grace in a demo.
Bruh.
by Claude Errera
, Friday, April 03, 2026, 13:29 (10 days ago) @ Kermit
Claude, you're so good at projecting the vibe of "I'm just a neutral observer here." As an observer, you might also call out the broadsides weaving conspiracies or the sweeping catastrophizing statements. I was trying to say it's not that simple, and I feel a bit singled out for not providing footnotes.
Apologies, I've been out of town and unable to look at this forum - it was wrong of me to call you out like that and then leave.
It wasn't so much that I was trying to be neutral, it was more that you were arguing in a way that was coming off as coy, and I was only struck by it because I read the whole thread in one go (well, the thread as it stood last week, I guess), and three separate posts from you (here, here, and here) contained comments that seemed to be teasing knowledge without any receipts. And to be clear - I wasn't calling out the making of an argument without backup... I was calling out the TONE of the argument. I read these three statements, in three different posts, within a couple of minutes of one another, and they painted a picture which you probably didn't intend at all: "...but I'm also hopeful about some potentially life-saving benefits", "I bet you're right.", and "You're focusing on the artistic realm. There is much happening beyond that." There were 2 full days between the first and last of those... but I read them all together, and the feeling that was generated was "Kermit knows something about this, but is choosing to tease instead of educate."
I'm sorry. I didn't mean to single you out. I was just looking to advance the discussion past "Yes it is!" "No it isn't!"
Claude, you're so good at projecting the vibe of "I'm just a neutral observer here." As an observer, you might also call out the broadsides weaving conspiracies or the sweeping catastrophizing statements. I was trying to say it's not that simple, and I feel a bit singled out for not providing footnotes.
Apologies, I've been out of town and unable to look at this forum - it was wrong of me to call you out like that and then leave.
It wasn't so much that I was trying to be neutral, it was more that you were arguing in a way that was coming off as coy, and I was only struck by it because I read the whole thread in one go (well, the thread as it stood last week, I guess), and three separate posts from you (here, here, and here) contained comments that seemed to be teasing knowledge without any receipts. And to be clear - I wasn't calling out the making of an argument without backup... I was calling out the TONE of the argument. I read these three statements, in three different posts, within a couple of minutes of one another, and they painted a picture which you probably didn't intend at all: "...but I'm also hopeful about some potentially life-saving benefits", "I bet you're right.", and "You're focusing on the artistic realm. There is much happening beyond that." There were 2 full days between the first and last of those... but I read them all together, and the feeling that was generated was "Kermit knows something about this, but is choosing to tease instead of educate."
I'm sorry. I didn't mean to single you out. I was just looking to advance the discussion past "Yes it is!" "No it isn't!"
Fair enough, and it's hard to convey in text that I intended there to be a little bit of teasing in that paragraph. Regarding my posts that you cite, some of that was me being lazy (in which case, I deserve criticism), and most of it was either me not having the time to flesh things out or me still gathering my thoughts. For instance, "I bet you're right" wasn't meant to be snooty (and I hope stabbim didn't interpret it that way), but a quick way to acknowledge that yes, we're probably defining our terms differently. Regardless, I don't think I can really think through something without writing about it, and I'm grateful to have this place to unwind my thinking and have it challenged.
If you have specific examples, great, give them. ("It can be helpful for organizing" is not what I mean.) The issues with generative AI are (very often) specific to the ethical value of what's being created; if you have examples of generative AI being used in ways that are useful to humans without stealing from them, please, by all means, share.
Generative AI with respect to coding agents is probably going to revolutionize the engineering industry. From my perspective in structural engineering specifically, our digital tools have long suffered from a lack of development investment. Being a software developer for a niche structural analysis tool isn't exactly sexy when compared to something like developing a video game, so mostly those companies seem to rely on engineers who happen to have some coding background (a rarity). It's even worse for bespoke tools developed on a per-company basis - usually some random guy who knew VBA built some spreadsheet that no one understands, and then when that person leaves the company and the governing structural codes get updated, the tool gets binned because nobody has the expertise to update it.
Communication between companies on projects is shackled by historical methods - until very recently at my company, the best practice for one workflow involved another company plotting numbers on drawings; our company transcribing those numbers into our spreadsheets and creating new drawings with numbers based on those original numbers; handing those to another company, who would plot those numbers on yet another drawing; and then us manually checking each number on hundreds of new drawings. Nobody questioned whether this was a reasonable approach until I got involved. I spent hundreds of hours developing a Matlab script to streamline it, which was my only option since Matlab was the only coding language I was recently familiar with. If I hadn't had at least minimal programming experience in my background, they'd probably still be doing it the old way.
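To make that concrete: the core of that kind of script is nothing exotic. Here's a minimal sketch of an automated number cross-check, written in Python rather than the Matlab I actually used, and assuming both companies' numbers can be exported to CSV files keyed by a shared item ID; the file names, column names, and tolerance are hypothetical, just for illustration.

```python
import csv

def load_values(path, key_col="item_id", value_col="value"):
    """Read a CSV export into a dict mapping item ID -> numeric value."""
    with open(path, newline="") as f:
        return {row[key_col]: float(row[value_col]) for row in csv.DictReader(f)}

def cross_check(their_path, our_path, tolerance=0.01):
    """Return every item whose value differs between the two exports."""
    theirs = load_values(their_path)
    ours = load_values(our_path)
    mismatches = []
    for item_id, their_value in theirs.items():
        our_value = ours.get(item_id)
        if our_value is None:
            mismatches.append((item_id, their_value, "missing from our export"))
        elif abs(our_value - their_value) > tolerance:
            mismatches.append((item_id, their_value, our_value))
    return mismatches

if __name__ == "__main__":
    # Hypothetical exports: the other company's drawing numbers vs. our spreadsheet.
    for mismatch in cross_check("their_drawing_numbers.csv", "our_spreadsheet.csv"):
        print(mismatch)
```

The point isn't the specific code; it's that a check which used to mean days of eyeballing hundreds of drawings collapses into a comparison you can rerun in seconds whenever the numbers change.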
Now, in just the last 2 years, these AI chatbots and agents have gotten so good at programming that what was taking me weeks now takes just a few hours or less. That means we can now continue to build and maintain useful tools in a reasonable amount of time, in any language we choose. Other people can iterate on those tools far more easily. Better tools will let us work faster, design more efficient things, and make fewer mistakes doing so. The biggest problem we all see right now is that we will need to come up with different methods of training - previously the best way to train new engineers was to have them do extremely tedious things over and over until they understood the simple thing inside-out. That will soon no longer be an option.
The training problem aside, I am quite optimistic that generative AI from a coding perspective will be a huge boost to the engineering industry. We've been working with one hand tied behind our back, and I envision AI as not only untying the arm, but maybe giving us a few more limbs to work with.
Bruh.
by Claude Errera
, Friday, April 03, 2026, 13:41 (10 days ago) @ squidnh3
If you have specific examples, great, give them. ("It can be helpful for organizing" is not what I mean.) The issues with generative AI are (very often) specific to the ethical value of what's being created; if you have examples of generative AI being used in ways that are useful to humans without stealing from them, please, by all means, share.
Generative AI with respect to coding agents is probably going to revolutionize the engineering industry. From my perspective in structural engineering specifically, our digital tools have long suffered from a lack of development investment. Being a software developer for a niche structural analysis tool isn't exactly sexy when compared to something like developing a video game, so mostly those companies seem to rely on engineers who happen to have some coding background (a rarity). It's even worse for bespoke tools developed on a per-company basis - usually some random guy who knew VBA built some spreadsheet that no one understands, and then when that person leaves the company and the governing structural codes get updated, the tool gets binned because nobody has the expertise to update it.
Communication between companies on projects is shackled by historical methods - until very recently at my company, the best practice for one workflow involved another company plotting numbers on drawings; our company transcribing those numbers into our spreadsheets and creating new drawings with numbers based on those original numbers; handing those to another company, who would plot those numbers on yet another drawing; and then us manually checking each number on hundreds of new drawings. Nobody questioned whether this was a reasonable approach until I got involved. I spent hundreds of hours developing a Matlab script to streamline it, which was my only option since Matlab was the only coding language I was recently familiar with. If I hadn't had at least minimal programming experience in my background, they'd probably still be doing it the old way.
Now, in just the last 2 years, these AI chatbots and agents have gotten so good at programming that what was taking me weeks now takes just a few hours or less. That means we can now continue to build and maintain useful tools in a reasonable amount of time, in any language we choose. Other people can iterate on those tools far more easily. Better tools will let us work faster, design more efficient things, and make fewer mistakes doing so. The biggest problem we all see right now is that we will need to come up with different methods of training - previously the best way to train new engineers was to have them do extremely tedious things over and over until they understood the simple thing inside-out. That will soon no longer be an option.
The training problem aside, I am quite optimistic that generative AI from a coding perspective will be a huge boost to the engineering industry. We've been working with one hand tied behind our back, and I envision AI as not only untying the arm, but maybe giving us a few more limbs to work with.
I can get behind all of this - I'm not an engineer, but I do enough backend work that I've seen many, many situations where people are doing things the hard way because nobody had any skills to make an easier way possible, and I've seen a LOT of 'good enough' solutions that were hacked together by a single talented person fail completely once that person moves on.
I have not yet seen the power of AI to ameliorate those problems personally... but I know that's because I haven't spent enough time learning how to make it happen. For me, it's a pretty important question - I want to retire, and I'm the guy whose moving-on is gonna break a lot of 'good enough' solutions. I guess I'm happy to hear success stories.
Perhaps this will help.
by Cody Miller
, Music of the Spheres - Never Forgot, Sunday, March 22, 2026, 18:29 (22 days ago) @ Kermit
edited by Cody Miller, Sunday, March 22, 2026, 18:34
If this means devs can’t control the output, that’s one thing.
THEY CAN ALREADY CONTROL THE OUTPUT. It's called a rendering engine, and games already do this! If they wanted it to look like this, it would for everybody.
It's a fucking black box, dude. You can't control it any more than you can control the exact pixels Midjourney makes for you when you do a prompt.
Others who have seen more of this say it is inconsistent from scene to scene. In other words, even more totally fucking useless.
The end result of Generative AI will be nothing less than the death of human culture. You need to resist it at every step. Reject it and show no mercy, even in the smallest of instances. Not an exaggeration.