Communication Implications of AI-Generated Models

As the fashion industry increases its use of AI-generated models, students might explore whether communication can play a role in preventing the perpetuation of unrealistic body-image ideals.

PBS News Hour reports how the industry is deploying AI to reduce costs. One concern is whether viewers will feel increased pressure to achieve a perfect body. Some argue that AI images decrease body-image pressure because viewers know the models are not real and their bodies are, therefore, unattainable. But, at a minimum, that argument would require disclosure—clear labeling—that images are AI-generated.

We have no standard, requirement, or means of enforcement for such messages today. However, we do see similar regulations from the Federal Trade Commission (FTC) for product endorsements, with influencers disclosing a “paid partnership” or identifying sponsorships. Similarly, the FTC requires companies to label ads with “actor portrayal” or “dramatization” when actors, rather than real patients, deliver testimonials—for example, for a pharmaceutical drug.

Students might explore whether something similar could work for the fashion industry. Still—even with the clearest messaging—could AI models do harm? The potential for comparison may still exist, as it does today. We know models’ images are Photoshopped, but that knowledge doesn’t seem to reduce young people’s aspirations toward these ideals or the harm they inflict on themselves to achieve them. There’s only so much communication can do.

