February 21, 2023 @ 12:00am

Robot painting a picture

This image was generated by OpenAI DALL·E 2, keywords: "The underground lair of an all knowing, all seeing mechanical artist."

 

So, 2023 has ushered in the era of legitimately applying artificial intelligence to the world of art. Recently, we’ve seen a pretty cool influx of AI into everything from our friends’ illustrated selfies to their animated social posts and, unless you’re living under a rock, you’ve read some of the AI-generated text from ChatGPT. And it’s all rather novel, fun, interesting, even alluring. Perhaps you’ve even signed up to scan your face to make some AI-generated keepsake. But as industry professionals, what do we really think, or even understand, about this next phase of technology?


At its core, this latest round of AI launches has upgraded the technology from a bunch of proofs of concept to actual products, and these products could have a considerable impact on life as we know it — probably both good and bad. It’s kind of like the era of centralizing data and building out the internet. When was the last time you went to a library or opened an encyclopedia? If you are anything like the people who live in my house, you shout myriad brain twisters at an invisible, all-knowing, always-listening voice that lives in a disc on your kitchen counter, and she expertly answers questions — well, most of them — instantly. And usually this is a good thing. You can print your paper, send your email with confidence, or break up the brewing argument about “who is right.” But those are just pools of data: answers to questions, the correct order of letters in a word. What about something more subjective, such as the nuanced decisions of creating art, especially the kind of art that people pay for? Let’s call this “commercial art”: movies, music, advertising; everything outside of, let’s say, fine art, or art made for the sole purpose of the artist expressing themselves for expression’s sake.

 

At its core, this latest round of AI launches has upgraded the technology from a bunch of proofs of concept to actual products, and these products could have a considerable impact on life as we know it — probably both good and bad.

 

AI, or artificial intelligence, is a system trained to make decisions by painstakingly collecting millions, even billions, of nuances of artistic decision making. Right now, humans are still at the controls; someone (still flesh at this point) must “decide” what to train the AI on. But when it comes to art or music or film, the original work used to feed the AI the data to build upon can be ANYTHING written or visual, and most often it is protected under copyright. Yet whether works have lapsed into the public domain, or the original artist is no longer alive, many of these things already exist out in public, strict copyright halos intact: we are all able to look at them, just not own or sell them without paying the source. And for that matter, the more obscure or old the images, language and sounds are, the more countless copies and versions exist from people who have paid the tariff to own or distribute them. Almost all of these can be looked at without permission and without anyone aware they are being consumed.

But there is a real difference from outright stealing the work. You see, AI and ML (machine learning) are not simply duplicating the copyrighted or owned work and redisplaying or reusing it as a normal IP thief would. No, instead they are dissecting it, learning and deducing the nuances of how it might have been created, comparing those pieces of data against any available resource, online or otherwise, to start making choices about what they would do if those were their artistic foundations, and then building out those libraries alongside hundreds, eventually millions, of other decision libraries that can be cross-referenced to make new decisions. Scary. To me anyway, but I am old-school skin and bones.

 

Illustration of creatives in an unemployment line

This image was generated by OpenAI DALL·E 2, keywords: "Unemployment line filled with sad, well-dressed intellectuals."

 

ART IMITATING … ART

Take the Ouchhh AI:VanGogh experience. Nine hundred of Van Gogh’s oil paintings and 1,500 of his sketches were read and processed via AI to create 12 billion brush strokes that a computer-driven intelligence can intellectualize to determine how, where and when to place things, making immense calculations and decisions that allow AI to generate work in the style of Van Gogh … forever. It could make billions more “Van Gogh-like objects” until humanity is long gone, and then keep going. Is this a good thing? And while this experience was created to push the boundaries of public art and, in turn, create a new art form (AI art), someone paid for this massive undertaking and interactive display to be built, programmed and housed. And someone could profit greatly from a lend/lease of this AI and its capabilities to many others. Van Gogh is long gone (he died in 1890), but would he have wanted his work included in this technology? Would any artist? At this point, does the artist have a say, or do we, as inheritors of their work, get to make that decision on their behalf? After all (and with all due respect), Van Gogh is gone, so does it even matter?

Karla Ortiz, concept artist, film stylist and wardrobe creator, recently told NPR’s “It’s Been a Minute”: “This is a level of exploitation I’ve never seen, to see your heart, soul and art without your permission, for profit is very, very invasive.” Ortiz has brought things to life in cinema for Marvel and Disney and is best known for her work transitioning the character of Dr. Strange from the comic books to its widely recognized film version. But the toil and creativity Ortiz brought to the nuances of that character (and many others), while owned by Marvel, also belong to Ortiz. They are her artistic fingerprints. Since most of her work, via studio releases, is widely available online, it exists in the open, without any real “limits” regarding what an artificial intelligence can use, or how it can interpret her part of the work to “learn something.” While Ortiz toiled a lifetime to develop a style that people find to be uniquely hers, her direction and decisions are at risk of no longer being privately owned or monetized by her once ML/AI learns them. So, is this process infringing on her copyright? Or is it doing something even more damning: stripping her of her individuality as an artist? After all, they are training a machine to deconstruct how Ortiz “got” to her final and marketable product. According to NPR, this has sparked a recent class-action lawsuit, of which Ortiz is a plaintiff, against Stability AI, Midjourney and DeviantArt, accusing these companies of training AI on artists’ specific work without the artists’ permission.

 

“This is a level of exploitation I’ve never seen, to see your heart, soul and art without your permission, for profit is very, very invasive.”

— Karla Ortiz, concept artist, film stylist and wardrobe creator, to NPR’s “It’s Been a Minute”

 

On the other hand, don’t we sometimes use Pinterest and Instagram as fuel for our creative process, too? Aren’t we doing something similar by scanning things other smart, talented creators made for a sip of the free IP? Noodling around visually and cognitively for inspirational energy to influence the way WE think and create? Personally, I say yes. And I am guilty of it. And would I care if someone scoured my output of art, writing, music and photography for learning cues? No. At least, not for now. But in 10 years, when I am no longer employable because I make buggy whips, I will probably change my answer and take to the empty, dystopian streets in protest. Unless, as it usually does, society changes its approach to how this unfolds.

Right now, like most new beginnings, it's complicated, running wild and unregulated.

And this is just art. A lot of people might not know it, but their data and their behaviors have been tracked for years and continually fed into these AI modelers, first as a nonprofit scientific venture trying to answer the question, “Is it possible?” Once that question was answered and those behaviors were learned, AI suddenly became a marketable product, and the very fuzzy gray area of “Who owns what?” began.

 

A SOBERING REVELATION

And now for the dangerous and far-reaching implications of all of this: According to several sources, from Wired to Ars Technica, researchers, at this point, are unsure whether an AI can be taught to forget something it learned without redesigning the entire model from scratch. That doesn’t bode well for the ROI of any conceptual machine intelligence project, and how can something like that even be successfully validated (or, worse yet, regulated)? And in the end, who is the advocate for the artist in this equation? Or even for us as lay-humans?

I asked a sampling of industry creatives for their thoughts. I asked them specifically about their vantage point before and after understanding that everything they ever wrote, photographed, painted, played, designed, texted or scribbled in a diary could forever belong to someone (or something) else. As in “MINE! No take backs.” I wanted to know whether, once they knew their “essence” could be mapped into something artificial, they would feel the same way about sharing it. The results were a bit surprising. I started with the question, “Is art ‘art’ if it’s not made by a human?”

Justin S., a technical project manager for Amazon, said, “If you are the artist, probably not, because your experience is removed, but if you are not the artist, then it is art, because the viewer won’t know.” He went on to say that (in his experience) there is generally no revenue in the generation of art, and from that standpoint he exclaimed, “Let the machines learn!”

Johnny B., a touring standup comic and copywriter, said, “I think art can be art if it’s not made by humans, but there is a big asterisk: if a dog plays in mud and walks across paper, that’s not art. Not unless someone sees it, frames it, and hangs it on their wall; then it becomes art.” He went on to say that he would not share any of his comedy, and “it’s kind of a buyer beware / no refunds type of thing, especially this early in AI.”

Art historian, fine artist and musician William S. answered, “Yeah, I don’t think I would have a problem recognizing something beautiful made by an AI, a monkey or an alien. If it’s AI we’re talking about, then a human made the AI, which made the art.” He went on to say he would freely give permission for a painting, drawing or musical composition to an AI project to create more art.

When each of them was asked whether they would change their answers if they knew AI couldn’t unlearn something, most were united in their responses. I think Justin summed it up best: “No, because I think I’m desensitized already, because the internet already doesn’t forget.” There was one artistic holdout, Chicago-based photographer Jeffery J., who said, “It may as well be a twist on that old George Bernard Shaw quote, ‘Those who can do — do, and those who can’t — teach,’ although now it might read, ‘those who can’t — write code.’ ”

 

Robot painting a picture on an easel of a tree in the background

This image was generated by OpenAI DALL·E 2, keywords: "A machine learns to paint fine art."

 

UNAWARE DOESN’T MEAN UNINVOLVED

Truth be told, our data is already being used, without consent, to design systems that act more human, make decisions, and make us more comfortable spending time with technology. All inspired by investors looking to boost a user’s time online and increase revenue through purchases and ads. Where do these data troves start and stop? (I imagine you, like me, reading this and thinking, “I wonder what e-signatures or facial scans or personal data I have given up to open an app or to quickly download an asset,” because that number is in the hundreds, if not thousands.)

At the end of the day, we have made a trade with data — if we need it and want it, we are willing to blindly enter contracts that could have many, many unintended consequences. Case in point: During your last iPhone update, did you read, or even look at, the latest terms and conditions from Apple, or did you just click “agree” to get back to TikTok? And worse yet, even if the dust settles and the system eventually sides with the individual, it probably will be too late. The millions of instances of your data will simply be irretrievable, forever enmeshed in the world’s data.

 

At the end of the day, we have made a trade with data — if we need it and want it, we are willing to blindly enter contracts that could have many, many unintended consequences.

 

Full disclosure: As a 20-year “creative” in the industry, I am not concerned that AI and ML will take over our jobs or our lives. Perhaps AI will eventually prevail, but right now I focus more of my concern on these Wild West days of this emerging tech, and on how private capital funding will propel it to a sharp and fast rise. Putting profits over politics, many will be chasing that “Wish I had bought Google stock at $25 or Apple at $11” feeling, because greed, especially that kind of greed, can fuel a sinister rise in human collaboration and problem solving (the not-so-good kind), and we will all, as a human race, be in more trouble if/when that singularity dawns on us. Suffice it to say, it will be more disheartening than figuring out which agency is hiring bots over people.

And judging by how many times I need to shout, “Alexa … OFF!” I am fairly certain that we are still in the nascent days of this emerging technology.

About the author: Jim Toth
