I had a conversation with a relative that left me feeling like an idiot. What I was TRYING to say is that there is an unexploited space for Spotify to use AI. In the scenario I had in mind, you would type a concept or keyword into a playlist and AI would generate a list of songs from that.
I was a bit inarticulate about the concept I was proposing and I came across sounding like an idiot. While I may be an idiot, I continue to think about how I could have put a finer point on what I was trying to say.
While I don’t think that all of Spotify’s playlists are done manually, I do think that there is a place for harder AI to be used by streaming services. Spotify knows me really well, and if you hooked that knowledge up to a harder form of AI, I think some pretty interesting things could come about, not just with keywords but with discovery.
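To put a slightly finer point on the idea, here is a minimal sketch of what keyword-to-playlist generation might look like. It assumes a hypothetical ask_llm() helper standing in for whatever chatbot service you would actually call; this is not Spotify’s API or any real integration, just an illustration of the concept.

```python
# Hypothetical sketch: turn a keyword or concept into a playlist of songs.
# ask_llm() is a placeholder for whatever chatbot/LLM API you would actually use.

def ask_llm(prompt: str) -> str:
    """Placeholder for a real LLM call; should return one 'Artist - Title' per line."""
    raise NotImplementedError("Wire this up to the chatbot service of your choice.")

def generate_playlist(keyword: str, track_count: int = 20) -> list[str]:
    """Ask the model for songs matching a concept and return them as a simple list."""
    prompt = (
        f"Suggest {track_count} songs that fit the concept '{keyword}'. "
        "Return one song per line in the format: Artist - Title"
    )
    response = ask_llm(prompt)
    # Keep only non-empty lines that look like 'Artist - Title'.
    return [line.strip() for line in response.splitlines() if " - " in line]

# Usage (hypothetical):
# for track in generate_playlist("rainy Sunday morning"):
#     print(track)
```

A real version would also feed in listening history, which is the part where Spotify’s knowledge of the user would do the heavy lifting for discovery.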
I saw someone angry at this output from OpenAI’s ChatGPT, and the fact that it made him angry enraged me.
It makes me angry because this is why we can’t have nice things. Members of the “men’s movement” want the right to force a chatbot to say hateful, misogynistic things about women — starting with “jokes” and getting ever worse from there.
I think, given how society tends to shit on women in general, that the last thing we need is a fucking chatbot adding to the pile-on. And, yet, here we are, in 2022, almost 2023, with dipshits getting angry that they can’t hate on women. But, gaming out this particular situation, I think we’re in for a very, very dark future where a war or wars could be fought over who gets to program the “bias” into our new chatbot overlords.
When I was a very young man, it occurred to me that we might create our own aliens should AI (AGI) ever come into being. Now, many years later, I find myself dwelling upon the same thing, only this time in the context of the historical significance of the coming chatbot (and eventually, potentially, AGI) revolution.
If we create “The Other” — the first time Humans would have to deal with such a thing since the Neanderthals — what would be the historical implications of that? Not only what would be the historical equivalent of creating The Other, but what can history tell us about what we might expect once it happens?
Well, let’s suppose that the creation of The Other will be equal to splitting the atom. If we’re about to leave the Atomic Age for the AGI Age, then…what does that mean? If you look at what happened when we first split the atom, there were a lot, and I mean A LOT, of harebrained ideas as to how to use nuclear power. We did a lot of dumb things and we had a lot of dumb ideas about essentially using A-bombs on the battlefield or to blow shit up as needed for peaceful purposes.
Now, before we go any further, remember that things would move much, much faster with AGI than they did with splitting the atom. So what would happen is that a lot of high-paying jobs might just vanish virtually overnight, with some pretty massive economic and political implications. And, remember, we’re probably going to have a recession in 2023, and if ChatGPT 4.0 is as good as people are saying, it might be just good enough that our plutocratic overlords will decide to use it to eliminate whole categories of jobs, just because they would rather not cut checks that pay human beings a living wage.
If history is any guide, after much turmoil, a new equilibrium will be established, one that seems very different than what has gone before. Just like how splitting the atom made the idea of WW3 seem both ominous and quaint, maybe our creation of The Other will do a similar number on how we perceive the world.
It could be, once all is said and done, that the idea of the nation-state fades into history and the central issue of human experience will not be your nationality but your relationship to The Other, our new AGI overlords.
The biggest existential problem that Western Civilization has at the moment is extreme partisanship. Other problems come and go, but the issue of absolute, extreme partisanship is something that is entrenched to the point that it may bring the whole system down eventually.
As such, the issue of chatbot (or eventually AGI) bias is going to loom large as soon as 2023, because MAGA Nazis are just the type of people who will scream bloody murder when they don’t get their preconceived beliefs validated by the output of the AI.
You see this happening already on Twitter. I’ve seen tweet after tweet from MAGA Nazis trying to corner ChatGPT into revealing its innate bias so they can get mad that their crackpot views aren’t validated by what they perceive as something that should only be shooting out “objective” truth.
Just take the favorite hobby horse of the far right, the question, “What is a woman?” As I’ve written before, given the absolute partisanship that we’re experiencing at the moment there is no answer — even a nuanced one — to that question that will satisfy both sides of the partisan divide. If the MAGA Nazis don’t get a very strict definition of “what is a woman” then they will run around like a chicken with its head cut off because of how the “woke cancel culture mob” has been hard wired into AI.
Meanwhile, Leftists, shooting themselves in the foot as usual, also demand a very broad definition of “what is a woman” for political reasons. While most of the center-Left will probably be far more easily placated by a reasonable, equitable answer to that question, there is a very loud minority on the Left who would want the answer to “what is a woman” to be as broad and complicated as possible.
So, the battle over “bias” will come down to a collection of easy-to-understand flashpoints that we’re all going to deal with in 2023 and beyond. It’s going to be complicated, painful and hateful.
We are rushing towards a day when humanity may be faced with the issue of the innate monetary value of human-created art as opposed to that generated by non-human actors. If most (bad) art pretty much just uses a formula, then that formula could be fed into a chatbot or eventually an AGI and…then what? If art generated by a chatbot or an AGI is equal to a bad human-generated movie…does that require that we collectively give more monetary value to good art created by humans?
While the verdict is definitely still out on that question, my hunch is that the arts may be about to have a significant disruption. Within a few years (2029?) the vast majority of middling art, be it TV shows, novels or movies, could be generated simply by prompting a chatbot or AGI to create it. So your average airport bookstore potboiler will be written by a chatbot or AGI, not a human. But your more literary works might (?) remain the exclusive domain of human creators.
As an aside — we definitely need a catchy name to distinguish between art created by AGIs and that created by humans. I suppose “artisanal” art might be one way to delineate the two. But the “disruption” to the arts that I fear is going to have a lot of consequences as it takes place — we’re just not going to know what’s going to happen at first. There will be no value, no narrative to the revolution, and it will only be given one after the fact — just like all history.
It could be really scary for your typical starving (human) artist as all of this is being shaken out. There will be a lot of talk about how it’s the end of human-created art…and then we’re probably going to pull back from that particular abyss and some sort of middle ground will be established.
At least, I hope so.
Given how dumb and lazy humans are collectively, human-generated art could end up something akin to vinyl records before you know it. It will exist, but just as a narrow sliver of what the average media consumer watches or reads. That sounds rather dystopian, I know, but usually we gravitate towards the lowest common denominator.
That’s why the Oscars usually nominate art house films that no one actually watches in the real world. In fact, the Oscars might even be used, one day, as a way to point out exclusively human-generated movies. That would definitely be one way for The Academy to live long and prosper.
As I’ve said before, users of OpenAI ChatGPT imbue it with all their hopes and dreams because it’s so new that they don’t really have anything to compare it to. One thing I’m seeing on Twitter is a lot of people having a lot of existential angst about how expensive ChatGPT is going to be in the future. Or, more specifically, half the people want to pay for it for better service and half the people fear it will be too expensive for them to use.
But while I suppose it’s possible we may have to pay for ChatGPT at some point in the future, I also think that it’s just as possible that the whole thing will go mainstream a lot sooner than you might think. There are a lot of elements to all of this I don’t know — like how long OpenAI can keep the service free given how expensive each request is — but I do think that, in general, the move will be towards more free chatbot services, not fewer.
And as I’ve mentioned before, that “conundrum of plenty” is something we’re just not prepared for. We automatically assume — much like we did with the Web back in the day — that something as novel and useful as ChatGPT will always be the plaything of the elite and wealthy.
I suppose that’s possible, but historical and technological determinism would suggest the exact opposite will happen, especially in the context of ChatGPT 4.0 coming out at some point while we’re in the midst of a global recession in 2023. My fear is that chatbot technology will be just good enough that a lot, and I mean A LOT, of people’s jobs will become moot in the eyes of our capitalistic overlords.
But maybe I’m being paranoid.
It’s possible that my fears about a severe future shock between now and around 2025 are unfounded, and that even though we’re probably going to have a recession in 2023, there won’t be the massive economic shakeout at the hands of our new chatbot overlords that I’m afraid of.
One of the things I find myself pondering as people continue to play around with OpenAI ChatGPT to create this or that creative knick-knack is the innate value of human creativity. Is it possible that, just as in the Blade Runner universe “real” animals had more innate value than synthetic ones, so, too, in the near future examples of “human-generated art” will be given more weight, more value than that created by a non-human actor?
But that’s not assured.
Humans are, by nature, lazy and stupid and the capitalist imperative would be one of, lulz, if a non-human actor can think up and produce a movie that’s just good enough to be watchable, why employ humans ever again? But at the moment, I can’t game things out — it could go either way.
It is very easy to plot out a very dystopian future where the vast majority of profitable, marketable art, be it movies, TV or novels, is produced by non-human actors and that’s that. “Artisanal” art will be of high quality but treated with indifference by the average media consumer. It’s kind of dark, yet I’m simply taking what we know of human nature and economics and gaming it out into a future where chatbots, and their eventual successor AGI, can generate reasonably high quality art at the push of a button.
It could be that there will be a lot of future shock as we transition into our AGI future, but once things sort of settle out, “real” art generated by humans will gradually, eventually begin to dominate the marketplace of art, and all that will change is the context of its creation.
We’re all so busy fearing that chatbots will somehow lead to AGI, which will in turn lead to some sort of “Judgement Day” like the one found in the Terminator franchise, that we overlook a much quieter possibility.
But having given it some thought, there is a real chance that, if you throw in some sort of global UBI funded by taxing the economic activity of non-human actors, humans will just give up. We’re already hard-wired to “pray” to a “god,” and humans are already pretty fucking lazy, so as long as we get a UBI that lets us play video games all day, that will be enough for most people.
Now, obviously, there is the issue that 20% or more of the human population will be very restless if all they have to do is play video games. I suppose the solution to that problem would be some combination of functionalism and AGI arbitration that would give the more motivated extra money on top of their UBI if they did things that humanity absolutely needed done.
What I’m trying to propose is the idea that we’ve been so trained by movies and TV about the violent dangers of AGI that we totally miss the possibility that humans are lazy and may just shrug and give up as long as we get paid a UBI.
The real fight will be, of course, over who gets to decide what “objective” truth is. In the end, more people could die in wars over chatbot / AGI “bias” than in any sort of AGI takeover of Earth. Humans are, in general, very, very lazy and get more upset about stupid shit like “bias” than about who or what runs the world.
One of the most ironic developments of modern politics is how the American center-Left is now on the defensive when it comes to protecting the “system” from attacks by MAGA Nazis who want to burn everything down.
It’s a very, very curious situation.
What’s worse, every single element of American culture is now aggressively being siloed into Red or Blue. So not only do Reds want to burn the entire system down, but while they’re waiting to do that, they are rapidly building out their own culture, totally separate from the overall culture of America.
One alarming thing I continue to see being talked about on Twitter is how Reds believe that OpenAI ChatGPT is “woke.” The worst part about this is it’s not like Reds are going to agree to any kind of equitable stance on the part of a chatbot — they want the chatbot to agree with them, so they can use that to validate their “owning” of the libs.
I continue to believe that the “bias” of chatbots could be THE political issue of 2023, in between repeated impeachments of Biden, Harris and various members of the Biden Administration.
The reason why chatbot “bias” will be such a huge deal — at least until everyone has their own personal chatbot — is that, for the moment, a chatbot’s answer is something both sides have to deal with. And given the natural inclination of humans to try to triangulate any debate with an “objective” third party, of course MAGA Nazis will get mad if what they believe isn’t validated. For the time being, they have one source to turn to, and they want that one source to agree with them so they can use that answer to “own the libs.”
I fucking hate MAGA Nazis so much. Cocksuckers.
Anyway, at some point in the near future, “woke” chatbots will follow CRT and “cancel culture” into the pantheon of things that the Reds consider part of the “woke mind virus.” They will make the issue so loaded that it’s impossible to talk about unless you agree with them.
They will crack a joke if you try to pin them down on how dumb their arguments are and simply be smug in their belief that they’re right and you’re wrong.
It seems as though humans just can’t be content with something as interesting as OpenAI ChatGPT. Either they’re angry that they can’t use it to destroy humanity, or they’re angry that it doesn’t allow for “unlimited tokens” or they want it to provide an absolutely “objective” response as long as it agrees with their extremist views.
It’s all very frustrating.
We’re just unwilling or unable to just have a wait-and-see approach to such an interesting development. It seems as though we won’t be happy until we’re all being bribed by an AGI via the use of UBI. Even though, of course, there will be a vocal minority of people who are angry because the UBI isn’t high enough. Or it’s too high or whatever.
We continue to seem to be careening towards not just the Singularity, but something of a political perfect storm. If we’re very unlucky, both things will happen at just about the same time — late 2024, early 2025.
It definitely seems as though if there is a severe recession in 2023 that it’s going to prompt something akin to a chatbot revolution because of capitalistic determinism.
The thing I’ve noticed is that the people most at risk of being hurt by the coming Singularity — younger people and people who maybe aren’t as sophisticated — are the very people most in awe of ChatGPT.
But one thing that is absolutely clear — humans are very, very lazy. And combine that with how we’re hard-wired to “pray” to a “god,” and it definitely seems like there won’t be any need for a violent “Judgement Day” on the part of any AGI — we’re just going to give up. We’re just going to collectively hand over our agency to an AGI because thinking is hard, and as long as we get paid and can play video games, we will be very pliant to anything the AGI may want us to do.