The thing about the tragic early demise of Steve Jobs is that we never got a “real” smart TV from Apple, and of all the people who could convince millions to wear VR / AR goggles in public, he was it. So, finally, Apple apparently may be on the cusp of releasing its own AR / VR goggles.
All things being equal, Apple should begin working on transitioning its entire user base to such goggles instead of an iPhone combined with a laptop / desktop. Everything SHOULD go through the goggles to the point that your entire life would revolve around them, especially the AR part of it all.
And, yet.
People still just aren’t prepared to walk around in public wearing AR / VR goggles because they fear they’d look like idiots. But, again, the only company I can imagine that might be able to pull such a feat off is Apple. They have a sense of style that all the other computer companies lack.
What’s even more interesting is we could one day soon see not just AR / VR goggles…but AI-powered AR / VR goggles. Now THAT would be pretty cool, I have to say. Convincing millions of people to wear such goggles would be the basis of a few trillion-dollar service industries.
The thing I don’t think we’re thinking enough about is how we’re careening towards a future where there pretty much isn’t anything for humanity to do other than smoke a bowl while playing video games.
Should we get anywhere near Artificial General Intelligence, there simply may not be any tasks left for humans to do. And given how angry, unhappy and bitter humans become when they don’t have work to distract them, this could make the entire world very, very unstable.
The whole notion of “prompt engineer” is extremely short-sighted. It’s the type of job we think will exist to make ourselves feel better. But if we reach a “Her”-like future…why would there be a need for prompt engineers? Your AGI digital assistant would know you so well that it might even be able to preemptively answer your question before you ask it.
So, it seems to me that it’s at least possible that the real danger of AGI isn’t AGI, it’s restless humans. And it could be that any attempt to regulate AGI is moot because if America doesn’t let AGI do this or that thing, some other country will, so we’ll feel competitive pressure not to regulate.
It’s very possible that we may see the rise of some sort of Neo-Luddite movement that…grows violent in some way. Implementing a UBI would only go so far. Human nature is such that for every one person who writes the Great American Novel with all the time a UBI affords, there will be a hundred Type-A people who will want to burn everything to the ground because they can’t make $1 billion.
Anyway, the point is — we have to take some of the darker possibilities of the AGI revolution more seriously.
In the end, I think all my dreams of someone cherry picking the best bits of the Usenet UX to design a “Twitter Killer” said more about my dissipated youth than anything else. No one was ever going to listen to me and the only way it was ever going to become reality was if I learned to code and showed people my vision in a practical manner.
As it is, lulz.
So, in a sense, it was all a huge waste of time. And, yet, I also think the same foolish and obsessive element of my personality that led me to rant about my dream of bringing back Usenet in some form has helped me when it comes to working on a novel.
There is that, I guess.
Anyway, I only even mention it again because someone from California did a Google search that led them to some of my writings about the Usenet UX. I have no idea who they were or their motives, but it reminded me of what we lost in social media UX over the last 30 years.
The funny thing about it all is, of course, that we’re zooming towards a whole different era in technology based around the metaverse and AI (AGI?). So, yeah. I need to stop dwelling on Usenet and throw myself into working on my first novel before even novel writing has been co-opted by the ravenous chatbot revolution.
Where to begin with this one. So, rather than hashing out the important implications of the looming chatbot revolution like the potential need for a UBI, here we are fighting over how a chatbot wouldn’t use a racial slur to save millions of people.
For a group of people who grow ever-so-offended at the suggestion that they’re fucking racists, MAGA people sure do think up reasons to use racial slurs out of the blue.
This is so monumentally dumb and so much one of those things that is at the nexus of Incel-MAGA-Pothead culture that the fact that I feel forced to address it makes the whole issue even more grating on the nerves.
The reason why this is so dangerous is it’s simple for idiot MAGA people to process. Remember, for four years the president of the United States ranted about people not saying “Merry Christmas.” So, as such, it’s easy to imagine Trump ranting about how if only chatbots weren’t biased (relative to racist, misogynistic MAGA cocksuckers) there would be peace on earth and America would never again be threatened by a Chinese balloon.
And something something Hunter Biden’s laptop and/or peen.
The chatter about “woke chatbot bias” is growing at an alarming rate on Twitter. So, logically, even ding-dong Trump is going to eventually pick up the idea of “ending chatbot bias” as a political issue in the 2024 election. The more cultural weight we give chatbots, the more the calls for “regulation” to “end chatbot bias” will grow.
The question, of course, is the eternal “who watches the watchers.” If the very idea of objective truth doesn’t exist in the minds of MAGA, does that mean the only way you can produce a chatbot that doesn’t have a “bias” is if it is biased in favor of MAGA?
That particular question answers itself, I’m afraid.
So, if we don’t have a civil war starting in late 2024, early 2025, we’re probably going to be an autocracy and, as such, whenever we use a chatbot we’re going to have to wade through it saying the n-word for a few minutes before we get the answer to the question of “why is the sky blue?”
When I was a very young man, it occurred to me that we might create our own aliens should AI (AGI) ever come into being. Now, many years later, I find myself dwelling upon the same thing, only this time in the context of the historical significance of the coming chatbot (and eventually, potentially, the AGI) revolution.
If we create “The Other” — the first time Humans would have to deal with such a thing since the Neanderthals — what would be the historical implications of that? Not only what would be the historical equivalent of creating The Other, but what can history tell us about what we might expect once it happens?
Well, let’s suppose that the creation of The Other will be equal to splitting the atom. If we’re about to leave the Atomic Age for the AGI Age, then…what does that mean? If you look at what happened when we first split the atom, there were a lot, and I mean A LOT, of harebrained ideas as to how to use nuclear power. We did a lot of dumb things and we had a lot of dumb ideas about essentially using A-bombs on the battlefield or to blow shit up as need be for peaceful purposes.
Now, before we go any further, remember that things would be moving much, much faster with AGI than they did with splitting the atom. So, as such, a lot of high-paying jobs might just vanish virtually overnight, with some pretty massive economic and political implications. And, remember, we’re probably going to have a recession in 2023, and if ChatGPT 4.0 is as good as people are saying, it might be just good enough that our plutocratic overlords will decide to use it to eliminate whole categories of jobs just because they would rather cut jobs that pay human beings a living wage.
If history is any guide, after much turmoil, a new equilibrium will be established, one that seems very different from what has gone before. Just like how splitting the atom made the idea of WW3 seem both ominous and quaint, maybe our creation of The Other will do a similar number on how we perceive the world.
It could be, once all is said and done, that the idea of the nation-state fades into history and the central issue of human experience will not be your nationality but your relationship to The Other, our new AGI overlords.
We are rushing towards a day when humanity may be faced with the issue of the innate monetary value of human-created art as opposed to that generated by non-human actors. If most (bad) art pretty much just uses a formula, then that formula could be fed into a chatbot or eventually an AGI and…then what? If art generated by a chatbot or an AI is equal to a bad human-generated movie…does that require that we collectively give more monetary value to good art created by humans?
While the verdict is definitely still out on that question, my hunch is that the arts may be about to have a significant disruption. Within a few years (2029?) the vast majority of middling art, be it TV shows, novels or movies, could be generated simply by prompting a chatbot or AGI to create it. So, your average airport bookstore potboiler will be written by a chatbot or AGI, not a human. But your more literary works might (?) remain the exclusive domain of human creators.
As an aside — we definitely need a catchy name to distinguish between art created by AGIs and that created by humans. I suppose “artisanal” art might be one way to delineate the two. But the “disruption” to the arts that I fear is going to have a lot of consequences as it’s taking place — we’re just not going to know what’s going to happen at first. There will be no value, no narrative to the revolution, and it will only be given one after the fact — just like all history.
It could be really scary for your typical starving (human) artist as all of this shakes out. There will be a lot of talk about how it’s the end of human-created art…and then we’re probably going to pull back from that particular abyss and some sort of middle ground will be established.
At least, I hope so.
Given how dumb and lazy humans are collectively, human-generated art could end up something akin to vinyl records before you know it. It will exist, but just as a narrow sliver of what the average media consumer watches or reads. That sounds rather dystopian, I know, but usually we gravitate towards the lowest common denominator.
That’s why the Oscars usually nominate art house films that no one actually watches in the real world. In fact, the Oscars might even be used, one day, as a way to point out exclusively human-generated movies. That would definitely be one way for The Academy to live long and prosper.
It seems as though humans just can’t be content with something as interesting as OpenAI ChatGPT. Either they’re angry that they can’t use it to destroy humanity, or they’re angry that it doesn’t allow for “unlimited tokens” or they want it to provide an absolutely “objective” response as long as it agrees with their extremist views.
It’s all very frustrating.
We’re just unwilling or unable to take a wait-and-see approach to such an interesting development. It seems as though we won’t be happy until we’re all being bribed by an AGI via the use of UBI. Even though, of course, there will be a vocal minority of people who are angry because the UBI isn’t high enough. Or it’s too high or whatever.
We seem to be careening towards not just the Singularity, but something of a political perfect storm. If we’re very unlucky, both things will happen at just about the same time — late 2024, early 2025.
It definitely seems as though a severe recession in 2023 would prompt something akin to a chatbot revolution because of capitalistic determinism.
The thing I’ve noticed is that the people most at risk of being hurt by the coming Singularity — younger people and people who maybe aren’t as sophisticated — are the very people most in awe of ChatGPT.
But one thing that is absolutely clear — humans are very, very lazy. And combine that with how we’re hard-wired to “pray” to a “god” and it definitely seems like there won’t be any need for a violent “Judgement Day” on the part of any AGI — we’re just going to give up. We’re just going to collectively hand over our agency to an AGI because thinking is hard and, as long as we get paid and can play video games, we will be very pliant to anything the AGI may want us to do.
From what I can tell from Twitter and from people using OpenAI ChatGPT to take their finals for them — we are all very fucked. People — and especially young people — are very, very lazy. They somehow think that by not putting the work in and using a chatbot to pass a test they are somehow gaming the system when, in fact, they’re just hurting themselves.
This is probably the closest we’ve ever come to the Singularity. And it’s only going to get worse. We’re probably going to be awash in chatbots soon enough and, what’s worse, we may be well on our way to Artificial General Intelligence by as soon as the end of the decade.
So, either we’re going to have to restructure significant parts of our society to address how fucking lazy people are, or we’re going to find that by default our entire civilization revolves around asking a better question of a chatbot or AGI.
We are just not prepared for what may be about to happen.
The more I look at Twitter as we experience something akin to a mini-Singularity associated with OpenAI ChatGPT, the more alarmed I find myself becoming. Not with the idea that there might be some sort of Terminator-like “Judgement Day,” but, rather, the exact opposite.
There will be no need for our AGI overlord to blow the world up — humans will be too busy giving up what little agency they have to the decision-making ability of an AGI. Even with the faux-AGI of a really smart chatbot, I see people pretty much just turning everything over to it.
Students no longer want to study. Adults think they can just have the chatbot do their work while they stroke one out to porn. Meanwhile, worst of all, all the same partisan bullshit we’ve seen with every other element of society is now corrupting the chatbot revolution.
As such, THE political and societal issue of 2023 and beyond could very well be who gets to regulate the “bias” of chatbots (and, later, AI). When everything hinges on the “objective truth” of a chatbot because people are fucking lazy and refuse to talk to each other because of politics, then the whole woke vs. unwoke debate becomes white hot.
In the Second Trump Administration, I could see there being some sort of FCC-like agency designed specifically to “purge” AI of any “woke” bias because children no longer learn anything but, instead, are trained on how to ask better questions of chatbots.
MAGA Nazis will scream bloody murder if a chatbot doesn’t give them the answer to “what is a woman” that they expect. They will say that chatbots are nothing more than CRT shills infected with the “mind virus” of the “woke cancel culture mob.”
I’m not exaggerating. That’s exactly what is going to happen — you see it already on Twitter with the usual fuckwit MAGA Nazi “thought leaders” whining that they can’t get their hate validated by a chatbot.
Of course, there is the even darker scenario where the United States splits into two nations, one Red, one Blue, and while Blue America is enjoying the fruits of an unfettered Singularity, Trumplandia will use chatbots to atomize Red States into a techno-autocratic state. Good times!
I think some of all of this comes from how the human mind is hard-wired to believe in a god. So, people, presented with something like a chatbot, fall into the trap of thinking they can “pray” questions to the chatbot and get some sort of “objective revealed truth” that they can spread to the world. All you need is a burning bush and some stone tablets and it’s a story as old as history.
In short — we’re fucked. Humans are just too lazy to put up much of a fight with chatbots or AGI. It will be interesting to see how all of this ultimately shakes out.
The argument can legitimately be made that there may come a point, in the not-too-distant future, when software development will be reduced to simply asking a chatbot a well-crafted question.
Now, I say this because it definitely seems as though the cat is out of the bag when it comes to the potential of chatbots. Now that The Powers That Be are aware of what chatbots can do, the natural inclination of capitalism is to replace most programmers with a chatbot.
This won’t happen overnight — if ever — but it is a risk. It’s easy to imagine the software design industry being among the first industries to become moot because of chatbots.
What would be the consequence of this?
It’s possible that a lot of young, wealthy men will suddenly lose their jobs. That, in turn, could cause something akin to neo-Luddism. If nothing else, we’re in for a very bumpy few years as we figure out what we’re going to do as more and more human tasks are taken over by non-human actors.
When it becomes clear that chatbots could be just as big a cultural and economic revolution as the Internet, all bets are off as to what happens next. Buckle up.