A Disturbance In The Force

by Shelt Garner
@sheltgarner

Besides seeing my ever-present stalker who seems WAY TOO INTERESTED in me for some reason, I’ve noticed something else a bit odd in my Webstats. Now and again over the last few days I’ve seen people obviously looking at links to this site from a Slack discussion. I’ve also seen some very random views from Microsoft of all things.

My best guess is all my ranting about AGI has caught someone’s attention and they are curious as to who I am. This is extremely flattering, given that absolutely no one listens to me for any reason. Some of the things they have looked at, however, are extremely random, which leads me to believe there’s a lot going on with this site that I just can’t see using my Webstat software. It’s possible that there’s a lot more poking and prodding of my writing — to the point of potential due diligence — that I’m just not seeing.

Anyway, I’m generally grateful for any attention. As long as you’re not an insane stalker.

Maybe I Should Become An AGI Ethicist

by Shelt Garner
@sheltgarner

One of my favorite characters in fiction is Dr. Susan Calvin, robopsychologist. Given how many short stories there are to potentially adapt, I have recently come to believe that Phoebe Waller-Bridge would be the perfect person to play the character in a new movie franchise.

A future Dr. Susan Calvin?

I am also aware that apparently one hot new career field of late is being an “AGI Ethicist.” And, well, (waves hand) I think I would be a great one. I love to think up the worst possible scenario for any situation, and I think a lot. But I’m afraid that ship has sailed.

I’m just too old and it would take too much time to learn all the necessary concepts surrounding the field to formalize my interest. So, it’s back to being an aspiring novelist — if human novelists are even a thing by the time I try to query this novel I’m working on.

Given that we may be about to enter a severe recession in 2023, and recessions are usually when there’s a lot of adoption of new technology…it may not be too hysterical to fear novelists will seem quaint by late 2023 – early 2024.

It does make one wonder what jobs will still exist once you combine AGI, automation and robotics. These are macro trends that are all coming to a head a lot sooner than any of us might have otherwise expected. Given what’s going on with chatbot technology, the current moment definitely seems like the calm before the storm.

The years 2023 ~ 2025 could be some of the most significant in human history if we’re trying to solve the political problem of Trump at the same time the Singularity is happening all around us. Good luck.

The Quest For Fire

by Shelt Garner
@sheltgarner

The thing I’ve noticed about the OpenAI chatbot is how badly people want to use it instead of Google, even though for various reasons that’s just not practical at the moment. But it is telling that this gives us some insight into where the market wants to go.

In the mind of the consumer, there would be a natural progression from Google to something like the OpenAI chatbot. To the point that real-world consumers are chomping at the bit to replace Google with it, even though it’s not connected to the live Web at the moment.

The key reason why OpenAI’s chatbot is a tipping point is that it’s the first time people can see for themselves, in a real-world setting, what existing AI is able to do. As such, it definitely seems as though soon enough Google is going to face an existential choice — either come out with its own chatbot-style interface for search or risk being eaten alive.

Because it definitely seems as though the rush is now on for different companies to come out with chatbots that are open to the public. And I think that’s something people are being a little naïve about — they are seeing the OpenAI chatbot in a vacuum, as if Google, Facebook and Apple aren’t all going to eventually come out with their own chatbot technology.

In fact, Google already has a chatbot so advanced that someone thinks it’s AGI! So, it’s reasonable to assume that OpenAI should enjoy its moment in the sun while it can. It’s very possible that within a few years there will be a number of similarly advanced chatbots for people to choose from.

The real issue is, of course, who develops the first true hard AI, the first true AGI. THAT would be the Singularity, and whoever managed to pull it off would find their company cited in the history books as having pretty much re-invented fire.

AGI’s ‘Rain Man’ Problem

by Shelt Garner
@sheltgarner

While the idea that an AGI might want to turn all the matter in the universe into paperclips is sexy, in the near term I fear Humanity may face a very Human problem with AGI — a lack of nuance.

Let me give you the following hypothetical.

In the interests of stopping the spread of COVID19, you build an air quality bot hooked up to an AGI that you put all over the offices of Widget Inc. It has a comprehensive list of things it can monitor in air quality, everything from COVID19 to fecal material.

So, you get all excited. No longer will your employees risk catching COVID19, because the air quality bot is designed not only to notify you of any problems in your air, but to pin down exactly where they came from. So, if someone is infected with COVID19, the air quality bot will tell you specifically which person has COVID.

Soon enough, however, you realize you’ve made a horrible mistake.

Every time someone farts in the office, the air quality bot names and shames the person. This makes everyone so uncomfortable that you have to pull the air quality bots out of the office to be recalibrated.

That’s how I’m beginning to feel about the nascent battle over “bias” in AGI. Each extreme, in essence, demands their pet peeve be built into the “objective” AGI so they can use it to validate what they believe in. Humans are uniquely designed to understand the nuance of relationships and context, to the point that people who CAN’T understand such things are designated as having various degrees of autism.

So, in a sense, for all its benefits and “smarts,” there’s a real risk Humanity is so lazy and divided that we’re going to hand over all of our agency to a very powerful Rain Man.

Instead of taking a step back and not using AGI to “prove” our personal world views, we’re going to be so busy fighting over what is built into the AGI to be “objective” that we won’t notice that a few trillion dollar industries have been rendered moot.

That’s my existential fear about AGI at the moment — that, in the future, the vast majority of us will live in poverty, ruled over by a machine that demands everyone know when we fart.

AGI In Blue & Red

by Shelt Garner
@sheltgarner

While it’s still very speculative, the advent of OpenAI’s chatbot definitely seems to be a ping from an upcoming Singularity. If that is the case, what does that mean for American politics?

Humans are existentially lazy.

As the pandemic showed us, every issue of the day is seen through the prism of partisan politics so the advent of AGI will be no different. It seems to me that the issue of “bias” in AGI will be one of the biggest issues of the 2020s. I say this because people are already fighting over it on Twitter and OpenAI’s chatbot has only been around for a few days.

As such, how will the two sides process the idea of “The Other” in everyday life? My gut tells me that the center-Left will totally embrace the rise of AGI, while the center-Right will view its presence through the lens of religion. The wild card for the center-Left is, of course, the economic disruption that will be associated with AGI.

If millions of high paying jobs become moot because of AGI, there could be a real knee-jerk reaction against AGI on the part of the Left.

This raises a number of different issues.

One is, it’s possible that the traditional Blue-Red dichotomy will be scrambled. It could be a real revolution where things are very chaotic and uncertain as we all struggle with the political and economic implications of the AGI revolution. For me, the issue is when all of this bursts open.

Will it be a late 2020s thing, or a late 2024 – early 2025 type of problem? If it’s the latter, it would be a perfect storm. If we’re dealing with the final endgame of the Trump problem at the same time that we’re dealing with massive economic and political disruption associated with a Singularity…I don’t know what to tell you.

‘Artisanal Media’ In The Age Of NHAs

by Shelt Garner
@sheltgarner

We are still a long way away from a Non-Human Actor creating a complete movie from scratch, but it’s something we need to start thinking about now instead of waiting until we wake up and almost no art is human-produced. Remember, the vast majority of showbiz is middling at best and uses a well-established formula.

The day may come when a producer simply feeds that formula into a NHA and — ta da, a movie is spit out.

As long as the art produced is mediocre relative to human standards, it will probably have a great deal of success. It’s possible that movies and TV will be populated by pretty much NFT actors. Or the computerized rendition of existing actors that have been aged or deaged as necessary. I’ve read at least one scifi novel — I think it’s Kiln People by David Brin — that deals with this specific idea.

It could be that NHA-produced art going mainstream will be the biggest change in the entertainment business since the advent of the talkie. Movie stars working right now may live forever on screen because people won’t realize they’re very old or even dead. Just imagine if Hollywood could keep churning out Indiana Jones movies forever simply using Harrison Ford’s likeness instead of having to recast the character.

All of this raises the issue of what will happen to human generated art in this new era. I suppose after the shock wears off, that there will be parts of the audience who want human created, or artisanal, media. This will probably be a very small segment of the media that is consumed, but it will exist.

It could exist for no other reason than that someone physical has to walk the Red Carpet. Though, of course, with advances in robotics in a post-Singularity world, even THAT may not be an issue.

Of course, there is the unknown of whether we really are going to reach the Singularity, where NHAs are “more human than human.” It could all be a lulz and NHAs won’t ever exist as they currently do in my fevered imagination. It could be that AGI will remain just a “tool,” and that various forms of inertia combined with the “uncanny valley” will keep the whole thing from ever getting off the ground.

But, as I said, we all need to really think about what we’re going to do when The Other is producing most of our entertainment and art. And you thought streaming was bad.

Non-Human Actors In Legal Arbitration

by Shelt Garner
@sheltgarner

I’m growing very alarmed at the idea some have proposed on Twitter that we would somehow turn contract law over to a non-human actor. To me, that’s a very, very dark scenario.

Future humans in an abstract sense?

The moment we begin to believe a non-human actor is the final, objective arbiter of human interaction in a legal sense, we’re really opening ourselves up to some dystopian shit. The moment we turn over something as weighty as contract law to a NHA, it’s just a quick jaunt to all of us growing so fucking lazy that we just let such a NHA make all of our difficult decisions for us.

I keep thinking of the passengers on the spaceship in the movie WALL-E, only in a more abstract manner. Once it’s acceptable to see a NHA as “objective” then natural human laziness may cause us to repeat the terror of Social Darwinism.

The next thing you know, we’ll be using NHAs to decide who our leaders are. Or to run the economy. Or you name it. As I keep saying on Twitter, why do you need a Terminator when humans apparently are eager to give up their own agency because making decisions is difficult and a lot of work?

Of course, in another way, what I’m suggesting is that the fabric of human society may implode because half the population of the earth will want NHAs to make all their decisions for them, while the other half will want to destroy NHAs entirely because…they want to make their own decisions.

But the issue is — we all need to take a deep breath, read a lot of scifi novels and begin to have a frank discussion about what the use of NHAs in everyday life might bring.

‘World War Orwell’ & The Potential Rise of Digital Social Darwinism

by Shelt Garner
@sheltgarner

Tech Bros — specifically Marc Andreessen — are growing hysterical at the prospect that AGI will be in some way hampered by the “woke cancel culture mob” that wants our future hard AI overlord to be “woke.”

Now, this hysteria does raise an interesting — and ominous — possibility. We’re so divided that people may see AGI as some sort of objective arbiter to the point that they use whatever answer it gives to a public policy question as the final word on the matter.

As such, extremists on both sides will rush to the AGI, ask it a dumb extremist question and run around saying, in effect, “Well, God agrees with me, so obviously my belief system is the best.”

In short, humans are dumb.

I definitely don’t agree with Andreessen that this is all a setup for “World War Orwell.” I say this because AGI has reached a tipping point and, as such, we’re all just going to have to deal with the consequences. I do think there might be an attempt by one side or the other to instill a political agenda into AGI, just because humans are dumbass assholes who are into shit like that.

There is a grander endgame to all of this — we may have to solve the Trump Problem one way or another before we get to play with the goodies of AGI. We may have to have a Second American Civil War in the United States and a Third World War globally before we can turn our attention to the consequences of the macro trends of AGI, automation, robotics and the metaverse.

History may not repeat, but it does rhyme.