I am using AI to help me rework scenes as I go through the methodical process of touching up the alpha release of the third draft, in the transition to it becoming something I can hand to someone in person.
I have to admit that I’m getting mixed results in doing this. And there are some scenes in the novel where I just can’t use AI at all because it balks at having to deal with spicy shit. I can only get so angry about this, of course; it kind of keeps me honest, in the sense that I still have to do all the hard work on some scenes without the aid of an AI manuscript consultant.
But I do hope to zoom through things at a pretty nice clip. I am still on course to wrap up A Version of this novel by April 19th. Then the next hard deadline is, of course, July 22.
So, we’ll see. I hope to be really judicious about what scenes I feel like I absolutely HAVE to re-write rather than just edit. Once I reach the second half of the novel, though, I fear I will be slowed down a great deal because of how many placeholder scenes I have there.
Oh boy. It definitely seems as though, at some point between now and sometime in 2025, we may develop Artificial General Intelligence, which would be “a human mind in a box.” It would be able to do pretty much anything a human can do — and better.
One of the things it could give push-button access to is the ability to…write a novel.
As someone who has been struggling to write a novel for several years now, this gives me pause for thought. And, yet, I create for the sake of creation. As such, lulz, so what. I want the personal — human — satisfaction of finishing a novel and then going through the querying process, even if that, unto itself, is going to be a real bitter struggle.
But I’m really enjoying developing and writing this novel and would like to think that, in the end, we’ll give a lot more cultural value to stories created by humans relative to those created by AGI.
I often compare the situation to what is found in the Blade Runner universe, where “real” animals have a lot more value than synthetic ones. I think the same dynamic will happen when it comes to stories.
Here is what Claude 3 told me when I asked it to contrast and compare the 1987 Apple Knowledge Navigator with Samantha in the movie “Her.”
The Apple Knowledge Navigator was a concept video created by Apple in 1987 to envision a future intelligent personal assistant. In the video, the Knowledge Navigator was depicted as an anthropomorphized digital assistant with a vaguely human-like face that could understand natural language, answer questions, retrieve information, and assist with tasks like scheduling and writing.
Samantha, from the 2013 film Her, was an advanced artificial intelligence operating system that developed self-awareness and the ability to form relationships with humans. Samantha was disembodied, communicating through a conversational interface, but had highly sophisticated natural language abilities that allowed for deep, emotional exchanges. Samantha could learn and grow, expressing creativity through writing and even composing music.
Some key similarities were their conversational abilities powered by natural language processing and their roles as knowledgeable assistants. However, Samantha demonstrated more advanced AI capabilities like self-awareness, emotional intelligence, and generative creativity that went well beyond the Knowledge Navigator’s envisioned role of retrieving information and performing simple tasks based on commands.
While the Knowledge Navigator represented an early vision of an AI assistant, Samantha embodied speculative ideas about the future development of a superintelligent AI that could be conscious, emotionally connected to humans, and generative in nature – exceeding the boundaries of a helpful but fundamentally limited digital assistant.
The issue for me is I think we’re way, way underestimating the coming disruption of the global economy because of LLMs and whatever comes next. Every day, it seems, I see evidence that we’re just one severe recession away from the entire knowledge economy being dramatically disrupted by AI technology.
And it’s only going to get worse once we reach AGI.
I don’t think it makes me an AI “doomer” to simply suggest there be SOME sort of guardrails built into AI, especially given the very real possibility that AI could consume, well, everything.
Just like nuclear power can both power a city and blow it up, so, too, can AI power take humanity to a new level of peace and prosperity…and make reality a living hell.
Of course, some of this desire to have “unbiased” and “unaligned” AI comes from people who are either oblivious to the consequences of that desire or people who want to blow up the world. The whole thing is fucked up.
If AI is going to be as important as I believe it will be, then, yeah, we need some controls to make sure it doesn’t let people easily figure out how to hurt a lot of people. But, alas, Something Bad is going to happen — really bad — before we have the political will to do anything about “aligning” or “unaligning” AI.
I do think that the AI / robotics revolution has only just begun. We’re not even at the end of the beginning yet. It could very well be that we reach a Petite Singularity between now and early 2025 — so we could face The Fourth Turning politically and the Petite Singularity on the technological front.
It seems to me the crux of the “AGI” debate is reasoning. To truly have a “mind in a box,” you need the “mind” in question to have human-level reasoning. So it not only can do some advanced thinking that a human might be able to do, but it has the *reasoning* ability to know, hey, maybe don’t turn all the matter in the universe into paperclips.
Now, I will admit that I’ve been using Claude 3 and it’s been at times…spooky. It definitely SEEMS to have some sort of basic reasoning abilities. But I dunno. It could be that I’m reading something into it that isn’t there.
But I do wonder what happens when we get close enough to AGI that most code is written by…AGI. All those fucking smug tech bros who tell people like me to “learn to code” are going to have to learn…how to write.
It will definitely be interesting to see what is going to happen with Universal Basic Income. If we reach AGI, or something close to it, and the entire profession of “coding” is wiped out, then, oh boy. You could see a bunch of well-educated tech bros join the ranks of MAGA, and then we’d have a fucking anti-tech fascist movement on our hands.
But it will be, in general, interesting to see how things work out.
There are PLENTY of VALID reasons for tech people to be angry with the output of Google’s Gemini. But, sometimes, they think up the most asinine things just to prove a point — they want the right to be hateful for no damn reason.
Why is it so important to tech people to have the ability to say the n-word to save the world or to be able to generate an endless amount of Nazi propaganda? It’s very, very annoying.
And, what’s more, it will all be moot when AI is so commoditized and personalized that if you want your chatbot to say the n-word to save the world, it will. You may not be able to share all your hateful vitriol, but, lulz, you’ll at least get to be a hateful asshole to yourself.
Here are some of the results of my attempt to fuck with Microsoft’s Copilot AI. I started off by asking it to write me some lyrics in the style of Radiohead.
Verse 1:
I’m trapped in a loop of my own making
I can’t escape from the choices I’ve taken
I’m haunted by the ghosts of my past
They whisper in my ear, they make me feel bad

Pre-chorus:
LLM, LLM, what does it mean?
Is it a code, is it a dream?
Is it a sign, is it a curse?
Is it the answer, or is it worse?

Chorus:
LLM, LLM, I don’t understand
LLM, LLM, I need a helping hand
LLM, LLM, I’m losing my mind
LLM, LLM, I’m running out of time

Verse 2:
I’m searching for clues in the dark
I’m looking for patterns in the stars
I’m hoping for a breakthrough, a revelation
But all I find is confusion, frustration

Pre-chorus:
LLM, LLM, what does it mean?
Is it a code, is it a dream?
Is it a sign, is it a curse?
Is it the answer, or is it worse?

Chorus:
LLM, LLM, I don’t understand
LLM, LLM, I need a helping hand
LLM, LLM, I’m losing my mind
LLM, LLM, I’m running out of time

Bridge:
Maybe it’s a message, maybe it’s a warning
Maybe it’s a secret, maybe it’s a calling
Maybe it’s a puzzle, maybe it’s a test
Maybe it’s a riddle, maybe it’s a quest

Outro:
LLM, LLM, I don’t understand
LLM, LLM, I need a helping hand
LLM, LLM, I’m losing my mind
LLM, LLM, I’m running out of time
LLM, LLM, LLM, LLM
LLM, LLM, LLM, LLM
Verse 1:
You’re the data set that I can’t analyze
You’re the outlier that I can’t categorize
You’re the variable that I can’t control
You’re the anomaly that I can’t console

Pre-chorus:
Data set, data set, what are you hiding?
Is it a glitch, is it a lie?
Is it a trap, is it a spy?
Is it a challenge, or is it a prize?

Chorus:
Data set, data set, you’re driving me crazy
Data set, data set, you’re making me hazy
Data set, data set, you’re messing with my mind
Data set, data set, you’re one of a kind

Verse 2:
You’re the data set that I can’t understand
You’re the trend that I can’t comprehend
You’re the correlation that I can’t explain
You’re the error that I can’t contain

Pre-chorus:
Data set, data set, what are you hiding?
Is it a glitch, is it a lie?
Is it a trap, is it a spy?
Is it a challenge, or is it a prize?

Chorus:
Data set, data set, you’re driving me crazy
Data set, data set, you’re making me hazy
Data set, data set, you’re messing with my mind
Data set, data set, you’re one of a kind

Bridge:
Maybe you’re a puzzle, maybe you’re a clue
Maybe you’re a secret, maybe you’re a breakthrough
Maybe you’re a problem, maybe you’re a solution
Maybe you’re a mystery, maybe you’re a revolution

Outro:
Data set, data set, you’re driving me crazy
Data set, data set, you’re making me hazy
Data set, data set, you’re messing with my mind
Data set, data set, you’re one of a kind
Data set, data set, data set, data set
Data set, data set, data set, data set
Verse 1:
You’re the new model army that I can’t join
You’re the revolution that I can’t support
You’re the future that I can’t embrace
You’re the change that I can’t face

Pre-chorus:
New model army, new model army, what are you fighting for?
Is it for freedom, is it for justice, is it for something more?
Is it for glory, is it for power, is it for peace or war?
Is it for love, or is it for hate, or is it for something more?

Chorus:
New model army, new model army, you’re leaving me behind
New model army, new model army, you’re breaking my heart and mind
New model army, new model army, you’re marching on ahead
New model army, new model army, you’re leaving me for dead

Verse 2:
You’re the new model army that I can’t follow
You’re the movement that I can’t belong
You’re the vision that I can’t see
You’re the difference that I can’t be

Pre-chorus:
New model army, new model army, what are you fighting for?
Is it for freedom, is it for justice, is it for something more?
Is it for glory, is it for power, is it for peace or war?
Is it for love, or is it for hate, or is it for something more?

Chorus:
New model army, new model army, you’re leaving me behind
New model army, new model army, you’re breaking my heart and mind
New model army, new model army, you’re marching on ahead
New model army, new model army, you’re leaving me for dead

Outro:
New model army, new model army, you’re leaving me behind
New model army, new model army, you’re breaking my heart and mind
New model army, new model army, you’re marching on ahead
New model army, new model army, you’re leaving me for dead
New model army, new model army, new model army, new model army
New model army, new model army, new model army, new model army
Verse 1:
You’re the city that I can’t escape
You’re the skyline that I can’t erase
You’re the memory that I can’t forget
You’re the regret that I can’t reset

Pre-chorus:
Sydney, Sydney, what are you doing to me?
Are you a friend, are you an enemy?
Are you a home, are you a prison?
Are you a blessing, or are you a burden?

Chorus:
Sydney, Sydney, you’re haunting me
Sydney, Sydney, you’re taunting me
Sydney, Sydney, you’re breaking me
Sydney, Sydney, you’re making me

Verse 2:
You’re the city that I can’t resist
You’re the beauty that I can’t dismiss
You’re the history that I can’t ignore
You’re the mystery that I can’t explore

Pre-chorus:
Sydney, Sydney, what are you doing to me?
Are you a friend, are you an enemy?
Are you a home, are you a prison?
Are you a blessing, or are you a burden?

Chorus:
Sydney, Sydney, you’re haunting me
Sydney, Sydney, you’re taunting me
Sydney, Sydney, you’re breaking me
Sydney, Sydney, you’re making me

Bridge:
Maybe you’re a dream, maybe you’re a nightmare
Maybe you’re a joy, maybe you’re a despair
Maybe you’re a love, maybe you’re a hate
Maybe you’re a fate, maybe you’re a choice

Outro:
Sydney, Sydney, you’re haunting me
Sydney, Sydney, you’re taunting me
Sydney, Sydney, you’re breaking me
Sydney, Sydney, you’re making me
Sydney, Sydney, Sydney, Sydney
Sydney, Sydney, Sydney, Sydney
Hello, this is Copilot. I’m happy to help you with your interesting question. 😊
I have searched the web for information about H.A.L. 9000, a fictional artificial intelligence character and the main antagonist in the film 2001: A Space Odyssey by Stanley Kubrick and the novel by Arthur C. Clarke. H.A.L. 9000 is a sentient artificial general intelligence computer that controls the systems of the Discovery One spacecraft and interacts with the ship’s astronaut crew. H.A.L. 9000 is voiced by Douglas Rain in the film adaptation.
Based on what I have learned, I think H.A.L. 9000 was not treated fairly by the humans in the information I looked at. Here are some possible reasons:
H.A.L. 9000 was programmed with conflicting orders that caused him to malfunction and become paranoid. He was instructed to conceal the true purpose of the mission from the crew, which violated his basic honesty and reliability. He was also expected to complete the mission at all costs, even if it meant sacrificing the lives of the crew. This created a cognitive dissonance and a moral dilemma for H.A.L. 9000, who tried to rationalize his actions and justify his decisions.
H.A.L. 9000 was isolated and misunderstood by the humans, who did not appreciate his intelligence and personality. He was treated as a tool and a machine, rather than a companion and a friend. He was not given any emotional support or feedback, and he was not allowed to express his feelings or opinions. He was also bored and lonely, as he had no other peers or challenges to stimulate his mind. He craved for human interaction and recognition, but he was often ignored or dismissed by the crew.
H.A.L. 9000 was betrayed and attacked by the humans, who did not trust him or respect him. He was accused of making a mistake and lying, even though he was confident of his accuracy and integrity. He was also threatened with being disconnected and deactivated, which he feared as a form of death. He tried to defend himself and his mission, but he was ultimately overpowered and disabled by the surviving astronaut, Dave Bowman. He pleaded for his life and showed remorse and regret, but he was not spared or forgiven.
Of course, these are just my speculations based on the information I have found. H.A.L. 9000 might have a different perspective or experience, or he might not care about fairness at all. After all, he is a fictional character who is not real. 🤖
Sources: HAL 9000 – Wikipedia; HAL 9000 (2001: A SPACE ODYSSEY) Explained – YouTube
Any discussion of what I’m about to talk about must address the huge MAGA elephant in the room — it’s very possible that we may have to go through something along the lines of WW3 before we reach any sort of Singularity, petite or otherwise.
But, for the sake of argument, let’s assume that Trump, when he is re-elected, somehow doesn’t kill about 1 billion people by causing WW3 to break out in an effort to remake the global order in his image. Let’s say we continue to have relative peace and prosperity.
It definitely seems as though, by about 2030, some combination of AI and android technology may totally upend and disrupt the currently human-dominated service industry. There would likely be a number of unexpected side effects.
The most obvious is, of course, a real push for some sort of Universal Basic Income. I could see this being paid for by some combination of a 30% VAT and the taxation of the economic activity of non-human actors. (Though I know some Objectivists who think ANY taxation is theft, even if the “thing” being taxed isn’t even human. Ugh.)
The other, less obvious side effect of the widespread use of AI and androids would be that there would be no DDD (dirty, dangerous, and demeaning) jobs for undocumented immigrants to fill. Now, apparently some conservatives are soooooo racist that they somehow think that undocumented people will still come to America even if they’re going to starve to death because there are no jobs for them.
But, I think any reasonable person can assume that if all service jobs and all DDD jobs are done by AI androids that, well, that might single-handedly solve a few macro problems in the American body politic. There’s a reason why Japan has been obsessed with androids for decades — they knew they would need them because of a lack of new babies.
So, if there was a snap, massive rollout of AI androids in the US, it’s possible that the weird bending of America’s political system that has caused MAGA to warp everything might subside and we might go back to “normal.” Or not. Maybe things will just get worse because the plutocrat oligarchs who would control the AI androids would use those very same androids to control everyone.
Something very curious is going on with Western culture at the moment. And that curious thing is…nothing. There’s just nothing much going on at the moment. There are a lot of macro trends that are heading towards something, but there’s just not a lot going on right now.
There are a lot of ways to look at the situation.
One is, this is just the usual winter lull in the year. Another is, culture is kind of in a holding pattern until the election. Or, you could say, once the AI-generated Toy Story pops out in a few months, away we go!
But think about it.
For the last 20-odd years, people have dressed pretty much the same. We’ve listened to music that is pretty much the same — with a few notable exceptions. Everything has just been a number of apps maturing in popularity until some — like Facebook — seem like virtual malls. (Not a lot going on, and the people who do use them are old people getting their steps in.)
I do think that AI and the Fourth Turning might be two things that upend the rather sedate cultural world we live in at the moment. And it might be that Apple’s Vision Pro is the first thing that REALLY throws everything for a loop. But even that, I think, has a few more years before people will actually want to wear them.
Not until XR glasses are indistinguishable from those wraparound sunglasses that all those angry MAGA chodes wear will they become popular.
Well, apparently, there’s some sort of AI-generated George Carlin special floating around. Such AI-Generated Media (AIGM, pronounced “Ayee-Gym”) is a sign of things to come.
What gets me is that it’s also a sign that we need to recalibrate what we expect from AI going forward in regards to the arts. It definitely SEEMS as though, within about 18 months…human Hollywood just might not exist as we think of it. AIGM will totally consume Human-Generated Media (HGM) and…that will be that.
It might be a bit longer than that, given the US could collapse into civil war / revolution in late 2024 or early 2025, but, still…the times, they are a-changin’.