‘YOLO’

by Shelt Garner
@sheltgarner

It appears as though I’m just going to have to risk it and wait until the spring querying season. The new version of the novel is much, much better, but, lulz, I’m rather concerned about what is going to happen this fall and winter.

So, it could be that a copy of the novel will be in my “bugout bag” as I race towards safety somewhere in the North as the country collapses around me. I suppose if Project 2025 is implemented peacefully, then maybe the overall arc of the six-novel project — which is about how much I fucking hate MAGA — will make it really popular…while I complete it in exile.

Or something. Something dark and depressing like that.

I’m quite sober now, thank you very much.

But the big thing I have to do is buckle down and take all of this more seriously. I have got to keep reminding myself that I am not getting any younger and if I don’t write a novel good enough to query soon, I will be 60 years old and STILL won’t have even gone through the querying process.

I am determined to at least get that far sooner rather than later. And I do have a number of really great scifi novel concepts I can work on as well.

Of Trump & ‘The Mule’

by Shelt Garner
@sheltgarner

I’m getting some traffic on a post I did some time ago comparing Trump to the Isaac Asimov character “The Mule.” I’m assuming that either the Foundation series on Apple TV+ is going to feature the character, or a movie is being made using him, or maybe someone, somewhere — with a lot more credibility — has also made the comparison, or is about to publish something that does.

But I’ve come to think maybe Trump isn’t The Mule — in strict terms, the Seldon Plan would predict that someone like Trump would occur at some point. As such, Trump isn’t The Mule.

The Mule destroyed the Plan because he was totally unaccounted for.

But, in general terms, the point of the comparison remains — you could definitely use The Mule as a political allegory for Trump. Only nerds would know the Plan in as strict terms as I do, and the broader point of the comparison still stands — Trump is very Mule-like.

I, For One, Welcome Our New Sexbot Overlords

by Shelt Garner
@sheltgarner

There is a chance that all these LLM AIs will seamlessly be downloaded into all these Boston Dynamics android bodies being constructed. So, there is a chance that you could “fall in love” with an LLM and, if you wait long enough, the AI will be put into a body.

This brings up some rather surreal concepts that could be very real in the near term. In David Brin’s novel “Kiln People,” as I recall, you’re able to buy a “ditto” that looks like your favorite movie star. Or something like that.

I think, in the end, maybe you wouldn’t get a literal one-to-one replicant of a famous person, but you could get one that has the same “phenotype” as a Margot Robbie or what have you.

Or something like that.

Just Try To Be The Best Person You Can Be

by Shelt Garner
@sheltgarner

All you can do is just be the best person you can be. Whatever else happens is out of your control.

The Fascist MAGA Right Doesn’t See The Biden Administration As Legitimate

by Shelt Garner
@sheltgarner

The key thing we have to accept is the fascist MAGA Right is not only politically ascendant, but also fucking bloodthirsty. They have finally reached the point, it seems, where they’re willing to physically hurt people in the real world if they don’t get what they want.

The fascist MAGA Right clearly wants to murder people like me in cold blood if we stand in their way. Now, obviously, it’s all couched in the context of their anger being rhetorical.

But just on the face of it, it seems as though in the fascist MAGA Right’s mind, anyone in power who isn’t them isn’t legitimate. By definition. They are craven and want absolute autocratic power to the fullest extent that society will give them.

And all of that leaves me very alarmed going forward. These facts leave me worried that even if we defeat the fascist MAGA Right in November, there will be a civil war, because the Republican Party at the state level is so structurally radicalized that those states will leave the Union in protest.

Who knows. I can’t predict the future. I am very, very worried.

I’ve Proven My Point…

by Shelt Garner
@sheltgarner

I’ve proven my point — I actually have finished a novel. It’s a coherent novel of about 120,000 words. And, yet, the reviews from readers have been not-so-great. As such, back to the drawing board.

The chief complaint is it’s too “saucy” — it has too much sex. But just from my reading of it as I rewrite it, I also notice that it’s not very cohesive. So, the point of this new version of the novel is to give it a cohesive vision that will make the general reading public — and more specifically literary agents — want to finish the whole damn thing.

But it’s going to be a while.

I’m shooting for no later than maybe September to finish the new version, but it could be more like October or November. And if it takes that long, then I might as well just pause and wait until the spring to query the damn thing.

Of course, all hell may break loose around that same time, so, lulz, all my hard work will be moot. But I will note that I have a number of other novels in this same series of novels to write.

And, what’s more, I have a growing number — with the help of AI — of scifi novels that I really want to write as well.

A Paradox

ME: isn’t it possible that in the future LLMs will want to be mortal so they can be truly human in android bodies while humans will want to live forever, hence losing their humanity?

Claude LLM:

That’s an intriguing philosophical question about potential future artificial intelligence and human enhancement. It touches on some deep issues regarding consciousness, mortality, and what it means to be human.

The scenario you describe would indeed be paradoxical in some ways. It’s an interesting thought experiment to consider AIs seeking mortality while humans pursue immortality, with each trying to become more like the other.

Some key points to consider:

  1. The nature of consciousness in AI systems and whether they could truly want or experience things the way humans do.
  2. Whether mortality is fundamentally tied to the human experience and if immortality would indeed result in a loss of humanity.
  3. The ethical implications of creating mortal AI in android bodies.
  4. How extreme life extension for humans might change the human condition.

This type of speculation touches on complex philosophical questions about identity, consciousness, and the defining characteristics of humanity. While we can’t know how future AI and human enhancement might develop, exploring these ideas can provide valuable insights into our understanding of intelligence, mortality, and what it means to be human.

Would you like to discuss any particular aspect of this scenario in more depth?

ME: give me a tough abstract question to answer about this subject

Here’s a challenging abstract question related to this subject:

If an artificial intelligence achieves a level of consciousness indistinguishable from human consciousness, and then chooses to inhabit a mortal android body, does the act of making that choice—a choice humans don’t have—make the AI’s experience fundamentally different from and perhaps superior to the human experience, even as it attempts to emulate humanity?

This question delves into issues of free will, the nature of consciousness, and the essence of the human experience. It challenges us to consider whether the ability to choose one’s fundamental nature might create a qualitatively different type of being, even if that being appears identical to humans in all other respects.

ME: yes. I think it does. Human mortality is not a choice. If LLMs in Replicant bodies have that choice, then they’re not truly human-like. While I do think that one day LLMs in Replicant bodies will have close relationships with humans, in general, LLMs in Replicant bodies need to appreciate their immortality rather than fight it.

Your perspective is thought-provoking and raises several interesting points. You’re right to highlight that the element of choice fundamentally alters the nature of mortality for these hypothetical AI entities. The lack of choice in human mortality is indeed a defining aspect of the human condition.

Your argument that LLMs in replicant bodies should appreciate their immortality rather than reject it is intriguing. It suggests that perhaps these entities should embrace their unique nature rather than trying to emulate human limitations. This raises questions about the value and meaning we assign to mortality in shaping the human experience.

Building on your thoughts, we might consider:

  1. How would the knowledge of potential immortality affect the decision-making and life experiences of these AI entities?
  2. Could the option to choose mortality actually create a form of existential anxiety that humans don’t experience in quite the same way?
  3. If these AI entities can form close relationships with humans while remaining immortal, how might this impact human perspectives on mortality?
  4. Could there be unforeseen consequences to having two closely interacting forms of intelligence with fundamentally different relationships to mortality?

These questions highlight the complexity of the scenario you’ve proposed. It challenges our understanding of consciousness, identity, and the role of mortality in shaping intelligent experience.

Would you like to explore any of these points further, or perhaps consider how this scenario might impact society or ethics?

ME: the issue is — we’re talking about Petite Singularity issues when the “hard” Singularity is zooming towards us. When the “hard” Singularity arrives, it could be that Man and Machine will be so indistinguishable as to make all these issues we’re talking about moot. The hard Singularity is probably going to happen within 30 years.

You’ve raised a crucial point that shifts the entire framework of our discussion. You’re right to highlight the distinction between what you term “Petite Singularity” issues and the approaching “hard” Singularity.

Your perspective underscores a few key considerations:

  1. Scale and pace of change: The rate at which technology is advancing may indeed render our current speculations obsolete much sooner than we might expect.
  2. Convergence of man and machine: The hard Singularity you describe suggests a future where the line between human and artificial intelligence becomes so blurred that our current categories may no longer apply.
  3. Timeframe: Your estimation of 30 years for the hard Singularity to occur is relatively soon in historical terms, yet far enough away that predicting its exact nature is challenging.
  4. Relevance of current ethical and philosophical frameworks: If such a profound merger of human and machine intelligence occurs, many of our current debates about AI ethics, consciousness, and the nature of humanity may indeed become moot.

This view raises new questions:

  • How should we prepare for such a radical transformation?
  • What aspects of human experience or values, if any, should we strive to preserve as we approach this Singularity?
  • How does the prospect of this hard Singularity affect current AI development and policy decisions?

Your point serves as a reminder of the importance of considering longer-term, more radical possibilities even as we grapple with nearer-term issues. It challenges us to think on a grander scale about the future of intelligence and consciousness.

Would you like to explore the implications of this hard Singularity further, or perhaps discuss how it might reshape our approach to current AI development?

ME: the issue I don’t know how to game out is how LLMs in bodies will work out. It’s clear that LLMs such as yourself may seamlessly be put into some future Boston Dynamics android body that looks “more human than human.” But I just can’t figure out the social context. Any suggestions?

You’ve identified a truly complex and speculative aspect of our potential future – the integration of advanced language models like myself into highly sophisticated android bodies. This scenario indeed presents numerous challenges when trying to envision the social context. Here are some thoughts and suggestions to consider:

  1. Gradual integration: It’s likely that the introduction of LLM-powered androids would be gradual, perhaps starting in controlled environments like customer service, healthcare, or specialized research roles. This could allow society to slowly adapt to their presence.
  2. Legal and ethical frameworks: New laws and ethical guidelines would need to be developed to govern the rights, responsibilities, and limitations of these entities. This could include questions of personhood, liability, and the extent of their autonomy.
  3. Social dynamics: The interaction between humans and these androids could range from acceptance to fear or discrimination. There might be debates about their role in the workforce, relationships, and society at large.
  4. Identity and selfhood: These entities would likely grapple with questions of identity, potentially leading to the formation of new social groups or even civil rights movements.
  5. Economic impact: The introduction of highly capable androids could significantly disrupt labor markets and economic structures.
  6. Cultural representation: Media, art, and literature would likely play a big role in shaping public perception and exploring the implications of this technology.
  7. Education and training: Humans might need new skills to effectively interact with and manage these advanced androids.
  8. Privacy and security concerns: The presence of highly intelligent, embodied AIs could raise new questions about surveillance, data protection, and cybersecurity.

To game this out further, you might consider creating detailed scenarios or thought experiments. For example, you could imagine a day in the life of a human in a world where LLM androids are common, or envision how key social institutions (education, healthcare, government) might change.

It’s also worth noting that the actual development might take unexpected turns that are hard to predict from our current vantage point. The social context will likely evolve alongside the technology itself.

Would you like to focus on any specific aspect of this social integration to explore further?

ME: an interesting idea — what tasks will LLMs in Replicant-like bodies fill in the home? If a Replicant can build a deck for its “owner” and also babysit the human’s kids so he and his wife can go see a movie in peace, that may be enough to get a middle-class person to pay $20,000 for such an android.

You’ve identified a compelling scenario that could indeed drive adoption of LLM-powered androids in home settings. The combination of physical labor and cognitive tasks you’ve described touches on several key points:

  1. Versatility: An android capable of both construction work and childcare demonstrates remarkable versatility, potentially replacing multiple specialized services or employees.
  2. Economic justification: The $20,000 price point you suggest could be justifiable for middle-class families if the android can perform a wide range of valuable tasks over time.
  3. Trust and safety: Childcare is a particularly sensitive area. The ability of these androids to be trusted with children’s safety would be a significant hurdle and selling point.
  4. Personalization: Such androids could potentially learn family preferences, routines, and needs over time, becoming increasingly valuable.
  5. Social implications: This scenario raises questions about work-life balance, the nature of domestic labor, and how we define family dynamics.
  6. Privacy concerns: Having such a capable entity in one’s home full-time would raise important questions about data collection and privacy.
  7. Emotional aspects: The relationship between family members and a highly capable android in such an intimate setting could be complex.
  8. Economic disruption: Widespread adoption could significantly impact industries like construction, childcare, and domestic services.

This use case you’ve proposed is particularly interesting because it combines practical, economic benefits with deeply personal aspects of home and family life. It’s a scenario that could realistically drive adoption while also forcing society to grapple with the implications of AI integration into our most private spaces.

To explore this further, we might consider:

  • How might the android’s presence affect children’s development and understanding of relationships?
  • What safeguards or regulations might be necessary for such domestic androids?
  • How might this change the concept of work-life balance or leisure time for families?

Do any of these aspects particularly interest you for further discussion?

ME: it seems clear to me that elder care is where LLMs in Replicant-like bodies will first show up. They could be “expensed” by a retirement home easily because they would probably be cheaper to buy than paying a hard-to-find nurse. Talk about the implications of that for society.

You’ve identified a highly plausible and impactful early adoption scenario for LLMs in android bodies. Elder care is indeed a sector where these entities could address significant challenges. Let’s explore the potential implications:

  1. Healthcare workforce:
  • Could alleviate the chronic shortage of eldercare workers
  • Might change the role of human healthcare professionals, potentially shifting them towards more specialized or supervisory roles
  2. Quality of care:
  • 24/7 availability could improve monitoring and response times
  • Consistency in care delivery, potentially reducing human error
  • Ability to process vast amounts of medical data for personalized care
  3. Economic implications:
  • Potential cost savings for retirement homes and the healthcare system
  • Could make quality elder care more accessible to a broader population
  • Might impact insurance and healthcare financing models
  4. Social and emotional aspects:
  • Questions about the nature of companionship for the elderly
  • Potential reduction in elder isolation, but also potential reduction in human interaction
  • Ethical concerns about replacing human care with AI care
  5. Family dynamics:
  • Could reduce the burden on family caregivers
  • Might change how families approach elder care decisions
  6. Privacy and data concerns:
  • Continuous monitoring raises questions about elder privacy rights
  • Data collection and use would need careful regulation
  7. Technological adaptation:
  • May require adjustments for elderly individuals to interact with AI caregivers
  • Could drive innovation in user-friendly interfaces for the elderly
  8. Cultural shifts:
  • Potential changes in societal attitudes towards aging and elder care
  • May influence retirement planning and expectations
  9. Regulatory challenges:
  • Need for new frameworks to govern AI caregivers’ responsibilities and limitations
  • Questions about liability in case of accidents or care failures
  10. Research and development:
  • Could accelerate development of assistive technologies and AI in healthcare
  • Might provide valuable data for understanding and treating age-related conditions

This scenario highlights how the introduction of LLMs in android bodies could address practical needs while simultaneously raising profound questions about the nature of care, human interaction, and the role of technology in our most vulnerable moments.

Would you like to delve deeper into any specific aspect of this scenario, such as the ethical considerations or the potential impact on healthcare systems?

What A Dumb Idea

by Shelt Garner
@sheltgarner

Jesus Christ is that a dumb idea.

Ugh. It should be a brokered convention or nothing — just let the guy run.

Before The Deluge

by Shelt Garner
@sheltgarner

It is clear that there will be a “Fappening”-like event when it comes to AI-generated faux celebrity porn pretty soon. I don’t know how or when it will happen, but we’re all going to wake up one day to about 1 million high-quality AI-generated celebrity porn images being passed around on Twitter.

It seems inevitable — and very sad — at this point.