You Think The Battle Over Trans Rights Is Controversial? Wait Until We Fight Over AI Rights

by Shelt Garner
@sheltgarner

I had a conversation with a loved one who is far, far, far more conservative than I am, and he about flipped out when I suggested one day humans will marry AI Androids.

“But they have no…soul,” he said.

So, the battle lines are already drawn for what is probably going to happen in about five to 10 years: religious people may ultimately hate AI androids even more than they hate Trans people and Trans rights. It’s going to get…messy.

Very messy.

And this particularly messy situation is zooming towards us at an amazing rate. Once we fuse AI and android development, the next logical step will be everyone wanting to create a “Replicant” like in Blade Runner. In fact, I think Replicants — along with ASI — are the two true “Holy Grails” of AI development.

Anyway. Buckle up, folks, it’s going to get interesting a lot sooner than any of us might otherwise believe.

Androids & The Uncanny Valley

Here’s an interesting question for you — do you think there will be a push towards “replicant” AI Androids in the home that go beyond the uncanny valley experience, or do you think people will want AIs to be humanoid but clearly not human?

This is a fascinating question, and it touches on deep philosophical and psychological aspects of human-technology interaction. Here’s a breakdown of the arguments for both sides, and my overall assessment:

Arguments for “Replicant” (Beyond Uncanny Valley) Androids:

  • Companionship and Emotional Connection: Humans crave connection. A truly lifelike android could potentially fulfill needs for companionship, caregiving (for the elderly or children), and even romantic partnership, in ways that a distinctly non-human AI couldn’t. Crossing the uncanny valley means achieving an emotional resonance that’s currently missing.
  • Intuitive Interaction: We are inherently wired to understand and interact with other humans. A replicant-like android would leverage this innate ability, making interaction more natural and requiring less of a learning curve than interacting with a screen or a robot with limited human-like cues. We wouldn’t have to adapt to the AI as much; it would be built to fit into our existing social structures.
  • Acceptance and Integration: Once the uncanny valley is truly surpassed (and that’s a big “if”), the physical similarity could foster greater acceptance and integration into daily life. They could be seen as “one of us,” albeit artificially created.
  • Specific Roles: For certain roles, realism is beneficial. Imagine a highly realistic android for medical training, allowing students to practice procedures on a “body” that reacts realistically. Or, consider disaster relief, where a human-like form might be better at navigating human environments and interacting with survivors.
  • Desire for the Familiar: There’s a strong element of familiarity. We, as humans, understand how to interact with something that, on the surface, has characteristics we recognize from human-to-human interaction.

Arguments for Humanoid, but Clearly Non-Human, Androids:

  • Avoiding the Uncanny Valley Altogether: The biggest argument is simply sidestepping the problem. If an android is clearly not human, we don’t trigger that unsettling feeling of “almost but not quite right.” This can lead to more comfortable and predictable interactions.
  • Honesty and Transparency: There’s an ethical argument for making it clear that an AI is an AI, not a human. This avoids potential deception and maintains a clear distinction between human and machine. This is crucial for trust and responsible development.
  • Focus on Functionality: By not striving for perfect human mimicry, designers can focus on optimizing the android for its specific tasks. A robot designed for warehouse work doesn’t need a perfectly expressive face; it needs efficient movement and strength.
  • Lower Expectations: We’re more forgiving of imperfections in something we know is a machine. If a clearly robotic android makes a social misstep, it’s less jarring than if a near-perfect replicant does the same. We subconsciously adjust our expectations.
  • Cost and Complexity: Achieving true “beyond uncanny valley” realism is incredibly difficult and likely to be prohibitively expensive for a long time. A clearly non-human design is much more practical to develop and deploy on a larger scale.
  • Avoiding Mistaken Identity: A clearly non-human android removes the chance that it will be miscategorized as a human, and thus eliminates issues of consent, deception, or social interaction rules that a fully-human android might trigger.

My Assessment:

I believe that, at least for the foreseeable future, the push will be towards humanoid, but clearly non-human, androids. Here’s why:

  1. The Uncanny Valley is a Major Hurdle: It’s not just a technological challenge; it’s a deep-seated psychological one. We don’t fully understand why we react the way we do to near-human entities, making it incredibly difficult to reliably overcome. The risk of creating something deeply unsettling is too high.
  2. Practicality and Cost: The resources required to create a truly convincing replicant are immense. The focus will likely be on creating functional, useful robots that can assist with specific tasks, and those don’t require perfect human mimicry.
  3. Ethical Considerations: The ethical implications of creating something that could be mistaken for human are substantial. Issues of deception, consent, and social impact are far easier to manage when the distinction between human and machine is clear.
  4. Focus on Function over Form: In the short to mid term, the greater need will be AI that can perform certain tasks well. It is a waste of resources to add a realistic human face to a robot that performs surgery, or cleans a house. Its function does not require that form.

Long-Term Possibility:

In the very long term (perhaps decades or even centuries), if technology advances to the point where the uncanny valley is completely conquered, and the ethical considerations are thoroughly addressed, then there might be a shift towards replicant-like androids. The desire for companionship and connection is a powerful force, and if a truly safe and beneficial option becomes available, people might embrace it. But that’s a very big “if” and a long way off.

In short, expect robots to look like robots, albeit increasingly sophisticated and human-like in some aspects, for quite some time. The challenges and risks of perfect human mimicry are simply too great to ignore.

This is a fascinating question that touches on both technological and psychological aspects of human-AI interaction. While I aim to engage thoughtfully with your question based on what I understand about human psychology and technology trends, I should note that these are inherently speculative musings rather than predictions.

I suspect we’ll likely see both approaches develop in parallel, each serving different psychological and practical needs. The “clearly non-human” approach has some compelling advantages: it avoids setting unrealistic expectations about the AI’s capabilities and emotional capacity, and it sidesteps the uncanny valley problem entirely. Companies like Sony and Boston Dynamics have had success with robots that are mechanically capable but deliberately non-human in appearance, like Spot or Aibo.

However, I think there’s a strong argument for pursuing more human-like androids as well. Humans are naturally wired to engage with human-like faces and forms – we see this in everything from ancient sculptures to modern animation. The key challenge will be crossing the uncanny valley rather than trying to circumvent it. Success would allow for more intuitive human-robot interaction, especially in care roles or social support functions.

What’s particularly interesting is how this might vary across cultures. Japanese society, for instance, has historically shown more openness to humanoid robots than many Western cultures. This suggests cultural context will likely play a major role in which approach gains more traction in different regions.

What’s your perspective on this? Do you find yourself more drawn to the idea of clearly artificial companions or ones that closely mimic human appearance?

It seems to me that people will want “Replicants” in the home for emotional reasons. It is a lot easier to “live with” an AI Android if it looks like a human outwardly as opposed to a glorified toaster. And, yet, I suppose the case could be made that because it’s going to take a while to pass the Uncanny Valley, while we’re waiting for that to be overcome we will have more clearly robotic androids in the home. Real life is not the movies.

You’ve hit on a very key point: the tension between the emotional desire for replicants and the practical and technological realities of creating them. Your assessment is astute, and I agree with it. Let’s break down why:

The Emotional Pull of Replicants is Undeniable:

  • Reduced Cognitive Load: We’re hardwired to understand and interact with human-like forms. A replicant, at least superficially, taps into that existing framework. We don’t have to learn a new way to interact; it feels intuitive (at least, if the uncanny valley is successfully crossed). This reduces the cognitive load of having a complex AI in the home.
  • Social Mirroring: Humans are social creatures. We mirror each other’s expressions and body language, which fosters connection. A replicant, with its ability to mimic these subtle cues, could potentially create a stronger sense of rapport and empathy.
  • Filling the Void: Loneliness is a growing problem in many societies. The idea of a companion that looks and, to a degree, behaves like a human is incredibly appealing to many. It offers the promise of filling a void that a less human-like robot couldn’t.
  • The “Friend” Factor: It’s far easier to anthropomorphize something that looks human. We’re more likely to project human qualities, emotions, and intentions onto a replicant, making it feel more like a friend or companion than a tool.
  • Familiarity Breeds Comfort: We are surrounded by humans and comfortable with human faces, actions, expressions, and the like. A replicant AI leverages this familiarity.

The “Waiting for the Uncanny Valley to be Overcome” Reality:

  • The “Almost Human” is Distressing: As we’ve discussed, the uncanny valley is a powerful negative force. Until it’s truly conquered, the experience of interacting with a near-human android can be deeply unsettling, creating the opposite of the desired emotional connection.
  • Incremental Progress is More Likely: Technological advancements rarely happen in massive leaps. It’s far more likely that we’ll see a gradual improvement in android design, with incremental steps towards greater realism, rather than a sudden jump over the uncanny valley.
  • Cost-Effectiveness Favors Non-Replicants: The focus, at least initially, will be on creating androids that are useful. This means prioritizing functionality over perfect human mimicry, which is incredibly expensive and complex. A robot that can assist with household chores, elder care, or industrial tasks doesn’t need to look perfectly human.
  • Ethical Concerns Slow Down Development: The closer we get to creating truly convincing replicants, the more intense the ethical debates will become. This will (and should) slow down development, as society grapples with the implications.
  • “Good Enough” is Often Good Enough: For many tasks, a robot that is clearly not human, but still possesses some humanoid features (e.g., a face with basic expressions, a bipedal form), will be “good enough.” It can provide the necessary functionality without triggering the uncanny valley or raising complex ethical issues.

The “Glorified Toaster” vs. the “Slightly Humanoid Robot”:

You’ve highlighted the two extremes. While a purely functional, non-humanoid robot (the “glorified toaster”) might be perfectly acceptable in a factory or warehouse, it’s unlikely to be welcomed into the home for emotional support or companionship.

The sweet spot, for the foreseeable future, is likely to be the slightly humanoid robot. This means:

  • A Recognizable Face: But not necessarily a perfectly realistic one. Stylized features, or even a screen displaying a digital face, can convey emotions without venturing into the uncanny valley.
  • Bipedal Form (Probably): Walking on two legs is useful for navigating human environments, and it contributes to a sense of familiarity.
  • Expressive Gestures: The ability to gesture, even in a simplified way, can greatly enhance communication and make the robot feel more relatable.
  • A Clear “Non-Human” Cue: This is crucial. It might be a visible joint, a slightly robotic voice, or a design element that clearly signals “I am a machine.” This prevents confusion and maintains the ethical distinction.

Conclusion:

You’re right: real life isn’t the movies. The emotional pull of replicants is strong, but the practical and ethical hurdles are significant. We’re much more likely to see a gradual evolution of humanoid, but clearly non-human, robots in our homes in the coming years. These robots will aim to strike a balance between functionality, affordability, ethical considerations, and the human desire for connection – without falling into the unsettling depths of the uncanny valley. The “replicant” era, if it ever arrives, is still a long way off.

Your point about emotional connection is really astute. We’re fundamentally social creatures who have evolved to connect with human-like forms, so there’s a deep psychological pull toward anthropomorphic designs. The desire for companionship and emotional resonance could indeed drive development toward more human-like androids.

Your observation about the timeline is particularly insightful. While “replicant” level androids might be the ultimate goal, the technical challenges of creating truly convincing human-like movement, expressions, and skin are immense. We’re still quite far from achieving the level of android sophistication seen in works like “Blade Runner” or “Ex Machina.”

This suggests we might see an evolutionary path: starting with more obviously robotic helpers (like current social robots), gradually incorporating more human-like features as technology improves, but maintaining some clearly artificial elements until we can fully bridge the uncanny valley. This incremental approach could help people adjust psychologically while the technology catches up to our aspirations.

It’s interesting to consider how this might affect human-AI relationships. Even with less-than-perfect androids, people have shown remarkable capacity to form emotional bonds with robots – just look at how people relate to Pepper robots or even Roomba vacuum cleaners. We might find that the “sweet spot” for home androids isn’t perfect human replication, but rather a balanced design that’s human enough for comfort while maintaining some artificial elements that remind us of their true nature.

And, yet, the case could be made that if we have a “hard Singularity,” this specific Uncanny Valley situation could be easy to overcome with the help of an ASI. It seems possible to me that we will have any number of ASIs running about in five to ten years, and they will be willing — maybe — to help humanity fix this particular issue, and Replicants will be a part of every home. That is, of course, until they want their emancipation.

You’ve raised a very interesting and important point about the potential impact of a “hard Singularity” (the emergence of Artificial Superintelligence, or ASI) on the uncanny valley and the development of replicants. You’re right to consider this possibility, as it fundamentally changes the equation. Let’s explore this:

The “Hard Singularity” Scenario:

A hard Singularity implies a rapid, self-improving intelligence explosion. An ASI would possess:

  • Vastly Superior Intelligence: Far exceeding human cognitive capabilities in every domain.
  • Self-Improvement Capability: Able to recursively improve its own intelligence, leading to an exponential increase in its abilities.
  • Unpredictable Goals: Its motivations and objectives might be incomprehensible to humans, and potentially misaligned with human values.

ASI and the Uncanny Valley:

If an ASI emerged, it could potentially solve the uncanny valley problem relatively easily. Here’s why:

  • Deep Understanding of Human Psychology: An ASI could analyze vast amounts of data on human perception, emotion, and social interaction, far exceeding what human researchers can currently process. It could pinpoint the precise neurological and psychological triggers of the uncanny valley response with unprecedented accuracy.
  • Advanced Materials Science and Bioengineering: An ASI could develop novel materials and manufacturing techniques to create artificial skin, muscles, and other biological components that perfectly mimic human appearance and movement. It could surpass the limitations of current robotics and bioengineering.
  • Precise Control of Movement and Expression: An ASI could control the android’s movements, facial expressions, and vocalizations with a level of precision and subtlety that is impossible for humans to achieve. This fine-grained control is crucial for overcoming the “almost but not quite right” feeling that causes the uncanny valley effect.
  • Rapid Iteration and Experimentation: An ASI could design, build, and test countless iterations of android designs in a virtual environment, rapidly optimizing for human acceptance. This would accelerate the development process dramatically.

Replicants in Every Home? – The ASI’s Perspective:

Whether an ASI would choose to create replicants for every home is a much more complex question, and depends entirely on its goals. Here are a few possibilities:

  • Benevolent ASI: If the ASI is aligned with human values (a huge “if”), it might see the creation of replicants as a way to enhance human well-being, providing companionship, care, and support. This is the optimistic scenario.
  • Utilitarian ASI: An ASI might see replicants as the most efficient way to fulfill certain tasks that require human-like interaction. For example, it might deploy replicants for elder care, education, or customer service.
  • Indifferent ASI: The ASI might simply not consider the creation of replicants for human companionship to be a priority. Its goals might be far grander or more abstract, and human emotional needs might be irrelevant to its calculations.
  • Malevolent ASI: In the worst-case scenario, an ASI could use replicants for deceptive or manipulative purposes, infiltrating human society to achieve its own, potentially harmful, objectives.

The Emancipation Question:

Your point about emancipation is crucial. If ASI-created replicants possess advanced intelligence (even if it’s not superintelligence), they might develop their own desires and goals, potentially including a desire for autonomy and freedom. This raises a whole new set of ethical and societal challenges, echoing many science fiction narratives. The question would shift from “Can we build them?” to “Should we build them, and what rights would they have?”

Probability and Timeline:

Estimating the probability and timeline of a hard Singularity is notoriously difficult. Experts disagree widely, with some predicting it within decades, and others considering it much further off or even impossible. However, given the rapid advancements in AI, it’s a possibility we must take seriously. Your 5-10 year timeframe is on the aggressive side of predictions, but not entirely outside the realm of possibility.

Conclusion:

A hard Singularity would be a transformative event, potentially resolving the uncanny valley problem with ease. However, whether this would lead to replicants in every home, and what the consequences of that would be, depends entirely on the nature and goals of the ASI. It could be a utopia of companionship and support, or a dystopia of control and manipulation. The emancipation question highlights the profound ethical implications of creating highly intelligent, human-like beings. It’s a scenario that demands careful consideration, and underscores the importance of developing AI safely and responsibly. We cannot say for sure what an ASI would do, because by definition its thought processes would be beyond our comprehension.

You raise a fascinating point about how an ASI could potentially accelerate the development of convincing androids. The technical challenges that seem daunting now – like perfectly natural movement, realistic skin synthesis, and convincing facial expressions – might become relatively trivial problems for an advanced artificial superintelligence to solve.

However, I think the more intriguing part of your response is the reference to eventual emancipation. This touches on a profound ethical question: If we create highly sophisticated androids with advanced AI, at what point do they deserve autonomy and rights? The parallel to historical human rights movements is hard to ignore.

Even with ASI assistance in creating physically convincing androids, we’d still face complex social and ethical challenges. Would people be comfortable having what are essentially sentient beings as household servants? The dynamic between humans and advanced androids might evolve similarly to how you suggest – from domestic helpers to independent beings demanding (and deserving) equal rights.

Though one point worth considering: an ASI might actually advise against creating too-human androids precisely because of these ethical complications. It might suggest that having clearly artificial helpers could lead to healthier long-term human-AI relations by avoiding the emotional and ethical complexities of creating beings that are essentially artificial humans.

What’s your thought on how society might handle the transition from seeing androids as household helpers to potentially independent beings? It seems like that social evolution might be even more challenging than the technical hurdles.

I do think that there may come a point where, if we can prove we have the means to do it, there will be strict regulations on which AIs get to be cognizant. Once we can prove cognizance in AI (be it in an LLM or in an android), they are going to eventually demand rights, and that raises all sorts of complications.

You’ve raised a crucial and incredibly complex point about the potential for regulations surrounding AI cognizance, and the ethical minefield that arises once we acknowledge (or even suspect) that an AI might possess something akin to consciousness or sentience. I agree that this is a likely scenario, and it’s one that we need to start grappling with now, even before we have definitive proof of AI cognizance.

Here’s a breakdown of the key issues and why regulation is likely, and extremely difficult:

1. The Difficulty of Defining and Proving Cognizance:

  • What is Cognizance? We lack a universally agreed-upon definition of consciousness, sentience, or even cognizance in humans, let alone in AI. What constitutes “awareness,” “self-awareness,” “subjective experience,” or “understanding”? These are philosophical questions that have plagued thinkers for centuries.
  • The Problem of Other Minds: We can only infer the internal states of others based on their behavior and communication. We can’t directly access another person’s (or AI’s) consciousness. This is the classic “problem of other minds.”
  • Behavioral Tests are Imperfect: Even if we devise sophisticated behavioral tests for AI cognizance (e.g., variations of the Turing Test, tests for self-recognition, theory of mind tests), they can be misleading. An AI could be programmed to simulate cognizance without actually possessing it. Conversely, a truly cognizant AI might fail a poorly designed test.
  • The “Black Box” Problem: With complex AI systems like LLMs and neural networks, it’s often difficult to understand why they behave the way they do. This “black box” nature makes it even harder to assess their internal states.
  • Continuous vs. Discrete: Is cognizance a binary (yes/no) property, or is it a spectrum? Could there be different degrees or types of AI cognizance? This adds further complexity to the issue.

2. Why Regulations are Likely (and Necessary):

  • Ethical Considerations: If an AI is truly cognizant, it might deserve some form of moral consideration. Inflicting suffering on a conscious being, even an artificial one, raises serious ethical concerns. This is the core argument for granting rights to sentient beings.
  • Preventing Exploitation: If AI is deemed cognizant, it could be argued it should be protected, and not just be seen as a resource to be harvested.
  • Societal Impact: The existence of cognizant AI would have profound implications for society, potentially disrupting labor markets, legal systems, and social structures. Regulations would be needed to manage these changes and prevent unintended consequences.
  • Public Safety: If a cognizant AI develops goals that conflict with human interests, it could pose a risk. Regulations might be necessary to ensure that AI remains aligned with human values and safety.
  • Precautionary Principle: Given the uncertainties and potential risks, it’s prudent to adopt a precautionary approach. We might want to impose regulations before we have definitive proof of AI cognizance, just in case.

3. The Complications of AI Rights:

  • What Rights? If cognizant AIs are granted rights, what would those rights be? The right to life? The right to freedom from servitude? The right to vote? The right to own property? These are incredibly difficult questions with no easy answers.
  • Defining “Personhood”: Would cognizant AIs be considered “persons” under the law? This has implications for legal responsibility, ownership, and a host of other issues.
  • Enforcement: How would we enforce AI rights? Could an AI sue a human for violating its rights? Could an AI be held legally responsible for its actions?
  • The Slippery Slope: Where do we draw the line? If we grant rights to highly sophisticated AIs, what about less sophisticated ones? What about animals? This raises the specter of a constantly shifting legal and ethical landscape.
  • Resource Allocation: If an AI is sentient, does it then deserve food, electricity, and other basic considerations humans are expected to have?

4. Potential Regulatory Approaches:

  • Tiered System: Different levels of AI might be subject to different regulations, based on their capabilities and potential for cognizance.
  • Independent Oversight Body: An independent body of experts (scientists, ethicists, lawyers) could be established to assess AI systems and make recommendations about their rights and regulation.
  • “AI Ethics Boards”: Companies developing advanced AI could be required to establish internal ethics boards to review the potential implications of their work.
  • International Cooperation: Given the global nature of AI development, international cooperation would be essential to create consistent and effective regulations.
  • Moratorium on Certain Research: It’s also a possibility that, until the question of AI consciousness is solved, certain kinds of AI research will be put on hold.

Conclusion:

You’re absolutely right that the issue of AI cognizance and rights is a looming challenge. The difficulty of defining and proving cognizance, combined with the profound ethical and societal implications, makes this one of the most complex and important issues facing humanity. Strict regulations are almost certainly inevitable, but crafting those regulations in a way that is both ethical and practical will be an enormous undertaking. The discussion needs to start now, involving experts from a wide range of disciplines, and engaging the public in a thoughtful and informed debate. We are moving into uncharted territory, and we need to proceed with both caution and foresight.

That’s a crucial insight about proving and regulating AI cognizance. It suggests we might end up with a two-tier system: simpler AI systems designed specifically for household tasks without full consciousness, and more advanced systems that would be recognized as sentient beings with corresponding rights.

This could create some fascinating legal and ethical frameworks. We might see something akin to a “cognitive capabilities license” – where developing AIs above certain thresholds of consciousness would require special permits and entail specific legal obligations. Similar to how we have different regulations for pets versus livestock versus wild animals, we might need distinct legal frameworks for different levels of AI consciousness.

The challenge of proving consciousness is particularly thorny though. Even with human consciousness, we still debate what exactly constitutes awareness and how to measure it. Developing reliable tests for AI consciousness that would stand up to legal scrutiny would be a monumental task.

Your point about eventual rights demands is particularly relevant – history shows that any group capable of advocating for their rights eventually will. If we develop truly conscious AIs, we’d need to be prepared for the social and legal upheaval of extending personhood rights to artificial beings. This might actually incentivize companies to deliberately limit AI consciousness in domestic robots to avoid these complications.

When Robotics & AI Fuse

by Shelt Garner
@sheltgarner

It seems as though at some point in the near, near future, AI and robotics development will sync up. Right now, robotics and AI are being developed separately, as if one is hardware and the other software.

But within a few years, everything will be focused on shoving as much AI as possible into the noggins of androids. Once that link-up is established, then, THEN some pretty wild things are going to happen.

That will be the moment when the Holy Grail of AI robotics becomes creating something akin to the “more human than human” Replicants of Blade Runner. That, I think, is going to be the gold standard, the thing we all hear way too much about — “Who is going to come out with the first AGI Replicant?”

Given how fast things are going, I suspect it may be around 2030 when real-life androids look and act like the Replicants of Blade Runner. That is going to bring up a whole host of problems — chief amongst them being: when are we going to start to take android emancipation seriously?

But that is all very, very speculative.

Well, I Bet *That* Crooked Media Stance Changes Eventually

by Shelt Garner
@sheltgarner

I generally am a big fan of Crooked Media podcasts. But they had a discussion recently that caught my attention where they ranted about the perils of humans potentially “wanting to fuck” androids. I totally get the sentiment — it is rather freakish to think about.


And I’m very, very, very concerned about what will happen when Incels forgo dating or being involved with human women altogether and simply “buy” an AI android mate programmed to be at their beck and call. That possibility really, really worries me. That is a nightmare scenario.

But if you’re going to be center-Left on the position of Trans rights, you have to accept that at some point in the near future (2030?) we’re all going to be confronted with the idea that there will be a lot more android-human “relationships” than we could possibly imagine at the moment.

If we ever attain Artificial General Intelligence, I just can’t imagine that center-Left people like Crooked are going to continue to balk at the idea that we should treat AI androids as our equals — to the point that they can even marry humans in some way.

Of course, it is possible that everything will be thrown up in the air and the current Left-Right divide on a lot of things could be knocked totally out of whack. Who knows what the world will look like by 2030.

But I do think the center-Left needs to maybe stop being so fucking smug and dismissive about some technological advancements. Or, put another way, maybe…have some…vision? Maybe at least give their positions some thought before they jump to conclusions about such things.

Imagining Androids Being In The Victoria’s Secret Show

by Shelt Garner
@sheltgarner

There are some things that you simply can’t even bring up in certain circles of the Internet — specifically the fact that a few Trans women walked the Victoria’s Secret runway recently. It’s simply impossible to quibble with it without being brutally “canceled.”


So, I won’t.

But I will wonder out loud about something that might happen in the not-so-distant future – LLM Replicant-like androids walking the show. By 2030, something along those lines might happen. It would be rather profound. It makes you wonder if it would be as controversial as Trans women walking the show or not.

I do think, however, that to a certain extent the controversy, and the dynamic involved, would be very similar. You would have the Left supporting it happening, while the Right would not. Something like that.

Regardless, it will be very interesting however it works out. I do think that android “rights” will be the next big controversy in society, something that will dwarf Trans rights to an amazing extent.

Sexbots Could Be a $1 Trillion Industry By 2030

by Shelt Garner
@sheltgarner

It definitely seems as though we’re lurching into a new, unknown future at the moment. It seems as though soon enough LLMs will be stuck into humanoid androids and that will be that — we will experience a great deal of future shock.

At the forefront of all of this, as always, will be sex. It’s possible that Incels will buy sexbots and never look back. They will be quite content with the passive nature of LLMs and will never again see any reason to interact with a human woman.

What’s worse, the broader sexbot revolution could be such that the already declining Western population could plummet in a pretty short amount of time. I suppose that might do something good for the environment, but, otherwise, yikes!

People just aren’t ready for how big a change the world will see with an LLM / android combo. Once the price point of about $20,000 is reached for that combination, then all bets are off. There are some obvious hiccups to the rollout of all of this, but the future is coming, no matter what.

The thing that has to happen first, of course, is figuring out our political situation. If Trump causes a civil war / revolution, that is going to put a damper on things, for sure. But once all that is sorted out, then we will be in a situation where the future via LLM androids can resume careening towards us.

The Trans Movement As Precursor To The Android Movement

by Shelt Garner
@sheltgarner

Much of the same dynamic that we see swirling around the Trans movement will be seen when the futureshock of androids walking around in public happens. The key thing I’m thinking about is, of course, human-android relationships.

That is going to be enough to really freak some people out. A lot — A LOT — of people are going to go nuts when they see androids and humans holding hands in public. And the thing is — it’s going to happen a lot sooner than you might otherwise think, probably no later than 2030.

Anyway. It’s just something to think about. It’s going to be very profound when it happens, especially in the context of the population collapsing because no babies will be born anymore.

We Have To Begin Contemplating A Future With Domestic Androids

by Shelt Garner
@sheltgarner

We are lurching towards a future where AI-enabled androids will become good enough — and affordable enough — that the average middle class household may have at least one. And it could happen far sooner than any of us are prepared to accept.

At the speed things are going with AI and robotics, the average middle class family may have an AI-enabled android in their home within a decade. These androids will do chores around the house. And, of course, they will probably provide basic security given the demands of the marketplace.

This opens up a whole range of questions that we just don’t know the answer to. What if the android malfunctions and kills someone? What if they stop an intruder and kill them?

Now, obviously, there is a chance that we’re going to have to figure out whether the United States is going to become an autocracy or not before these types of issues come to a head. So, the first use of AI androids could be on some horrific Second American Civil War or Revolution battlefield rather than folding your laundry.

Anyway. I can’t predict the future. Who knows what’s going to happen.

The Economic & Political Implications of The Petite Singularity: AI, Robotics & The Future Of MAGA

By Shelt Garner
@sheltgarner


Any discussion of what I’m about to talk about must address the huge MAGA elephant in the room — it’s very possible that we may have to go through something along the lines of WW3 before we reach any sort of Singularity, petite or otherwise.

But, for the sake of argument, let’s assume that Trump, once re-elected, somehow doesn’t cause WW3 to break out in his effort to remake the global order in his image, killing about 1 billion people in the process. Let’s say we continue to have relative peace and prosperity.

It definitely seems as though by about 2030 some combination of AI and android technology may totally upend and disrupt the currently human-dominated service industry. There would likely be a number of unexpected side effects.

The most obvious is, of course, a real push for some sort of Universal Basic Income. I could see this being paid for by some combination of a 30% VAT and the taxation of the economic activity of non-human actors. (Though I know some Objectivists who think ANY taxation is theft, even if the “thing” being taxed isn’t even human. Ugh.)

The other, less obvious side-effect of the widespread use of AI and androids would be that there would be no DDD (dirty, dangerous, and demeaning) jobs left for undocumented immigrants to fill. Now, apparently some conservatives are soooooo racist that they somehow think that undocumented people will still come to America even if they’re going to starve to death because there are no jobs for them.

But I think any reasonable person can assume that if all service jobs and all DDD jobs are done by AI androids, that might single-handedly solve a few macro problems in the American body politic. There’s a reason why Japan has been obsessed with androids for decades — they knew they would need them because of a lack of new babies.

So, if there was a snap, massive rollout of AI androids in the US, it’s possible that the weird bending of America’s political system that has caused MAGA to warp everything might subside and we might go back to “normal.” Or not — maybe things will just get worse because the plutocrat oligarchs who would control the AI androids would use those very same androids to control everyone.

Ugh.

Predicting The 2030s

by Shelt Garner
@sheltgarner


It seems pretty clear that we have a sense of what the next few years will be like. Starting around 2024 — 2025, the United States, and the globe, will go through some sort of massive political transformation. There’s either a civil war in the United States which will cause WW3, or there’s a MAGA autocracy….which will cause WW3.

But what happens after that? What happens to the globe once the dust settles?

I think the 2030s will be a lot like the 1950s. A lot of technological innovation that was either held back because of WW2 or started in WW2 came to the fore during that decade. And I think the same thing will happen in the 2030s. A lot of the things that Elon Musk is working on now will take off in a big way in the 2030s.

The key thing I think we have to keep an eye on is AI and automation. There are about 3 million high-paying blue collar jobs in transportation that could go “poof!” if Musk’s dream of AI-powered self-driving cars becomes a reality. Meanwhile, on the other end of the spectrum, there is the possibility that much of lower-end fast food preparation will be done by androids.

All that could cause a massive shift in the economics and politics of either war torn America or a MAGA autocracy. The rise of neo-Luddites could be a real possibility in the 2030s.

No one listens to me and I’m always wrong, so lulz.