Of AI & Spotify

Spotify’s discovery engine is undeniably powerful—backed by one of the largest music catalogs on the planet and years of user data—but many listeners still find it falls short when it comes to surfacing truly fresh, unexpected tracks that feel like they were made just for them. YouTube Music, by contrast, often gets praised for its knack for delivering serendipitous gems: hidden indie cuts, live versions, fan uploads, and algorithm-driven surprises that break out of familiar loops more aggressively.

In early 2026, Spotify has made real strides with features like Prompted Playlists (now in beta for Premium users in markets including the US and Canada). This lets you type natural-language descriptions—“moody post-rock for a rainy afternoon drive” or “upbeat ’90s-inspired indie with modern twists”—and it generates (and can auto-refresh daily/weekly) a playlist drawing from your full listening history plus current trends. The AI DJ has evolved too, with voice/text requests for on-the-fly vibe shifts and narration that feels more conversational. These tools shift things toward greater user control and intent-driven curation, moving away from purely passive recommendations.

Yet the frustration persists for some: even with these upgrades, discovery often remains reactive. You still need to know roughly what you’re after, craft a prompt, or start a session. The app’s interface—Home feeds, search, tabs—puts the onus on the user to navigate an overwhelming ocean of 100+ million tracks. True breakthroughs come when the system anticipates needs without prompting, pushing tracks that align perfectly with your evolving tastes but introduce novelty you didn’t even realize you craved.

Imagine a near-future where the traditional Spotify app fades into the background, becoming essentially a backend API: a vast, neutral catalog and playback engine. The real “interface” is your primary AI agent—something like Google’s Gemini or an equivalent OS-level companion—that lives always-on in your phone, wearables, car, or earbuds. This agent wouldn’t wait for you to open an app or type a request. Instead, it quietly observes:

  • Explicit asks (“play something angry and loud” or mood-related voice commands).
  • Passive patterns (full plays vs. quick skips, time-of-day spikes, contextual cues like weather or location).
  • Broader life signals (if permitted: calendar events, recent searches elsewhere, or even subtle mood indicators).

Over time, it builds a deep, dynamic model of your sonic preferences. Then it shifts to proactive mode: gently queuing the exact right track at the exact right moment—“This one’s hitting your current headspace based on recent raw-energy replays and that gray-day dip”—with easy vetoes, explanations (“pulled because of X pattern”), and sliders for surprise level (conservative for safety, bold for bubble-busting).
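A minimal sketch of how such a surprise slider might work under the hood: score each candidate track by blending predicted fit against novelty, weighted by the user's setting. Every name, field, and weight here is hypothetical, purely to make the idea concrete.

```python
# Hypothetical sketch: score candidate tracks by blending predicted
# fit with novelty, weighted by a user-set "surprise" slider in [0, 1].
def score_track(fit: float, novelty: float, surprise: float) -> float:
    """fit and novelty in [0, 1]; surprise=0 favors safe picks, 1 favors novel ones."""
    return (1.0 - surprise) * fit + surprise * novelty

def pick_next(candidates: list[dict], surprise: float) -> dict:
    """Return the highest-scoring candidate under the current surprise setting."""
    return max(candidates, key=lambda t: score_track(t["fit"], t["novelty"], surprise))

tracks = [
    {"id": "familiar-anchor", "fit": 0.9, "novelty": 0.1},
    {"id": "adjacent-scene-discovery", "fit": 0.6, "novelty": 0.9},
]

# A conservative setting keeps the safe pick; a bold one busts the bubble.
print(pick_next(tracks, surprise=0.1)["id"])  # familiar-anchor
print(pick_next(tracks, surprise=0.9)["id"])  # adjacent-scene-discovery
```

The point of the linear blend is that the same candidate pool yields different queues as the slider moves, with no re-fetching required.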

Playlists as we know them could become obsolete. No more static collections; the stream becomes a continuous, adaptive flow curated in real time. The agent pulls from the catalog (via API) to deliver mood-exact sequences, blending familiar anchors with fresh discoveries that puncture echo chambers—perhaps a rising act from an adjacent scene that echoes your saved vibes but pushes into new territory.

This aligns with broader 2026 trends in music streaming: executives at major platforms describe ambitions for “agentic media” experiences—interactive, conversational systems you “talk to” that understand you deeply and put you in control. We’re seeing early signs in voice-enabled features, AI orchestration, and integrations across ecosystems. Google’s side is advancing too, with Gemini gaining music-generation capabilities (short tracks from prompts or images via models like Lyria), hinting at hybrid futures where streamed discoveries blend with light generative elements for seamless mood transitions.

The appeal is obvious: effortless, psychic-level personalization in a world of infinite choice. Discovery stops being a chore and becomes ambient magic—a companion that scouts ahead, hands you treasures, and evolves with you. Risks remain (privacy concerns around deep context access, notification fatigue, occasional misreads), but with strong controls—toggleable proactivity, transparent reasoning, easy feedback—it could transform streaming from good to genuinely revelatory.

For now, Spotify’s current tools are a solid step forward, especially if you’re already invested in its ecosystem. But the conversation points to something bigger on the horizon: not just better algorithms, but agents that anticipate and deliver the music you didn’t know you needed—until it starts playing.

A Hardware-First Approach to Enterprise AI Agents: Running Autonomous Intelligence on a Private P2P Network

Editor’s Note: I got Grok to write this up for me.

In the rush toward cloud-hosted AI and centralized agent platforms, something important is getting overlooked: true enterprise control demands more than software abstractions. What if the next wave of secure, scalable AI agents lived on dedicated hardware appliances, connected via a peer-to-peer (P2P) VPN mesh? No single point of failure, no recurring cloud bills bleeding your budget, and full ownership of the stack from silicon to inference.

This isn’t just another edge computing pitch. It’s a vision for purpose-built devices—think compact, rugged mini-servers or custom gateways—that run autonomous AI agents locally while forming a resilient, encrypted overlay network across an organization’s sites, partners, or even remote workers.

Why Dedicated Hardware Matters for AI Agents

Modern AI agents aren’t passive chatbots; they’re proactive systems that reason, plan, use tools, remember context, and act across domains. Running them efficiently requires low-latency access to data, consistent compute, and isolation from noisy shared environments.

Cloud providers offer convenience, but they introduce latency spikes, data egress costs, compliance headaches, and the ever-present risk of vendor lock-in or outages. Edge devices help, but most are general-purpose IoT boxes or repurposed servers—not optimized for sustained agent workloads.

A dedicated hardware appliance changes that:

  • Hardware acceleration built-in: GPUs, NPUs, or efficient AI chips (like those in modern edge SoCs) handle inference and light fine-tuning without throttling.
  • Air-gapped security baseline: The device enforces strict boundaries—no shared tenancy means fewer side-channel risks.
  • Always-on reliability: Battery-backed power, redundant storage, and watchdog timers keep agents responsive 24/7.
  • Physical ownership: Enterprises deploy, update, and decommission these boxes like any other network appliance.

Layering a P2P VPN Mesh for True Decentralization

The real magic happens when these appliances connect not through a central hub, but via a P2P VPN overlay. Tools like WireGuard, combined with mesh extensions (or protocols inspired by Tailscale, ZeroTier, or even more decentralized designs), create a private, self-healing network.

  • Zero-trust by design: Every peer authenticates mutually; traffic never traverses untrusted intermediaries.
  • Resilience against disruption: If one site goes offline, agents reroute dynamically—perfect for distributed teams, branch offices, or supply-chain partners.
  • Low-latency collaboration: Agents share insights, delegate subtasks, or federate learning without funneling everything to a distant data center.
  • Privacy-first data flows: Sensitive enterprise data stays within the mesh; no mandatory upload to third-party clouds.
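To give the mesh idea some concrete flavor: a full-mesh WireGuard overlay is just each appliance listing every other appliance as a peer. The keys, addresses, and hostnames below are placeholders, not a real deployment.

```ini
# /etc/wireguard/wg0.conf on appliance A (placeholder keys and addresses)
[Interface]
Address = 10.42.0.1/24
PrivateKey = <appliance-A-private-key>
ListenPort = 51820

# Appliance B at a branch office; one [Peer] block per mesh member
[Peer]
PublicKey = <appliance-B-public-key>
AllowedIPs = 10.42.0.2/32
Endpoint = branch-b.example.com:51820
PersistentKeepalive = 25
```

Mesh tools like Tailscale or ZeroTier automate exactly this peer bookkeeping, plus NAT traversal and key distribution, on top of the same underlying model.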

Imagine a manufacturing firm where agents on factory-floor appliances monitor equipment, predict failures, and coordinate with logistics agents at warehouses—all over a private P2P tunnel. Or a financial services org where compliance agents cross-check transactions across global branches without exposing raw data externally.

Practical Building Blocks (2026 Edition)

Prototyping this today is surprisingly accessible:

  • Hardware base: Start with something like an Intel NUC, NVIDIA Jetson, or AMD-based mini-PC with AI accelerators. Scale to rack-mountable units for production.
  • OS and runtime: Lightweight, secure Linux distro (Ubuntu Core, Fedora IoT) running containerized agents via Docker or Podman.
  • Agent frameworks: LangGraph, CrewAI, or AutoGen for orchestration; Ollama or similar for local LLMs.
  • P2P networking: WireGuard + mesh tools, or emerging decentralized options that handle NAT traversal and discovery automatically.
  • Management layer: Simple OTA updates, remote attestation for trust, and observability via Prometheus/Grafana.
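Putting the agent-framework and local-LLM pieces together, the core loop on an appliance is small: a planner picks a tool, the loop executes it, and the observation feeds the next step. The planner below is a stub standing in for a local model (the kind Ollama would serve on-device); the tool names and the stub's logic are hypothetical.

```python
# Minimal agent-loop sketch. stub_planner stands in for a local LLM
# call; TOOLS stands in for real integrations (sensors, mesh peers).
def stub_planner(goal: str, observations: list[str]) -> str:
    """Stand-in for a local LLM call; returns the next tool to run."""
    if not observations:
        return "read_sensor"
    return "raise_alert" if "vibration=high" in observations[-1] else "done"

TOOLS = {
    "read_sensor": lambda: "vibration=high",           # fake equipment reading
    "raise_alert": lambda: "alert sent to warehouse",  # fake mesh-peer notification
}

def run_agent(goal: str, max_steps: int = 5) -> list[str]:
    """Run the plan -> act -> observe loop until the planner says done."""
    observations: list[str] = []
    for _ in range(max_steps):
        action = stub_planner(goal, observations)
        if action == "done":
            break
        observations.append(TOOLS[action]())
    return observations

print(run_agent("watch the press line"))
```

Swapping the stub for a real model call is the whole point of the appliance design: the loop, the tools, and the data never leave the box.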

Challenges exist—peer discovery in complex networks, power/thermal management, and ensuring agents don’t spiral into unintended behaviors—but these are solvable with good engineering, much like early SDN or zero-trust gateways overcame similar hurdles.

The Bigger Picture: Reclaiming Control in the Agent Era

As agentic AI becomes table stakes for enterprises, the question isn’t “Will we use AI agents?” but “Who controls them?” Centralization trades convenience for vulnerability. A hardware-first, P2P approach flips the script: intelligence at the edge, connectivity without intermediaries, and sovereignty over data and decisions.

This isn’t fringe futurism—it’s a logical extension of trends in edge AI, decentralized networking, and zero-trust architecture. The pieces exist today; what’s missing is widespread recognition that dedicated hardware + P2P can deliver enterprise-grade agents without the cloud tax or trust issues.

If you’re building in this space or just thinking aloud like I am, the time to experiment is now. The future of enterprise AI might not live in hyperscaler datacenters—it might sit quietly on a shelf in your wiring closet, talking securely to its peers across the organization.

A Review Of Some New LLM Models

by Shelt Garner
@sheltgarner

Gemini 3.1
This model is promising. It just came out today. My only complaint, so far, is how God-awful slow it is. That could be because it’s the first day. I don’t know yet.

Claude 4.6
This used to be my go-to LLM, but it seems…different. Like it got NERFed or something. Something is different about it so it’s not as much fun to use. And it just seems dumber when it comes to understanding what I want.

Something Big Is Going To Happen Soon (Maybe?)

by Shelt Garner
@sheltgarner

I don’t know what it is, but something big is going to happen. It will be interesting to see what actually happens. It’s going to happen soon. Within a few weeks, I think.

I keep getting this weird feeling — associated with journalism — that I can’t shake. Am I going back to being a journalist? Or is it something more sick and sad, like someone launching some sort of investigation into ME?

Who knows.

Watch Out For That Last Step

by Shelt Garner
@sheltgarner

I need to take a deep breath and be really careful about the second half of the novel I’m working on. Things have started to change — drift, if you will — on a structural basis and I need to keep an eye on that.

The last thing I need is for things to get out of control and in a few weeks I realize the whole thing has collapsed. (This has happened many, many, many times since I started working on a novel of some sort.)

But I’m reasonably fine, I think. I just need to be in the right headspace. I need to really be clear minded about things and not rush into writing new scenes.

The Future of Hollywood in the Age of Generative AI

Imagine returning home in 2036 after a long day. Rather than streaming yet another algorithmically optimized series, you simply prompt your personal Knowledge Navigator AI agent to craft a two-hour feature film tailored precisely to your life—your struggles, triumphs, and innermost conflicts rendered in stunning, cathartic detail. You settle in to watch this bespoke, high-fidelity production, scarcely pausing to reflect that, not long ago, creating a comparable “general-interest” movie required the coordinated efforts of thousands of artists, technicians, and executives working within an elaborate industrial framework.

As someone who deeply admires the magic of show business—the glamour of the Oscars, the storied legacy of Hollywood, the collaborative artistry behind the screen—I find this vision both exhilarating and profoundly unsettling. The astonishing pace of improvement in generative AI video models suggests we may need to confront the possibility that traditional filmmaking, as we know it, could soon become obsolete.

Proponents of these technologies often remark that “this is the worst it will ever be,” pointing to relentless advancements. In early 2026, models such as Kling 3.0, Sora 2, Veo 3.1, Runway Gen-4, and emerging tools like ByteDance’s Seedance 2.0 already produce cinematic clips with native audio, realistic physics, lip-sync, and sophisticated camera work—often spanning 10–25 seconds or more from a single prompt. While full two-hour coherent narratives from one prompt remain beyond current capabilities, the trajectory is unmistakable: exponential gains in length, consistency, and quality could make such feats feasible in the near term, potentially within months or a few short years.

Faced with this disruption, the film industry confronts three primary paths forward.

First, the industry could simply accept contraction. Major studios and theaters might shrink dramatically, with many venues closing or repurposing. A once multi-billion-dollar ecosystem could dwindle to a fraction of its size, sustained only by a niche of boutique, human-crafted films. The bulk of viewing would shift to on-demand, AI-generated “slop”—personalized, instantly produced content delivered by agents responding to casual prompts.

Second, aggressive regulatory intervention could attempt to preserve human labor. The federal government might impose job protections or mandates requiring major productions to involve human crews, writers, actors, and directors. Hollywood could lobby intensely for such safeguards. However, in the current political environment—marked by skepticism toward “blue Hollywood” from influential figures—this approach faces steep hurdles and seems unlikely to succeed at scale.

Third, and perhaps most realistically, the industry could proactively adapt by embracing AI. Studios and talent agencies might partner with leading AI developers to ensure their brands, intellectual property, and expertise shape the tools that generate the coming wave of content. At minimum, this positions legacy players to retain relevance and revenue streams. More ambitiously, Hollywood could pivot toward what remains irreplaceably human: live performance. Broadway-style theater, immersive stage productions, and in-person experiences could become the primary domain for actors and performers, evolving the industry rather than allowing it to vanish entirely. AI might handle scalable, personalized visual entertainment, while live theater preserves the communal, embodied essence of storytelling.

Regardless of the path chosen, change is accelerating. The humans who have built their careers in film—writers, directors, crew members, and performers—face genuine risks of displacement. “Hollywood” as a centralized, high-budget industrial complex may gradually fade, supplanted by a decentralized, democratized landscape of AI-augmented creation.

It remains to be seen how this transformation unfolds, but one thing is clear: the era of mass, collaborative filmmaking as the default for popular entertainment may soon belong to history. The question is not whether AI will reshape the industry, but how creatively and humanely we navigate the transition.

(Maybe) We Should Just Let Hollywood Die

by Shelt Garner
@sheltgarner

The year is 2036 and you come home from work. Instead of sitting down to watch Netflix slop, you prompt your Knowledge Navigator AI Agent to produce a two hour movie that features you and your problems in a way that you find cathartic. You watch the high quality AI slop without thinking about the fact that there was a time when thousands of people worked together to create general-interest movies, the kind of shared concepts that once served as the framework of “reality.”

I really love showbiz. I love the Oscars and Hollywood and all that jazz. But, alas, given the speed at which generative AI video models are improving, maybe we should just give up.

Maybe Hollywood is, like, the horse whip industry of the 21st Century.

I say this in the context of the whole “this is the worst it will ever be” comments you hear from generative AI promoters. And it’s happening fast. It could be that very soon — months even — a whole two hour film might be produced from a single prompt.

Now, there are three options in the face of this.

One is to just give up. Just circle the wagons as the industry slowly (quickly?) contracts. Theatres will close or be converted. And soon a multi-billion dollar industry will be measured in…millions? There will be a tiny sliver of boutique type movies produced by humans while the vast majority of films will be done on the fly via AI agents that have been prompted to do this or that story.

Another idea is job carve outs through regulation by the Federal government. Given how Tyrant Trump hates very blue Hollywood, I see difficulty in this being enacted. But Hollywood, as it contracts, might lobby Washington really hard to make it so major movies absolutely have to be produced by humans. I like this idea, but, lulz, no one listens to me and I don’t see it being very practical given the political climate.

The last idea is to embrace the changes proactively. Hollywood could work with AI companies so at least their names will be on the software used to create all the AI slop that is on its way. Also, Hollywood could put all its actors on the lifeboats of a sinking ship by realizing that live theatre, like Broadway, is the future of the acting profession. As such, Hollywood would evolve into Broadway, rather than evaporate altogether.

Anyway, things are moving fast, regardless. It will be interesting to see what happens. I do worry about the humans involved in Hollywood, though. It’s very possible that “Hollywood” as we know it…will just fade away.

The Impact Of AI On Politics Going Forward

The potential impact of artificial intelligence (AI) on American politics in the coming years is fraught with uncertainty, characterized by numerous “known unknowns.” Too many variables are in play to predict outcomes with confidence.

The pivotal factors likely hinge on two interrelated developments: 1) whether the current AI investment bubble bursts, and 2) the extent to which AI displaces jobs across the economy. These elements could profoundly shape political dynamics, yet their trajectories remain unclear.

A key scenario involves the broader economy. If AI continues to drive sustained growth, rather than triggering abrupt disruption, political responses may remain measured. However, if the AI bubble bursts dramatically, potentially coinciding with the 2028 presidential election cycle and precipitating a financial crisis akin to 2008, the fallout could shift the political center toward the left. Widespread economic pain might revive demands for stronger social safety nets, regulatory oversight of technology, and progressive policies.

Conversely, if the bubble holds and AI rapidly consumes jobs without a timely emergence of replacement opportunities, the political system could face intense pressure to address mass displacement. Issues such as universal basic income (UBI), targeted job protections, retraining programs, and reforms to taxation or welfare could rise to the forefront. Recent discussions among policymakers, economists, and tech leaders already highlight UBI as a potential response to AI-driven unemployment, particularly in white-collar sectors, underscoring how quickly these once-fringe ideas could become central to partisan debates.

A third, more speculative but potentially transformative factor is the question of AI consciousness. Should widespread belief emerge that advanced AI systems possess genuine sentience or self-awareness, it could upend political alignments. Center-left voices might advocate for AI rights, ethical protections, or even legal personhood, framing the issue as one of moral and humanitarian concern. Center-right perspectives, in contrast, could dismiss such claims, viewing AI strictly as a tool and resisting any attribution of rights that might constrain innovation or economic utility. This divide would introduce novel fault lines into existing ideological debates.

Ultimately, the trajectory depends on how these uncertainties unfold. A major economic shock—whether from a bubble burst or unchecked job loss—could dramatically heighten public engagement with politics, though such awakenings often arrive too late to avert significant hardship.

All of these considerations rest on the assumption of continued free and fair elections in the United States, a premise that, as of now, remains far from assured. But, regardless, only time will reveal the full extent of AI’s influence on the American political landscape.

A Mockup Of A Hypothetical MindOS ‘Node’

The Last Question

by Shelt Garner
@sheltgarner

It definitely seems as though This Is It. The USA is going to either become a zombie democracy like Hungary (or Russia) or we’re going to have a civil war / revolution.

We’re going to find out later this year one way or another, now that the SAVE Act seems like it’s going to pass.

At the moment, I think we’re probably going to just muddle into an autocratic “managed democracy” and not until people like me are literally being snatched in the street will anyone notice or care what’s going on.

But by then, of course, it will be way, way too late.

So there you go. Get out of the country if you have the means.