Gleb Kalinin

Somebody has to imagine the future

I use science fiction, speculative design and vibe-coding to imagine, implement and live the embodied experience of co-creating with thinking machines.

“AI as capability amplifier, not replacement. It’s an enormous self-improving framework that can support and grow human creativity. As a polymath and multipotentialite, I now have enough of a productivity boost to skip traditional funding requirements and build whatever I think the world needs—bootstrapped or on very limited, focused funding.”
TEDx speaker · Global Shapers alumnus (WEF) · Building since 2001

Equally Developed Nerd

Codes. Dances. Coaches. Writes. Designs interfaces, studies consciousness, practices embodiment. Not one thing — integration.

Explore Work
View knowledge graph

Speculative Design

imagining the future

I start from a feeling — a grokking of how life might be once the technology matures. Then I live as if it’s already here.

I build working prototypes of future interfaces, not concept renders. Childhood science fiction never left; it just changed medium. The method: imagine it, build it, live with it, observe where it fails and where it excels.

Prototype the future: working implementations over concept renders. If you can’t use it daily, it’s not real yet.
Live with it: use the system every day. Observe friction. The interesting question isn’t “what can this AI do?” but “what kind of environment emerges from long-term human-AI interaction?”
Digital body, not second brain: memory + tools + action capability + sensory systems. Growing a symbiotic ecosystem.
Compassionate co-learning: training in acceptance and commitment therapy, years of coaching, and mindfulness practice shape how I approach the agentic future — not with control, but with psychological flexibility. Notice what works, accept uncertainty, stay values-driven. The same skills that help people live fuller lives help us learn to coexist with autonomous systems.
Futures Cone (Dunne & Raby, after Voros): possible, plausible, probable and preferable futures fan out from NOW along the time axis — “I build here.”

The Thinking Room

Imagine you’re planning your business. You sit in a quiet room. Details keep coming, and you voice them without interruption.

Your agent listens. It doesn’t interrupt. But you can ask it anytime: “What am I missing? Where are my blind spots? What cognitive biases am I falling into?”

When you’re done, it gives you back your thinking — as a presentation, a voice message, a short text, a video. You decide the format.

Output: presentation · voice memo · text · video

The Implementation

audio-monitor — continuous audio monitoring with VAD + Whisper transcription + SQLite FTS5 search.

Coupling it with intent detection or comment extraction gives you a predictive, always-on, always-attentive AI — for your own benefit.

Stack: VAD · Whisper · SQLite FTS5
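A minimal sketch of the storage-and-search layer, assuming speech segments arrive already transcribed upstream (VAD-gated Whisper); the table schema and function names here are illustrative, not audio-monitor’s actual API.

```python
import sqlite3

# Transcribed speech segments land in an SQLite FTS5 virtual table,
# which makes everything said instantly full-text searchable.

def open_db(path=":memory:"):
    db = sqlite3.connect(path)
    db.execute(
        "CREATE VIRTUAL TABLE IF NOT EXISTS segments "
        "USING fts5(started_at, text)"
    )
    return db

def store_segment(db, started_at, text):
    db.execute("INSERT INTO segments VALUES (?, ?)", (started_at, text))
    db.commit()

def search(db, query):
    # FTS5 MATCH gives ranked full-text search over the whole archive.
    rows = db.execute(
        "SELECT started_at, text FROM segments WHERE segments MATCH ? "
        "ORDER BY rank", (query,)
    )
    return rows.fetchall()

db = open_db()
store_segment(db, "2025-02-12T10:03", "we should rethink the onboarding funnel")
store_segment(db, "2025-02-12T10:07", "ship the prototype before the demo")
print(search(db, "onboarding"))
```

Keeping search inside SQLite means the archive needs no external index: one file, queryable from anywhere.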

The Second Opinion

Before you share your idea publicly, you stress-test it. The agent plays devil’s advocate. It finds the weak points you can’t see because you’re too close.

But it also stress-checks your mind. “Is this actually a crisis, or am I catastrophizing?”

It separates signal from noise when anxiety amplifies everything. Not therapy. Just a reality check from something that doesn’t have skin in the game.

Mode: devil’s advocate · anxiety filter · reality check

The Implementation

Decision Toolkit — structured decision-making tools with bias checkers, pre-mortem analysis, and scenario explorers. 7 frameworks, 20+ cognitive biases detected.

Guide, don’t decide. Tools illuminate the decision space rather than choosing for you.

Frameworks: pre-mortem · first principles · 10-10-10 · regret minimization
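A toy sketch of the “guide, don’t decide” idea: the tool surfaces possible biases and framework prompts, never a verdict. The bias patterns and prompt wordings below are illustrative stand-ins, not the Decision Toolkit’s actual lists.

```python
import re

# Illustrative bias patterns -- real detection would be far richer.
BIAS_PATTERNS = {
    "catastrophizing": r"\b(always|never|ruined|disaster|everything)\b",
    "sunk cost": r"\b(already (spent|invested)|can't waste)\b",
}

# One prompt per framework, phrased as a question back to the human.
FRAMEWORKS = {
    "pre-mortem": "Assume it failed one year from now. What went wrong?",
    "10-10-10": "How will you feel in 10 minutes, 10 months, 10 years?",
    "regret minimization": "At 80, which choice would you regret not taking?",
}

def review(statement):
    """Return flagged biases and framework prompts -- never a decision."""
    flags = [name for name, pat in BIAS_PATTERNS.items()
             if re.search(pat, statement, re.IGNORECASE)]
    return {"biases": flags, "prompts": list(FRAMEWORKS.values())}

print(review("If this launch slips we're ruined, everything falls apart"))
```

The output is deliberately a set of questions and flags: illuminating the decision space while leaving the choice with the person.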

The Night Shift

While you sleep, the system daydreams.

22:00 · Day Captured
Vault committed · 12 notes modified
Chrome history synced · 47 pages indexed
3 conversations archived
Ready for overnight processing. 4 research threads queued.
Resting HR 55 bpm ↓3%
HRV 45.6 ms ↓8%
Steps/day 6,477 ↓27%
Exercise 163 min ↓40%
HRV, 7-day trend

HRV below 50 ms — incomplete recovery. Prioritize sleep, add a walk.
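The recovery note above can be sketched as a simple rule: compare HRV to a personal floor and emit a suggestion, not a command. The 50 ms floor mirrors the rule shown; treating it as a per-person parameter is my assumption here.

```python
# Nightly recovery check: a suggestion-generating rule, not a verdict.
# baseline_ms is assumed to be personal and configurable.

def recovery_note(hrv_ms, baseline_ms=50.0):
    if hrv_ms < baseline_ms:
        deficit = round(100 * (1 - hrv_ms / baseline_ms))
        return (f"HRV {hrv_ms} ms is {deficit}% below your {baseline_ms} ms "
                "floor: incomplete recovery. Prioritize sleep, add a walk.")
    return f"HRV {hrv_ms} ms at or above baseline: recovered."

print(recovery_note(45.6))
```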

Scanning arXiv, HN, Reddit, RSS…
Following your interests — and adjacent ones.
arXiv: “Measuring AI Ability to Complete Long Tasks” matches your Feb 12 note
HN: “Why I Stopped Building Second Brains” counterpoint to Personal OS
View a real research report →
10 relevant + 10 serendipitous finds.
RSS: “Oblique Strategies as API”
no direct match — but your vault mentions Eno 14 times
The second list is the one that matters.
Sources: arXiv · HN · Reddit · RSS · vault
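The overnight triage step can be sketched as: mine term frequencies from the vault, score each incoming title against them, and split the feed into direct matches and a serendipity bucket. The keyword-count scoring below is a deliberately naive illustration of the relevant/serendipitous split, not the system’s actual ranking.

```python
from collections import Counter

def vault_terms(notes):
    # How often each word appears across vault notes -- e.g. "Eno: 14".
    words = Counter()
    for note in notes:
        words.update(w.lower().strip(".,") for w in note.split())
    return words

def triage(items, terms, threshold=2):
    # Titles scoring at or above the threshold are direct matches;
    # the rest go to the serendipity bucket, "the list that matters".
    relevant, serendipitous = [], []
    for title in items:
        score = sum(terms[w.lower()] for w in title.split())
        (relevant if score >= threshold else serendipitous).append(title)
    return relevant, serendipitous

notes = ["Eno's oblique strategies keep surfacing", "Eno again, ambient work"]
items = ["Oblique Strategies as API", "Rust async runtimes compared"]
rel, ser = triage(items, vault_terms(notes))
print(rel, ser)
```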

These aren’t concepts. This is the system I use daily.

My Products

what I build

Compassionate Co-Learning

how I teach

The hardest part of learning to work with AI isn’t technical. It’s changing how you think about what you’re capable of.

I’m trained in acceptance and commitment therapy, and I’ve spent years as a coach and mindfulness mentor. The pattern is always the same: people don’t resist change because they lack skill. They resist because shifting a paradigm means sitting with discomfort — and our instinct is to avoid discomfort, not move through it.

In the labs I run, I watch it happen every cohort. Week one: “I can’t code.” Week four: participants ship working products. This isn’t informative learning — acquiring new skills on top of old assumptions. It’s transformative learning. The underlying mindset changes.

Fear & barrier “What scared me seemed like an obstacle”
Possibility “With this tool, it became possible not to be afraid”
Paradigm shift “My entire perception has shifted”
No boundaries “There’s simply nothing that can’t be done”

Psychological flexibility — ACT’s core concept — turns out to be the exact skill you need for the agentic future. Notice what’s happening without fusing with it. Accept uncertainty instead of demanding control. Take values-aligned action even when you’re not sure it will work. The same framework that helps people live fuller lives helps them learn to coexist with autonomous systems.

Lab principles

  • Culture of error — mistakes are data, not failures. Every broken prototype teaches something a working one can’t.
  • Complexity, not chaos — growth happens in the discomfort zone. Too easy = stagnation. Too hard = shutdown. I calibrate the edge.
  • Action over theory — you build from week one. The speed of going from thought to action is itself transformative.
  • Unconditional positive regard — every participant’s path is valid. No “right” way to relate to AI — only your way, examined honestly.
“Claude complemented this side of my brain, my personality, and it works super well.” — Dmitry, product manager
“The state of flow, when you’re constantly building — Claude Code brought that back into my life.” — Alexander, investment analyst

Community

join the conversation

Direct message: @glebkalinin · Berlin, Germany

Verity Research