Sunday, November 09, 2025

Pseudo TDD with AI

Exploring Test-Driven Development with AI Agents

Over the past few months, I've been experimenting with a way to apply Test-Driven Development (TDD) by leveraging artificial intelligence agents. The goal has been to maintain the essence of the TDD process (test, code, refactor) while taking advantage of the speed and code generation capabilities that AI offers. I call this approach Pseudo TDD with AI.

How the Process Works

The AI agent follows a set of simple rules:

  1. Write a test first.
  2. Run the test and verify that it fails.
  3. Write the production code.
  4. Run the tests again to verify that everything passes.
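
To make the loop concrete, here is a minimal, hypothetical illustration of a single iteration in Python (the function and the test are invented for this example; they don't come from a real project):

# Steps 1-2: the agent writes the test first and runs it; it fails because
# slugify() does not exist yet.
def test_slugify_replaces_spaces_with_dashes():
    assert slugify("pseudo tdd with ai") == "pseudo-tdd-with-ai"

# Steps 3-4: the agent then writes just enough production code and reruns
# the suite until everything is green.
def slugify(text: str) -> str:
    return text.strip().lower().replace(" ", "-")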

I use the rules I defined in my base setup for augmented coding with AI. With these base rules, I can get both the Cursor agent and Claude Code to perform the TDD loop almost completely autonomously.

The refactoring step is not part of the automated loop. Instead, I request it periodically as I observe how the design evolves. This manual control lets me adjust the design without slowing down the overall pace of work.

Confidence Level and Limitations

The level of confidence I have in the code generated through this process is somewhat lower than that of TDD done manually by an experienced developer. There are several reasons for this:

  • Sometimes the agent doesn't follow all the instructions exactly and skips a step.
  • It occasionally generates fewer tests than I would consider necessary to ensure good confidence in the code.
  • It tends to generalize too early, creating production code solutions that cover more cases than have actually been tested.
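
A hypothetical example of that last point: the agent is asked for a single test case, but the production code it writes quietly handles inputs nobody has tested yet (the names below are invented for illustration):

# The only test the agent wrote:
def test_parse_price_parses_a_plain_number():
    assert parse_price("10") == 10.0

# The production code it generated: currency symbols and thousands separators
# are "supported", but none of those cases is covered by a test.
def parse_price(text: str) -> float:
    cleaned = text.strip().lstrip("$€").replace(",", "")
    return float(cleaned)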

Despite these issues, the process is very efficient and the results are usually satisfactory. However, it still doesn't match the confidence level of fully human-driven TDD.

Supporting Tools

To compensate for these differences and increase confidence in the code, I rely on techniques such as mutation testing, which has proven very useful for detecting parts of the code that weren't adequately covered by tests and has helped me strengthen the reliability of the process.
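
For anyone who hasn't used it: mutation testing introduces small changes ("mutants") into the production code and checks whether any test fails. If a mutant survives with all tests green, you've found a gap. A minimal, self-contained illustration (not code from any of my projects):

def apply_discount(price: float, percent: float) -> float:
    return price * (1 - percent / 100)  # original line

def apply_discount_mutant(price: float, percent: float) -> float:
    return price * (1 + percent / 100)  # a typical mutation: '-' flipped to '+'

def test_discount_is_subtracted():
    # This assertion fails when run against the mutant, so the mutant is "killed".
    # Without an assertion like this, the mutant would survive and reveal the gap.
    assert apply_discount(100.0, 50.0) == 50.0

In practice a tool (mutmut, in my case) generates the mutants and runs the test suite against each one automatically.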

Alternative Approaches Explored

In the early phases of experimentation, I tried a different approach: directing the TDD process myself within the chat with the AI, step by step. It was a very controlled flow:

"Now I want a test for this."
"Now make it pass."
"Now refactor."

This method made the process practically equivalent to traditional human TDD, as I had complete control over every detail. However, it turned out to be slower and didn't really leverage the AI's capabilities. In practice, it worked more as occasional help than as an autonomous process.

Next Steps

From the current state of this Pseudo TDD with AI, I see two possible paths forward:

  1. Adjust the rules and processes so the flow comes closer to human TDD while maintaining AI speed.
  2. Keep the current approach while observing and measuring how closely it actually approximates a traditional TDD process.

In any case, I'll continue exploring and sharing any progress or learnings that emerge from this experiment. The goal is to keep searching for that balance point between efficiency and confidence that collaboration between humans and AI agents can offer.

Related Content

My Base Setup for Augmented Coding with AI

Repository: eferro/augmentedcode-configuration

Over the last few months I've been experimenting a lot with AI-augmented coding — using AI tools not as replacements for developers, but as collaborators that help us code faster, safer, and with more intention.

Most of the time I use Cursor IDE, and I complement it with command-line agents such as Claude Code, Codex CLI, or Gemini CLI.

To make all these environments consistent, I maintain a small open repository that serves as my base configuration for augmented coding setups:

👉 eferro/augmentedcode-configuration

Purpose

This repository contains the initial configuration I usually apply whenever I start a new project where AI will assist me in writing or refactoring code.

It ensures that both Cursor and CLI agents share the same base rules and principles — how to write code, how to take small steps, how to structure the workflow, etc.

In short: it's a simple but powerful way to keep my augmented coding workflow coherent across tools and projects.

Repository structure

augmentedcode-configuration/
├── .agents/
│   └── rules/
│       ├── base.md
│       └── ai-feedback-learning-loop.md
├── .cursor/
│   └── rules/
│       └── use-base-rules.mdc
├── AGENTS.md
├── CLAUDE.md
├── GEMINI.md
├── codex.md
└── LICENSE


.agents/rules/base.md

This is the core file — it defines the base rules I use when coding with AI.

These rules describe how I want the agent to behave:

  • Always work in small, safe steps
  • Follow a pseudo-TDD style (generate a test, make it fail, then implement)
  • Keep code clean and focused
  • Prefer clarity and maintainability over cleverness
  • Avoid generating huge chunks of code in one go

At the moment, these rules are slightly tuned for Python, since that's the language I use most often. When I start a new project in another language, I simply review and adapt this file.

🔗 View .agents/rules/base.md


.agents/rules/ai-feedback-learning-loop.md

This file defines a small feedback and learning loop that helps me improve the rule system over time.

It contains guidance for the AI on how to analyze the latest session, extract insights, and propose updates to the base rules.

In practice, I often tell the agent to "apply the ai-feedback-learning-loop.md" to distill the learnings from the working session, so it can generate suggestions or even draft changes to the rules based on what we learned together.

🔗 View .agents/rules/ai-feedback-learning-loop.md


.cursor/rules/use-base-rules.mdc

This small file tells Cursor IDE to use the same base rules defined above.

That way, Cursor doesn't have a separate or divergent configuration — it just inherits from .agents/rules/base.md.

🔗 View .cursor/rules/use-base-rules.mdc


AGENTS.md, CLAUDE.md, GEMINI.md, codex.md

Each of these files is simply a link (or reference) to the same base rules file.

This trick allows all my CLI agents (Claude Code, Codex, Gemini CLI, etc.) to automatically use the exact same configuration.

So regardless of whether I'm coding inside Cursor or launching commands in the terminal, all my AI tools follow the same guiding principles.

🔗 AGENTS.md
🔗 CLAUDE.md
🔗 GEMINI.md
🔗 codex.md


How I use it

Whenever I start a new project that will involve AI assistance:

  1. Clone or copy this configuration repository.
  2. Ensure that .agents/rules/base.md fits the project's language (I tweak it if I'm not working in Python).
  3. Connect Cursor IDE — it will automatically load the rules from .cursor/rules/use-base-rules.mdc.
  4. When using Claude Code, Codex, or Gemini CLI, they all read the same base rules through their respective .md links.
  5. During or after a session, I often run the AI Feedback Learning Loop by asking the agent to apply the ai-feedback-learning-loop.md so it can suggest improvements to the rules based on what we've learned.
  6. Start coding interactively: I ask the AI to propose small, incremental changes, tests first when possible, and to verify correctness step by step.

This results in a workflow that feels very close to TDD, but much faster. I like to call it pseudo-TDD.

It's not about strict process purity; it's about keeping fast feedback loops, learning continuously, and making intentional progress.

Why this matters

When working with multiple AI agents, it's surprisingly easy to drift into inconsistency — different styles, different assumptions, different "personalities."

By having one shared configuration:

  • All tools follow the same Lean/XP-style principles.
  • The workflow remains consistent across environments.
  • I can evolve the base rules once and have every agent benefit from it.
  • It encourages me (and the agents) to think in small steps, test early, and refactor often.
  • The feedback learning loop helps evolve the rule system organically through practice.

It's a small setup, but it supports a big idea:

"Augmented coding works best when both human and AI share the same working agreements — and continuously improve them together."

Adapting it

If you want to use this configuration yourself:

  1. Fork or clone eferro/augmentedcode-configuration.
  2. Adjust .agents/rules/base.md for your preferred language or conventions.
  3. Point your IDE or CLI agents to those files.
  4. Use .agents/rules/ai-feedback-learning-loop.md to help your agents reflect on sessions and evolve the rules.
  5. Experiment — see how it feels to work with a single, unified, and self-improving set of rules across AI tools.

Next steps

In an upcoming post, I'll share more details about the pseudo-TDD workflow I've been refining with these agents — how it works, what kinds of tests are generated, and how it compares to traditional TDD.

For now, this repository is just a small foundation — but it's been incredibly useful for keeping all my AI coding environments consistent, adaptive, and fast.

Related Content

Mutation Testing: When "Good Enough" Tests Weren't

For weeks, I had been carrying this nagging doubt. The kind of doubt that's easy to ignore when everything is working. My inventory application had 93% test coverage, all tests green, type checking passing. The code had been built with TDD from day one, using AI-assisted development with Claude and Cursor (with Sonnet 4.5, GPT-4o, and Claude Composer), an approach I like to call "vibecoding". Everything looked solid.

It's not a big application. About 650 lines of production code. 203 tests. A small internal tool for tracking teams and employees. The kind of project where you might think "good enough" is actually good enough.

But something was bothering me.

I had heard about mutation testing years ago. I even tried it once or twice. But let's be honest: it always felt like overkill. The setup was annoying, the output was overwhelming, and the juice rarely seemed worth the squeeze. You had to be really committed to quality (or really paranoid) to go through with it.

This time, though, with AI doing the heavy lifting, I decided to give it another shot.

The First Run: 726 Mutants

I added mutmut to the project and configured it with AI's help. Literally minutes of work. Then I ran it:

$ make test-mutation
Running mutation testing
726/726  🎉 711  ⏰ 0  🤔 0  🙁 0  🔇 15  🔴 0
33.50 mutations/second

Not bad. 711 mutants killed out of 726. That's 97.9% mutation score. I felt pretty good about it.

Until I looked at those 15 survivors.

The 15 Survivors

I ran the summary command to see what had survived:

$ make test-mutation-summary
Total mutants checked: 15
Killed (tests caught them): 0
Survived (gaps in coverage): 15

=== Files with most coverage gaps ===
    5 inventory.services.role_config_service
    4 inventory.services.orgportal_sync_service
    2 inventory.infrastructure.repositories.initiative
    1 main.x create_application__mutmut_6: survived
    1 inventory.services.orgportal_sync_service.x poll_for_updates__mutmut_6: survived
    1 inventory.db.gateway
    1 inventory.app_setup.x include_application_routes__mutmut_33: survived

There they were. Fifteen little gaps in my test coverage. Fifteen cases where my tests weren't as good as I thought.

And remember: this is a 650-line application with 203 tests. If I found 15 significant gaps here, what would I find in a 10,000-line system? Or 100,000?

The thing is, a few months ago, this would have been the end of the story. I would have looked at those 15 surviving mutants, felt slightly guilty, and moved on. The effort to manually analyze each mutation, understand what it meant, and write the specific tests to kill it would have taken days. Maybe a week.

Not worth it for a small internal tool.

But this time was different.

What the Mutants Revealed

Before jumping into fixes, I wanted to understand what these surviving mutants were actually telling me. With AI's help, I analyzed them systematically.

Here's what we found:

In role_config_service (5 survivors):
The service loaded YAML configuration for styling team roles. My tests verified that the service loaded the config and returned the right structure. But they never checked what happened when:

  • The YAML file was missing
  • The YAML was malformed
  • Required fields were absent

The code had error handling for all these cases. My tests didn't verify any of it.
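
To give an idea of the kind of test that kills these mutants, here is a sketch assuming PyYAML and pytest; the real service's API isn't shown in this post, so load_role_config and the exact error behavior are assumptions:

import pytest
import yaml

# Hypothetical entry point: the real role_config_service may expose a different API.
from inventory.services.role_config_service import load_role_config

def test_missing_config_file_raises_a_clear_error(tmp_path):
    with pytest.raises(FileNotFoundError):
        load_role_config(tmp_path / "does-not-exist.yaml")

def test_malformed_yaml_is_rejected(tmp_path):
    broken = tmp_path / "roles.yaml"
    broken.write_text("roles: [unclosed")
    with pytest.raises(yaml.YAMLError):
        load_role_config(broken)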

In orgportal_sync_service (4 survivors):
This service synced data from S3. Tests covered the happy path: download file, process it, done. But mutants survived when we:

  • Changed log messages (I wasn't verifying logs)
  • Skipped metadata checks (last_modified, content_length)
  • Removed directory existence checks

The code was defensive. My tests assumed everything would go right.

In database and infrastructure layers (6 survivors):
Similar story. Error paths that existed in production but were never exercised in tests:

  • SQLite connection failures
  • Invalid data in from_db_row factories
  • 404 responses in API endpoints

Classic case of "it works, so I'm not testing the error cases."
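
As an example, a test for the 404 path might look like the sketch below. The post doesn't say which web framework the tool uses, so FastAPI and the route are assumptions; only create_application comes from the mutation report above:

from fastapi.testclient import TestClient  # assumption: the API is FastAPI-based

from main import create_application  # name taken from the mutation report

def test_unknown_team_returns_404():
    client = TestClient(create_application())
    response = client.get("/teams/does-not-exist")  # hypothetical route
    assert response.status_code == 404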

The pattern was clear: I had good coverage of normal flows, but my tests were optimistic. They assumed the happy path and left the defensive code untested.

This is what deferred quality looks like at the micro level. Like Deming's red bead experiment (where defects came from the system, not the workers), these weren't random failures. They were systematic gaps in how I verified the system. Every surviving mutant is a potential bug waiting in production, interrupting flow when it surfaces weeks later. The resource efficiency trap: "we already have 93% coverage" feels cheaper than spending 2-3 hours... until you spend days debugging a production issue that a proper test would have caught.

The AI-Powered Cleanup

But this time I had AI. So I did something different.

I asked Claude to analyze the surviving mutants one by one, understand what edge cases they represented, and create or modify tests to cover them. I just provided some guidance on priorities and made sure the new tests followed the existing style.

(The app itself had been built using a mix of tools: Claude for planning and architecture, Cursor with different models for implementation. But for this systematic mutation analysis, Claude's reasoning capabilities were particularly useful.)

In about two or three hours, we had addressed all the key gaps:

  • SQLite error handling: I thought I was testing error paths, but I was only testing the happy path. Added proper error injection tests.
  • Factory method validation: My from_db_row factories had validation that was never triggered in tests. Added tests with invalid data.
  • Edge cases in services: Empty results, missing metadata, nonexistent directories. All cases my code handled but my tests never verified.
  • 404 handling in APIs: The code worked, but no test actually verified the 404 response.
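
As a sketch of the error-injection style: the test below simulates a SQLite connection failure. The gateway's actual interface isn't shown in the post, so DatabaseGateway and its methods are hypothetical:

import sqlite3

import pytest

# Hypothetical class name; the module path comes from the mutation report above.
from inventory.db.gateway import DatabaseGateway

def test_connection_failure_is_surfaced(monkeypatch):
    def broken_connect(*args, **kwargs):
        raise sqlite3.OperationalError("unable to open database file")

    # Force every connection attempt to fail so the gateway's error path runs.
    monkeypatch.setattr(sqlite3, "connect", broken_connect)

    with pytest.raises(sqlite3.OperationalError):
        DatabaseGateway("inventory.db").fetch_all("SELECT 1")  # hypothetical API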

The result after several iterations:

$ make test-mutation
Running mutation testing
726/726  🎉 724  ⏰ 0  🤔 0  🙁 2  🔇 0
30.02 mutations/second
$ make test-mutation-summary
Total mutants checked: 2
Killed (tests caught them): 0
Survived (gaps in coverage): 2

=== Files with most coverage gaps ===
    1 inventory.services.role_config_service
    1 inventory.db.gateway

From 15 surviving mutants down to 2. From 97.9% to 99.7% mutation score.

The coverage numbers told a similar story:

Coverage improvements:
- database_gateway.py: 92% → 100%
- teams_api.py: 85% → 100%
- role_config_service.py: 86% → 100%
- employees_api.py: 95% → 100%
- Overall: 93% → 99%
- Total tests: 203 passing

The Shift in Economics

Here's what struck me about this experience: the effort-to-value ratio had completely flipped.

Before AI, mutation testing was something you did if:

  • You had a critical system where bugs were expensive
  • You had a mature team with time to invest
  • You were willing to spend days or weeks on it
  • The application was large enough to justify the investment

For a 650-line internal tool? Forget about it. The math never worked out.

Now? The math is different. The AI did all the analysis work. I just had to review and approve. What used to take days took hours. And most of that time was me deciding priorities, not grinding through mutations.

The barrier to rigorous testing has dropped dramatically. And it doesn't matter if your codebase is 650 lines or 650,000. The cost per mutant is the same.

The Question That Remains

I've worked in teams that maintained sustainable codebases for years. I know what that forest looks like (to use Kent Beck's metaphor). I also know how much discipline, effort, and investment it took to stay there.

Now I'm seeing that same level of quality becoming accessible at a fraction of the cost. Tests that used to require days of manual work can be generated in hours. Mutation testing that was prohibitively expensive is now just another quick pass.

The technical barrier is gone.

So here's the question I'm left with: now that mutation testing costs almost nothing, will we actually use it? Will teams that never had the resources to invest in this level of testing quality start doing it?

Or will we find new excuses?

Because the old excuse ("we don't have time for that level of rigor") doesn't really work anymore. The time cost has collapsed. The tooling is there. The AI can do the heavy lifting.

What's left is just deciding to do it. And knowing that it's worth it.

What I Learned

Three concrete takeaways from this experience:

1. Line coverage lies, even in small codebases: 93% coverage looked great until mutation testing showed me the gaps. Those 15 surviving mutants were in critical error handling paths. After fixing them, I still had 99% line coverage. But now the tests actually verified what they claimed to test. If a 650-line application had 15 significant gaps, imagine larger systems.

2. AI makes rigor accessible for any project size: What used to be prohibitively expensive (manual mutation analysis) is now quick and almost frictionless. The economics have changed. From 15 survivors to 2 in just a few hours of work, most of it done by AI. This level of rigor is no longer reserved for critical systems. It's accessible for small internal tools too.

3. 99.7% is good enough: After the cleanup, I'm left with 2 surviving mutants out of 726. Could I hunt them down? Sure. Is it worth it? Probably not. They're edge cases in utility code that's already well-tested. The point isn't perfection. It's knowing where your gaps are and making informed decisions about them.

The real win isn't the numbers. It's the confidence. I now know exactly which 2 mutants survive and why. That's very different from having 93% coverage and hoping it's good enough.

This was a small project. If it had been bigger, I probably would have skipped mutation testing entirely (too expensive, too time-consuming). But now? Now I can't think of a good reason not to do it. Not when it costs almost nothing and reveals so much.

I used to think mutation testing was for perfectionists and critical systems only. Now I think it should be standard practice for any codebase you plan to maintain for more than a few months.

Not because it's perfect. But because it's no longer expensive.

And when the cost drops to almost zero, the excuses should too.

The AI Prompt That Worked

When facing surviving mutants, this single prompt did most of the heavy lifting:

"Run mutation testing with make test-mutation. For each surviving mutant, use make test-mutation-show MUTANT=name to see the details. Analyze what test case is missing and create tests to kill these mutants, following the existing test style. After adding tests, run make test-mutation again to verify they're killed. Focus on the top 5-10 most critical gaps first: business logic, error handling, and edge cases in services and repositories."

The key: let the AI drive the mutation analysis loop while you focus on reviewing and prioritizing.

Getting Started

If you want to try this:

  1. Add mutmut to your project (5 minutes with AI help)
  2. Create simple Makefile targets to make it accessible for everyone:
    • make test-mutation - Run the full suite
    • make test-mutation-summary - Get the overview
    • make test-mutation-report - See which mutants survived
    • make test-mutation-show MUTANT=name - Investigate specific cases
    • make test-mutation-clean - Reset when needed
  3. Run it weekly, not on every commit (mutation testing is slow)
  4. Use AI to triage survivors (ask it to analyze and prioritize)
  5. Review the top 5-10 gaps as a pair, decide which matter
  6. Start with one critical module, not the whole codebase

Making it easy to run is as important as setting it up. The barrier is gone. What's stopping you?

When NOT to chase 100%: Those final 2 surviving mutants? They're in logging and configuration defaults that are battle-tested in production. Perfect mutation score isn't the goal. Knowing your gaps is. Focus on business logic and error handling first. Skip trivial code.


About This Project

This application was developed using TDD and AI-assisted development with Claude Code and Cursor (using Sonnet 4.5, GPT-5 Codex, and Composer 1). The mutation testing setup and gap analysis were done with Claude's help using mutmut.

Timeline: The entire mutation testing setup and gap analysis took about 2-3 hours with AI assistance.

Final stats: 649 statements, 208 tests, 99% line coverage, 726 mutants tested, 724 killed (99.7% mutation score).

Related Reading

Monday, November 03, 2025

When AI Makes Good Practices Almost Free

Since I started working with AI agents, I've had a feeling that was hard to explain. It wasn't so much that AI made work faster or easier, but something harder to pin down: the impression that good practices were much easier to apply and that most of the friction to introduce them had disappeared. That many things that used to require effort, planning, and discipline now happened almost frictionlessly.

That intuition had been haunting me for weeks, until this week, in just three or four days, two very concrete examples put it right in front of me.

The Small Go Application

This week, a colleague reached out to tell me that one of the applications I had implemented in Go didn't follow the team's architecture and testing conventions. They were absolutely right: I hadn't touched Go in years and, honestly, I didn't know the libraries we were using. So I did what I could, leaning heavily on AI to get a quick first version as a proof of concept to validate an idea.

The thing is, my colleague sent me a link to a Confluence page with documentation about architecture and testing, and also a link to another Go application I could use as a reference.

A few months ago, changing the entire architecture and testing libraries would have been at least a week of work. Probably more. But in this case, with AI, I had it completely solved in just two or three hours. Almost without realizing it.

I downloaded the reference application and asked the AI to read the Confluence documentation, analyze the reference application, and generate a transformation plan for my application. Then I just asked it to apply the plan; no adjustments were needed, only small interactions to decide when to commit or to approve certain operations. In just over two hours, and while barely paying attention, I had the entire architecture changed to hexagonal and all the tests updated to use the other libraries. It felt almost effortless.

It was a small app, maybe 2000 to 3000 lines of code and around 50 tests, but still, without AI, laziness would have won and I would have only done it if it had been absolutely essential.

The cost of keeping technical coherence across applications has dropped dramatically. What used to take serious effort now happens almost by itself.

The Testing That Stopped Hurting

A few days later, I encountered another similar situation, this time in Python. Something was nagging at me: some edge cases weren't well covered by the tests. I decided to use mutmut, a mutation testing library I'd tried years ago but usually skipped because the juice rarely seemed worth the squeeze.

This time I threw in the library, got it configured in minutes with AI's help, and then I basically went on autopilot: I simply generated the mutations and told the AI to go, one by one, analyzing the mutations and creating or modifying the necessary tests to cover those cases. This process required almost no effort from me. The AI was doing all the heavy lifting. I just prioritized a few cases and gave the tests a quick once-over, simply to check that they followed the style of the others.

In a couple of hours, the change in feeling was complete. Night and day. My confidence in the project's tests had shot up and the effort? Practically nothing.

The Intuition That Became Visible

These two examples, almost back-to-back, confirmed the intuition I had been carrying since I started working with AI agents: the economy of effort is changing. Radically.

Refactoring, keeping things coherent, writing solid tests, documenting decisions... None of that matters less now. What has changed is its cost. And when the cost drops to nearly zero, the excuses should vanish too.

If time and effort aren't the issue anymore, why do we keep falling into the same traps? Why do we keep piling on debt and complexity we don't need?

Perhaps the problem isn't technical. Perhaps the problem is that many teams have never really seen what sustainable code looks like, have never experienced it firsthand. They've lived in the desert so long they've forgotten what a forest looks like. Or maybe they never knew in the first place.

Beth Andres-Beck and Kent Beck use the forest and desert metaphor to talk about development practice. The forest has life, diversity, balance. The desert? Just survival and scarcity.

For years I've worked in the forest. I've lived it. I know it's possible, I know it works, and I know it's the right way to develop software. But I also know that building and maintaining that forest was an expensive discipline. Very expensive. It took mature teams, time, constant investment, and a company culture that actually supported it.

Now, with AI and modern agents, building that forest costs almost the same as staying in the desert. The barrier has dropped dramatically. The barrier isn't effort or time anymore. It's just deciding to do it and knowing how.

The question I'm left with is no longer whether it's possible to build sustainable software. I've known that for years. The question is: now that the cost has disappeared, will we actually seize this opportunity? Will we see more teams moving into that forest that used to be out of reach?

Related Content

Sunday, October 26, 2025

Keynote: Desapego radical en la era de la IA

Yesterday, Saturday, October 25, I had the honor of giving the opening keynote at Barcelona Software Crafters 2025 with the talk "Desapego radical en la era de la IA" (radical detachment in the age of AI).

For me, it was the most important talk I have ever given. Barcelona Software Crafters is the software crafters community I respect the most among those I know, the one that has done the most for the profession, and the one that has taught me the most every time I've been able to take part in its conferences or other activities. So giving the opening keynote was a huge honor and an enormous responsibility, because we are at a moment of brutal change in our profession, and I believe the software crafters community and the (real) agile community have a great opportunity to reinvent this profession by adapting, learning as a community, and achieving even more impact.

🎥 The video

Here you can watch the full talk. Thanks to Sirvidiendo Codigo for the recording and for the incredible work they do sharing quality content with the community.



🔗 You can also watch it directly on YouTube: Desapego radical en la era de la IA

And don't miss the Sirvidiendo Codigo channel, where you'll find many more talks and valuable content about software development.

📊 The slides

🔗 If you prefer to view them directly in Google Slides, here is the link: Desapego radical en la era de la IA



What is the talk about?

We are at a moment of brutal change in our profession. AI gives us superhuman speed, but it can also generate more complexity if we don't change the way we work.

The central idea is the need to adopt a mindset of "radical detachment" so we can explore, learn, and adapt. Some key points:

- AI demands that we reinvent software development, with a product mindset at its core.
- We must detach ourselves from what we already know in order to explore new possibilities, blurring the boundaries between roles.
- AI amplifies the impact of good engineering practices.
- The Agile community is essential for this reinvention and for collaborative learning.

The slides include plenty of notes with additional examples and reflections. I recommend taking a look at them while we wait for the video 😉

Audience feedback

The feedback was quite good, and I talked with a lot of people about the topic during the rest of the day. A couple of people told me the talk had made them reflect and that they were going to take action. Overall, very positive. I'm still waiting for the feedback collected by the organizers in the official form.

A huge hug to the Barcelona Software Crafters team for inviting me and for creating such a special community. The atmosphere, the conversations, and the energy were incredible.

If you were at the talk, write to me! I'd love to keep talking, answer questions, or share ideas.

If you missed it, take a look at the slides and share them with anyone you think might find them useful 😉

See you at the next one. Let's keep learning in community!

Saturday, October 11, 2025

Good talks/podcasts (Oct I) / AI & AI - Augmented Coding Edition!

These are the best podcasts and talks I've seen or listened to recently. Reminder: all of these talks are interesting, even if you only listen to them.

You can now explore all the recommended talks and podcasts interactively on the new site, which lets you:
  • 🏷️ Browse talks by topic
  • 👤 Filter by speaker
  • 🎤 Search by conference
  • 📅 Navigate by year

Feedback welcome!
Your feedback and suggestions are highly appreciated to help improve the site and content. Feel free to contribute or share your thoughts!
Related: