Saturday, June 28, 2025

Talk: "El coste oculto de la complejidad: reconstruyendo una Data Platform" (The hidden cost of complexity: rebuilding a Data Platform)

Yesterday I had the pleasure of giving this talk at Pamplona Software Crafters 2025, organized by 540deg. It was an incredible experience: two days (June 26 and 27) full of energy, learning, and a great atmosphere at the Castillo de Gorraiz.

I had a great time. I came home with my head full of ideas, eager to try new approaches and, above all, happy to have caught up with old friends and met new people from our community.

Although for logistical reasons I couldn't enjoy the whole conference, what I did experience was absolutely spectacular.

🔗 If you prefer to view it directly on Google Slides, here is the link: El coste oculto de la complejidad

What is the talk about?

In data platforms, invisible complexity can be a huge drag: teams stuck firefighting, costly operations, and little capacity to innovate. In this talk I explain how, through a real case, we managed to make that "hidden cost" visible and reduce it:

  • Teams trapped by inherited technical decisions.
  • Lack of fast feedback.
  • A culture focused on control rather than learning.

We applied Lean and XP principles to the data world, even with its constraints. The result: a simpler, more resilient platform, aligned with what really matters to the business. All of it explained with its challenges and its wins.

What I noticed in the audience

After the talk I received a lot of congratulations. People told me they appreciated how honest it was: I described the real difficulties and the rough moments without sugarcoating them. It was great to feel that connection and to know that the transparency resonated.

A huge thank-you to 540deg and the Pamplona Software Crafters team for inviting me and putting together such a great experience. And also to the open-space organizers: the atmosphere of collaboration and community was outstanding.

If you attended the talk, write to me! I'd love to keep the conversation going, answer questions, or share ideas.

If you missed it, take a look at the slides and share them with anyone who might find them useful 😉

See you at the next edition!

Friday, June 20, 2025

YAGNI and the Value of Learning: An Additional Premise

For years, I’ve been applying—almost without realizing it—an extension of the YAGNI principle that I want to share. It’s become part of how we work as a team, a “gut feeling” we’ve refined through experience, and I believe it’s worth making explicit.

Beyond Traditional YAGNI

YAGNI (You Aren't Gonna Need It) is a fundamental principle reminding us not to implement features just because we think we might need them in the future. It's a powerful defense against overengineering and unnecessary complexity.

But there are situations where the premise shifts. Sometimes we know we’re going to need something. It’s not speculation—it’s a reasonable certainty based on product context, business needs, or the natural evolution of the system.

In those cases, our response is not to implement the full solution just because “we know we’ll need it.” Instead, we ask ourselves:

Is there a smaller version of this that lets us learn earlier?

The Value of Learning as a Decision Criterion

The key is to evaluate the learning value of each intermediate step. Not every small step is worth taking—only those that provide meaningful insight into:

  • Actual user behavior
  • The technical feasibility of our approach
  • The validity of our assumptions about the problem
  • The real impact on the metrics we care about

When the cost of that small step is lower than the value of the learning it brings, it’s almost always worth it. This is a practical application of Lean Startup principles to technical development.

Nonlinear Risk: Why Small Steps Matter

There’s another factor reinforcing this approach: risk doesn’t grow linearly with the size of the change. A change that’s twice as big doesn’t carry twice the risk—it carries exponentially more risk.

Small steps allow us to:

  • Catch issues while they’re still manageable and easy to fix
  • Validate assumptions before investing more resources
  • Maintain the ability to pivot without major cost (optionality)
  • Generate more frequent and higher-quality feedback
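
A toy calculation makes the nonlinearity concrete (the defect probability and the bisection model are illustrative assumptions, not numbers from any real system): batching independent changes raises both the chance that a release fails and the effort needed to find the culprit.

```python
import math

def p_release_fails(p_defect: float, n_changes: int) -> float:
    """Probability that a release containing n independent changes
    has at least one defect, assuming each change fails with p_defect."""
    return 1 - (1 - p_defect) ** n_changes

def bisection_steps(n_changes: int) -> int:
    """Rough number of bisection steps needed to locate the offending
    change once a batched release has failed."""
    return max(1, math.ceil(math.log2(n_changes)))

if __name__ == "__main__":
    p = 0.02  # illustrative defect probability per change
    for n in (1, 5, 10, 20, 50):
        print(f"{n:>2} changes -> "
              f"P(failure) = {p_release_fails(p, n):.2f}, "
              f"~{bisection_steps(n)} bisection steps to isolate the cause")
```

Shipping one change at a time keeps both numbers near their minimum, which is the "small safe steps" argument expressed numerically.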

How We Apply This in Practice

We’re quite radical about this approach. We aim to get product changes to users within 1–1.5 days, and within that cycle, we ship even smaller technical changes to production. These micro-changes give us valuable information about the “how” while we continue refining the “what.”

Our mental process is almost instinctive: whenever a need arises, we consider multiple options—some that others might call “hacky”—and always choose the smallest possible step, no matter how strange it may seem.

We use techniques like Gojko Adzic’s hamburger method to slice functionality, but we go even further. We constantly ask ourselves:

  • “Can we start with a hardcoded version to validate the UX?”
  • “What if we begin with a manually uploaded CSV before building an automated integration?”
  • “Can we simulate this feature with manual config while we learn the real flow?”
  • “What if we do it just for one user or a specific case first?”

This isn’t about being naive about future needs. It’s about being smart about how we get there. Each micro-step gives us signals about whether we’re going in the right direction, both technically and functionally. And when something doesn’t work as expected, the cost to pivot is minimal.

This obsession with the smallest possible step not only reduces risk, it also accelerates real learning about the problem we’re solving and the behavior of the solution we’re implementing.
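
As an illustration of this mindset (the RecommendationsSource seam, the names, and the CSV format are hypothetical, not taken from any real project mentioned in the post), a "hardcoded first, CSV later" slicing might look like this:

```python
from __future__ import annotations

import csv
from typing import Protocol


class RecommendationsSource(Protocol):
    """Tiny seam so callers don't care which slice is behind it."""

    def top_items(self, user_id: str) -> list[str]: ...


class HardcodedRecommendations:
    """First micro-step: a fixed list, enough to validate the UX and the copy."""

    def top_items(self, user_id: str) -> list[str]:
        return ["getting-started-guide", "pricing-faq", "api-quickstart"]


class CsvRecommendations:
    """Second micro-step: a manually uploaded CSV, still no automated pipeline."""

    def __init__(self, path: str) -> None:
        self._path = path

    def top_items(self, user_id: str) -> list[str]:
        with open(self._path, newline="") as handle:
            return [
                row["item_id"]
                for row in csv.DictReader(handle)
                if row["user_id"] == user_id
            ]


def render_recommendations(source: RecommendationsSource, user_id: str) -> str:
    """The calling code stays identical across every slice."""
    return "Recommended for you: " + ", ".join(source.top_items(user_id))
```

Each slice is shippable in hours, each one teaches us something (copy, layout, data quality), and swapping in the real integration later doesn't touch the callers.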

Connection with Other Premises

This way of working naturally aligns with other guiding principles in our approach:

  • Postpone decisions: Small steps allow us to delay irreversible choices until we have more information
  • Small safe steps: We work incrementally to reduce risk and increase learning
  • Software as a means: We focus on impact, not on building the most complete solution upfront
  • Optimize for feedback: We prioritize fast learning over perfect implementation, because we know we don’t have all the answers—we need to discover them

A Premise in Evolution

Like all the premises we use, this isn’t universal or applicable in every context. But in software product development, where uncertainty is high and the cost of mistakes can be significant, it has proven extremely valuable.

It’s part of our default way of working: we always look for the smallest step that lets us learn something useful before committing to the full step. And when that learning has value, it’s almost always worth the detour.

Have you experienced something similar in your work? How do you evaluate the trade-off between implementing something fully and taking intermediate steps to learn?


Sunday, June 15, 2025

Built a Custom GPT to Help Teams Ship Smaller, Safer, Faster

Most teams build too much, too early, with too much anxiety. They optimize for perfect architecture before users, comprehensive features before learning, elaborate processes before understanding real constraints.

The result? Endless discussions, delayed releases, building the wrong thing.

So I built: 👉 eferro Lean – your no-BS delivery copilot

A Custom GPT that works with any product development artifact—PRDs, tickets, code reviews, roadmaps, architecture docs—and asks the uncomfortable, helpful questions:
  • "What's the smallest shippable version?"
  • "Do we actually need this complexity right now?"
  • "What if we postponed this decision?"
  • "How can we make this change in smaller, safer steps?"
Perfect for anyone building products: developers, PMs, designers, architects, team leads.


Use it to:
  • Slice big ideas into vertical experiments and safe technical steps
  • Plan parallel changes, expand-and-contract migrations (see the sketch after this list), or branch-by-abstraction
  • Challenge bloated PRDs or over-engineered solutions
  • Turn risky releases into incremental, reversible deployments
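
To make one of those terms concrete, here is a minimal sketch of an expand-and-contract (parallel change) migration; the column names and the DB-API style connection are hypothetical, and the point is simply that every intermediate step is deployable and reversible on its own:

```python
# Minimal sketch of expand-and-contract for renaming orders.client_name to
# orders.customer_name. Column names and the db connection are hypothetical.

# Step 1 (expand): add the new column, write both fields on every save,
# and backfill existing rows in the background.
def save_order(db, order_id: int, name: str) -> None:
    db.execute(
        "UPDATE orders SET client_name = ?, customer_name = ? WHERE id = ?",
        (name, name, order_id),
    )

# Step 2 (migrate reads): once the backfill is done, switch readers to the
# new column. Old and new writers can coexist during the rollout.
def load_customer_name(db, order_id: int) -> str:
    row = db.execute(
        "SELECT customer_name FROM orders WHERE id = ?", (order_id,)
    ).fetchone()
    return row[0]

# Step 3 (contract): stop writing client_name and drop the column in a
# later, independent release once nothing reads it anymore.
```

Branch by abstraction applies the same idea at the code level: introduce a seam, move callers over gradually, and remove the old path last.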
It challenges assumptions, slices features into experiments, and guides you toward the last responsible moment for decisions. Questions that create momentum instead of paralysis.

Software is a learning exercise. Every feature is an experiment. The faster we test hypotheses safely, the faster we learn what creates value.

No tracking, no upsell, no agenda. My only intention is to share what I've learned and keep learning from the community.

If it helps one team ship better—with less stress, more learning—that's enough.

Thursday, June 12, 2025

Good talks/podcasts (Jun)

These are the best podcasts/talks I've seen/listened to recently:
  • Data - The Land DevOps Forgot 🔗 talk notes (Michael T. Nygard) [Architecture, Data Engineering, Devops, Platform] [Duration: 00:47] (⭐⭐⭐⭐⭐) This talk offers a critical look at why the analytical data world is "the land DevOps forgot," and presents Data Mesh as a paradigm shift to enable decentralized, autonomous data operations, emphasizing that successful adoption requires significant organizational and cultural change.
  • Jeff Bezos explains one-way door decisions and two-way door decisions 🔗 talk notes (Jeff Bezos) [Management, Mental models] [Duration: 00:03] (⭐⭐⭐⭐⭐) Jeff Bezos explains his mental model of two-way door (reversible) and one-way door (irreversible) decisions, highlighting how to apply different decision-making processes for each in organizations.
  • TDD, AI agents and coding with Kent Beck 🔗 talk notes (Kent Beck, Gergely Orosz) [AI, XP, tdd] [Duration: 01:15] Industry legend Kent Beck, creator of XP and TDD, shares insights on the evolution of Agile, Extreme Programming, and Test-Driven Development, alongside his current experience of "most fun ever" coding with AI agents.
Reminder: all of these talks are worth your time, even if you only listen to the audio.

You can now explore all the recommended talks and podcasts interactively on the new site, which lets you:
  • 🏷️ Browse talks by topic
  • 👤 Filter by speaker
  • 🎤 Search by conference
  • 📅 Navigate by year
Feedback Welcome!
Your feedback and suggestions are highly appreciated to help improve the site and content. Feel free to contribute or share your thoughts!

Sunday, June 08, 2025

Lean Software Development: Overcoming resistance and creating conditions for quality

Fifth article on quality in Lean Software Development. In previous posts, we talked about how to build with quality through mistakes, technical design, collaboration, and visibility. Now we address a key topic: why many organizations still don't work this way, and what we can do to change that.

In the world of software development, there is a persistent myth: that quality and speed are opposing forces, and that one must be sacrificed to obtain the other. However, the reality, as demonstrated by the DORA reports and the experience of high-performing teams, is that quality is the most direct and sustainable path to the highest possible speed.

There is a fundamental paradox: the more we obsess over immediate speed at the expense of quality, the slower we become. Teams that accumulate technical debt, unresolved bugs, or hard-to-maintain code make each new feature exponentially more expensive. What seemed like a "pragmatic" decision becomes a burden that slows down the entire system.

True pragmatism aligns with Lean principles: postponing decisions until sufficient information is available, applying YAGNI (You Aren't Gonna Need It), keeping design simple, and constantly iterating to have the simplest version of the system that meets current needs. That is being truly pragmatic.

It’s important to understand that in the age we live in—of continuous change and software adaptation—when we talk about the “medium term” we actually mean a few weeks. We are not talking about months or years to see the benefits of quality. The effects of working with quality are noticed very quickly, and that supposed short-term trade-off only makes sense for throwaway software.

In Lean thinking, the way to have more impact is to minimize waste, with lack of quality being one of the main wastes in software. So the winning combination in software development is to maximize impact, minimize the amount of software generated, and do it with quality in the process. The approach is not to do things worse or faster, but to be smart and disciplined to achieve more impact with less, and with quality. This is the true way to go fast, achieve maximum impact, and be a sustainable high-performing team.

Common reasons for not working this way (frequent resistances)

Pressure for short-term speed

"We don't have time to write tests," "it has to be delivered now." This is the classic one. However, as we've already seen, well-integrated tests in the development flow allow faster progress at lower cost in the medium term.

In environments where immediate output is valued, investing in quality at the start may seem slower, but it prevents a greater slowdown even in the short term. We're not talking about benefits that take months to arrive—in a matter of weeks you can notice the difference when technical debt doesn't accumulate and waste is kept under control. Lean practices are often misinterpreted as an initial brake, but their true value becomes clear when the system starts to fail and the real cost of not having invested in quality becomes evident.

Misalignment between business and technology

If the business only measures visible deliveries (features) and does not understand the value of refactoring, tests, or simple design, perverse incentives arise that push to avoid everything that isn't “visible.”

Here it's necessary to align incentives, showing with data that investing in quality generates higher returns. Moreover, the waste of building unnecessary or misunderstood features skyrockets when this alignment is missing. Let’s not kid ourselves: the fundamental waste in product software development is implementing what’s not needed, and maintaining it for the lifetime of the product. And we already know that the basal cost of software is paid for every feature we keep, not only for the ones that are actually used.

Lack of training or experience

For many people, this way of working is new. They haven’t seen environments with trunk-based development, TDD, or real automation. If they haven’t experienced the benefits, it’s normal for them to distrust or underestimate them. Some of these practices require a significant mindset shift and specific technical skills that take time to develop. Investment in training and mentoring is key to overcoming this initial barrier and building the confidence needed in these methods.

Fear of change

Fear of the unknown is a natural human response. Many teams feel comfortable with their current processes, even if they are inefficient. Changing established routines generates uncertainty and resistance. This fear can manifest as skepticism ("this won’t work here") or even passive sabotage. The transition requires effective leadership, clear communication of expected benefits, and the creation of a safe environment where experimenting with new methods is valued and supported.

Lack of structural quality

Some teams want to work with quality, but they already have a system full of debt, without tests, without confidence. Changing it requires an investment that the organization is often unwilling to make. Here improvement must be incremental, with visible wins: reducing deployment time by 10%, fixing the 3 most critical bugs, etc. Establishing “clean zones” in the code and gradually expanding them can be an effective strategy to regain ground without needing a full rewrite.
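
As a minimal sketch of how a "clean zone" can be enforced rather than just agreed upon (the paths, the line limit, and the use of mypy are illustrative assumptions), a small CI script can apply stricter rules only inside the zone and let the list of zones grow over time:

```python
"""Illustrative CI gate: strict rules apply only inside the 'clean zones'."""
import pathlib
import subprocess
import sys

CLEAN_ZONES = ["src/billing", "src/orders"]  # widened over time, directory by directory
MAX_FILE_LINES = 400                         # example of a stricter local rule

def main() -> int:
    failures = []
    for zone in CLEAN_ZONES:
        for path in pathlib.Path(zone).rglob("*.py"):
            if len(path.read_text().splitlines()) > MAX_FILE_LINES:
                failures.append(f"{path}: exceeds {MAX_FILE_LINES} lines")
        # Run the strict type checker only on clean zones; legacy code is exempt for now.
        result = subprocess.run(["mypy", "--strict", zone], capture_output=True, text=True)
        if result.returncode != 0:
            failures.append(f"{zone}: strict type check failed\n{result.stdout}")
    for failure in failures:
        print(failure)
    return 1 if failures else 0

if __name__ == "__main__":
    sys.exit(main())
```

Legacy code keeps working under the old rules, while every directory added to the list stays protected from regressing.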

Organizational inertia and rigid structures

If teams lack autonomy, if decisions are made top-down without technical feedback, or if release, QA, or security processes are outside the team, it’s hard to apply jidoka or react quickly to problems.

The system inhibits quality, and the waste of time and resources increases exponentially while problems persist.

Culture of blame and punishment

If the organization doesn’t tolerate mistakes, if it looks for culprits instead of causes, or if incidents generate fear instead of learning, errors are hidden instead of made visible. And without visibility, there is no improvement, nor can waste be reduced.

Fear paralyzes innovation, delays problem identification, and hides waste at all levels.


Even if it sounds exaggerated, many organizations face this dilemma when they realize that their way of working is no longer sustainable. Improving requires effort, but not improving has inevitable consequences.

[Image: "Improve or die" meme]


Create the conditions to build with quality

Working with quality, as we've seen throughout this series, does not depend only on tools or individual talent. It is a direct consequence of the environment (system) we build. Quality does not arise spontaneously: it needs space, alignment, and a culture that values it.

From Lean Software Development, we start from one premise: people want to do good work. But if incentives, habits, and culture don’t support that, even teams with the best intentions will fall into practices that sacrifice quality in favor of urgency, volume, or the appearance of productivity. And this inevitably leads to generating a lot of waste.

“A bad system will beat a good person every time.”
—W. Edwards Deming

As product development leaders, we have a clear responsibility: create the right conditions so that quality is not only possible, but inevitable. This involves intervening in three key dimensions: incentives, work systems, and culture.



Quality doesn’t improve by acting only on the visible. As Donella Meadows well summarized, there are many levels from which to intervene in a system. The deeper the intervention point (mindset, culture, structure), the greater its impact. This framework reminds us that if we want sustainable quality, it's not enough to tweak metrics: we have to transform how we think and how we work.

[Image: Places to Intervene in a System, by Donella Meadows]

Redefine success

Instead of celebrating only the number of features delivered or apparent speed, let's focus on real impact, system sustainability, and the team’s ability to adapt confidently.

Quality is not about delivering more, but about delivering better: with less risk, maintaining a sustainable pace, continuously learning, and better anticipating changes.

Make space for learning and continuous improvement

One of the most common mistakes is to think that Kaizen time is dispensable. But setting aside time to refactor, automate, review processes, or simplify is not a luxury: it’s part of the team’s job and an investment in the system’s health.

To make it possible, we need to introduce intentional slack: planned space to observe, learn, and improve. Without that margin, all the time is spent delivering, and there’s no energy or focus left for Kaizen.

Continuous improvement requires time, attention, and a sustainable rhythm. It's what allows consistent waste reduction.

Take care of team culture

Psychological safety is key. If there is fear of making mistakes or pointing out problems, there will be no jidoka, kaizen, or visibility. Only in an environment where it’s safe to question, explore, and learn without punishment can we detect errors in time and improve together, reducing the waste they generate.

We must also avoid encouraging heroic work: when good outcomes depend solely on someone’s extraordinary effort, it's a sign that the system is failing.

Instead of heroes, we need teams that work sustainably, with processes that ensure continuous and predictable quality. Heroic work is often a chronic waste generator.

Moreover, real autonomy must be granted: choosing technologies, designing testing processes, having a voice in planning, etc. A team with no control over its technical environment, workflow, or how it validates what it builds will hardly be able to guarantee quality.

Autonomy, combined with shared responsibility, is one of the strongest pillars of quality in Lean.

Finally, incentives must be aligned with quality. Recognize and make visible the work that keeps everything flowing: not just new features, but also when technical debt is reduced, the testing process is improved, a production incident is prevented, or a critical system component is simplified.

All of that is also delivered value. And it’s often the most enduring.

How to make quality inevitable: leadership in practice

Making quality possible is not about demanding more effort from teams. It's about changing the system so that working with quality becomes the most natural, simplest, and fastest path. Over the years, I’ve tried to systematize this approach with very concrete decisions. Here are some of them:

  • Reserve space for learning. Actively decide what portion of time is invested in learning. Sometimes it’s training, other times it’s simply asking: “What have you learned? What can you share?”
  • Turn mistakes into collective learning. Introduce blameless postmortems. Lead the first ones, define the process, normalize that errors are not blame, but opportunities for improvement.
  • Lead by example. Apply TDD, evolutionary design, pairing. Be the first to document and act on incidents. Don’t demand what you don’t practice.
  • Introduce Technical Coaching. Learn alongside those who already master practices like TDD or Pair Programming. If possible, bring in experts with real experience.
  • Change the hiring process. Evaluate how people work, not just what they know. Introduce TDD, pairing, collaborative design as part of the process.
  • Reward and make structural improvements visible. Explicitly value what improves quality: debt reduction, better test strategies, simplifications, etc.

This type of leadership, which seeks to change the system to make quality inevitable, is not an isolated intuition. Studies such as those from the DORA report show that transformational leadership, together with Lean practices, has a clear impact on team performance, well-being, and business results.

[Image: Transformational Leadership Impact Model, by DORA / Accelerate]


Lead to make quality inevitable

Building with quality is not just a matter of technical practices: it is, above all, a matter of leadership. Our role as leaders is not to demand quality as if it were an optional extra, but to understand that it is the foundation for sustainable speed, for reducing waste, and for maximizing real impact.

Quality is not a goal or an option: it is the operating system upon which everything else relies. If that system fails, any attempt to move fast leads directly to collapse.

Our job as leaders is to create the conditions where quality does not depend on individual will, but becomes the easiest, fastest, and most natural path. Where building with quality is not a heroic act, but the inevitable one.

Saturday, May 24, 2025

Lean Software Development: Quality through Collaboration and Visibility

Fourth part of the series on quality in Lean Software Development. In the previous post, we discussed how internal and technical quality is key to sustaining external quality and accelerating development.

Quality through collaboration and shared design

An essential part of quality, often underestimated, doesn’t lie in the code or the tools, but in how we work together. In Lean Software Development, errors are not just seen as technical failures, but also as failures in understanding. Many of the defects that reach production aren’t due to poorly written code, but because the code doesn’t solve the right problem, or doesn’t do so in the right way.

That’s why one of the key mechanisms to build with quality is close and continuous collaboration among all the people involved—those who design, develop, test, or speak with users. The earlier we share an understanding of the problem and align expectations, the fewer errors will be introduced into the system. Once again, quality from the start.

Practices like pair programming, ensemble work, using concrete examples in conversations with the business, or co-designing solutions are mechanisms that allow us to detect errors—technical and conceptual—as soon as they appear. In doing so, they enable early intervention aligned with the spirit of jidoka. And we do this naturally, because there are many eyes on the problem, many opportunities to surface misunderstandings.

This collaborative approach also reinforces kaizen, as it facilitates continuous improvement. Ideas are challenged, explained, and refined. The system evolves more coherently because it doesn’t rely on isolated individual decisions, but on shared and distributed knowledge.

Furthermore, collaboration reduces waste: we build what is actually needed, avoid incorrect assumptions, and minimize rework. Solutions tend to be simpler because they’ve been discussed and refined from different perspectives.

Ultimately, if we understand building with quality as avoiding defects, reducing waste, and maintaining a healthy system we can evolve with confidence, then collaboration is not optional. It is one of the most powerful ways to prevent errors before they become code.

The value of making quality (or its absence) visible

One of Lean’s fundamental principles is to make problems visible. If we can’t see a problem, we can’t improve it. And if quality isn’t visible to the team, to decision-makers, or to those supporting the product, then it’s unlikely to become a priority.

That’s why, in Lean Software Development, it’s essential to make the real state of quality visible at all times. Not only through technical metrics, but also with mechanisms that make it obvious when something is failing, when we’re accumulating waste, or when we’re risking system stability.

This connects directly to jidoka: any signal of a problem, no matter how small, should stop the flow or at least get our attention. Whether it's a failing test, a monitoring alert, a drop in coverage, or an increase in average bug resolution time—everything should turn on a warning light. The goal is that nothing goes unnoticed so we can act in time.
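
As a minimal sketch of what such a warning light can look like in a delivery pipeline (the signals and thresholds are illustrative assumptions, not from the original series), a single script can stop the flow as soon as any tracked signal degrades:

```python
"""Illustrative 'andon' gate: stop the flow when any quality signal degrades."""
import sys

# Hypothetical signals gathered earlier in the pipeline (tests, coverage, alerts).
signals = {
    "failing_tests": 0,
    "coverage_percent": 87.5,
    "open_critical_alerts": 1,
}

# Each rule returns a problem description, or None when the signal is healthy.
rules = [
    lambda s: "failing tests" if s["failing_tests"] > 0 else None,
    lambda s: "coverage below 85%" if s["coverage_percent"] < 85 else None,
    lambda s: "critical alerts open" if s["open_critical_alerts"] > 0 else None,
]

problems = [msg for rule in rules if (msg := rule(signals)) is not None]
if problems:
    print("Stopping the line:", "; ".join(problems))
    sys.exit(1)  # fail the pipeline so the problem gets attention now
print("All signals healthy, flow continues.")
```

The specific mechanism matters less than the behavior: the pipeline refuses to continue until someone looks at the problem.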

It’s also a constant reinforcement of kaizen: what isn't seen can’t be improved. Making quality—internal and external—visible allows us to make informed decisions about where to focus our improvement efforts. If we notice production defects always come from a certain part of the system, we probably need to strengthen our testing there. If the pace of change slows down, maybe complexity is growing out of control.

There are many ways to make quality visible: from continuous integration dashboards to production alerts, from physical boards with open bugs to regular incident review meetings. The important thing isn’t the tool, but the habit of looking honestly at the state of the system and the process.

Making quality (or its absence) visible also has a cultural impact: it reinforces shared responsibility. If everyone sees there’s a quality issue, it’s easier for everyone to participate in solving it. The invisibility of decay is eliminated, resignation is avoided, and an environment is fostered where problems are tackled as soon as they appear.

Because ultimately, building with quality also means building with transparency.

Quality as an organizational habit

Building with quality isn’t a phase of the process, a task assigned to a specific person, or something you “add at the end.” It’s a way of working, a habit cultivated daily and embedded in everything we do: how we design, how we write code, how we collaborate, how we solve problems, and how we learn.

In Lean Software Development, quality is non-negotiable because it is the foundation of everything else. Without quality, flow breaks down, learning slows, the cost of change rises, and trust disappears. That’s why quality isn’t pursued for technical idealism, but because it’s the most effective way to deliver value continuously and sustainably.

The principles of jidoka, poka-yoke, and kaizen are present in every practice we've mentioned: in automated tests that stop the flow upon failure, in processes that prevent human errors, in the constant improvement of our tools and processes, and in how we treat incidents as learning opportunities.

But none of this works unless it becomes part of the team’s culture. Quality doesn’t emerge by chance or good intentions—it arises when there are concrete practices that support it, when there are shared agreements on how to work, and when the environment reinforces these behaviors over and over. In other words, when there are habits.

And like any habit, it must be trained. It starts with small actions: writing a test before fixing a bug, stopping development to investigate an error, reviewing the design with someone else before implementing. Over time, these actions become the natural way of working. The team gains confidence, the system remains healthy, and problems are addressed quickly and calmly.

In the next and final post, we’ll explore why many organizations still don’t work with quality—even knowing its benefits. We’ll look at the most common resistances and how we can create an environment where quality doesn’t depend on heroic efforts, but becomes a natural consequence of the system.