Why Your Ideas Die in Planning Meetings
The silence that kills good ideas
One morning, I sat in yet another meeting picking over a backfill gone wrong. We had just spent two weeks backfilling a table, only to discover the data was riddled with quality issues. Even once we fixed the root cause, re-running the backfill would take another two weeks. There had to be a better way.
“So, what do we think? Give me your best ideas for tackling this.”
Silence.
Two senior engineers stared at their laptops. One analyst started typing what looked suspiciously like Microsoft Teams messages. I knew for a fact that at least two people in that room had brilliant, half-formed thoughts about the problem. I’d heard them discussing it the day before.
But in this moment? Nothing.
This pattern repeats in data teams everywhere, and it’s not because people lack ideas. It’s because the way we ask for ideas practically guarantees we’ll never hear the good ones. The research on collaborative ideation reveals something that should fundamentally change how leaders run their teams: the difference between “give me your best ideas” and “give me all your ideas” isn’t just semantic—it’s the difference between innovation and stagnation.
What psychological safety actually means for data teams
Amy Edmondson’s research at Harvard Business School established that psychological safety—“a shared belief that the team is safe for interpersonal risk taking”—is the essential precondition for teams to function effectively. But here’s what made her findings particularly striking: in a study of 51 work teams, she discovered that team confidence alone didn’t predict learning behaviors when you controlled for safety.
People need to feel that speaking up won’t damage their standing before their capability matters.
This has profound implications.
Think about your last sprint planning meeting. Who did most of the talking? If you’re like most organizations, it was probably the most senior person in the room—the staff engineer, the principal data scientist, or the engineering manager. Everyone else contributed sporadically, if at all.
But that senior person doesn’t have all the answers. The analyst who joined three months ago might have spotted a pattern in the data that the ten-year veteran missed, precisely because the analyst is looking at it with fresh eyes. The data engineer who’s quiet in meetings might have implemented the exact solution we need at their previous company. The junior scientist might have read a recent paper that completely reframes the problem.
You’re not hearing any of this because they’re doing what Edmondson calls the “interpersonal risk calculus”—silently weighing whether the perceived benefits of speaking up outweigh the risks of looking incompetent, being rejected, or appearing ignorant.
When safety is low, self-protective filtering dominates. Only “safe,” conventional ideas get shared. The unconventional, half-formed, potentially breakthrough thoughts stay locked in people’s heads.
Kerry Patterson’s research on crucial conversations provides a useful framework for understanding what’s happening here. He calls it the Pool of Shared Meaning—the collective understanding that emerges when people openly share their views. When everyone contributes their complete thinking, the pool is full and decisions are stronger. When safety is low and people self-censor, the pool stays shallow.
In data teams, this shows up constantly. Someone notices that our data quality problems stem from misaligned incentives between product and engineering, but sharing that observation feels risky—it implies criticism of another team. Someone else has experience with a similar architecture problem at their previous company, but they’re only three months in and worry about seeming presumptuous. A third person has an unconventional idea that might actually work, but it would require explaining a complex technical concept and they’re not sure they can articulate it clearly under pressure.
Each person performs their own silent risk calculus and decides to withhold. The pool stays shallow. The meeting ends with only the safest, most obvious ideas on the table. And the best solution—which probably required combining insights from all three of those people—never emerges.
I’ve seen this play out. An engineer might notice that the root cause of data quality issues isn’t technical at all—it’s that product managers keep requesting “quick one-off reports” that bypass all our governance processes. But suggesting that we need to have a difficult conversation with product leadership? That requires tremendous psychological safety. Easier to just say “we should add more validation checks to the pipeline” and move on.
Why “give me your best ideas” is the worst possible question
Here’s something that surprised me when I first encountered the research: asking people for their “best” ideas actually produces worse outcomes than asking for “all” ideas.
It seems counterintuitive. Surely asking for best ideas filters out the noise and gets straight to quality, right?
Wrong.
When you ask for “best ideas,” you trigger what psychologists call evaluation apprehension. People must judge their own ideas before sharing them, which activates fear of judgment. In studies, participants in high-apprehension conditions generated significantly fewer ideas (about 21 ideas on average) compared to low-apprehension conditions (about 33 ideas)—and explored fewer idea categories entirely.
The request “give me your best ideas” does several harmful things simultaneously:
First, it forces people to evaluate while they’re generating. This is cognitively expensive: part of their mental resources goes to thinking up solutions, while another part is busy playing judge and jury on those solutions before they’re even fully formed.
Second, it signals that quality judgment happens before sharing, which means people assume incomplete or uncertain ideas aren’t welcome. The very thoughts that might serve as stepping stones to breakthrough insights get filtered out before they reach the group.
Third, it creates a high-stakes environment. If you’re only sharing your “best” ideas, you’re putting your judgment on display. What if the group disagrees? What if someone senior thinks your “best” idea is actually terrible?
There’s a psychological mechanism at work here that Daniel Pink calls the Sawyer Effect, from Mark Twain’s observation that “work consists of whatever a body is obliged to do, and play consists of whatever a body is not obliged to do.” Pink’s research on motivation shows that external evaluation can transform inherently interesting tasks into drudgery—turning play into work.
When you ask for “best ideas,” you’re asking people to apply external evaluation to their own thinking. What was potentially an enjoyable creative exercise—“how might we solve this interesting problem?”—becomes a performance task with judgment attached. The intrinsic motivation to explore solutions gets crowded out by the extrinsic pressure to look competent.
This is particularly damaging for the kind of creative problem-solving data teams do daily. We’re not assembly line workers performing routine tasks—we’re solving novel problems that require genuine creativity. The Federal Reserve actually studied this: they found that offering higher rewards for cognitive tasks didn’t improve performance. In fact, those offered the highest bonuses performed the worst. When stakes are high and judgment is involved, external pressure actively harms the kind of thinking we need.
I’ve seen this play out several times. One team lead asked everyone to “bring your best approach for data migration from our monolithic data warehouse to a lake house architecture.” People showed up to the next meeting with carefully prepared ideas, each advocating for their single preferred solution. Most of the proposals were variations on the same approach; none of them looked at the problem differently.

The meeting turned into positional warfare. People defended their chosen approach, pointed out flaws in alternatives, and nobody budged. We spent three weeks in analysis paralysis before a decision was finally made just to end the stalemate.

Contrast that with what happened when, instead of asking for “best approaches,” I asked the team to generate every possible way we could solve the problem, no matter how impractical or half-baked: “Give me all your ideas—the good, the bad, the weird, the incomplete.”

We filled three whiteboards. Someone suggested adopting entirely new technology. Someone proposed a data mesh approach. One engineer threw out “what if we just stayed with the data warehouse and optimized it better?” Another suggested a serverless approach using AWS Glue and Athena. Someone jokingly said “rewrite everything with heavy partitioning.”
That last “joke” actually sparked a serious conversation. Turned out our issues weren’t really about warehouse versus lake house at all—they were about query patterns and data organization. We ended up with a solution that kept our existing warehouse but completely restructured how we modeled data.
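To make that concrete, here’s a minimal PySpark sketch of the kind of reorganization involved. Everything in it is hypothetical (the paths, the `event_date` column), but it illustrates the underlying idea: when the physical layout matches the columns queries actually filter on, the engine skips most of the table instead of scanning it.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical example: rewrite a large events table so its physical
# layout matches the column most queries filter on.
events = spark.read.parquet("s3://warehouse/events/")  # hypothetical path

(
    events.write
    .partitionBy("event_date")  # one directory per date
    .mode("overwrite")
    .parquet("s3://warehouse/events_by_date/")
)

# Queries that filter on event_date now touch only the matching
# partitions instead of scanning the entire table.
recent = (
    spark.read.parquet("s3://warehouse/events_by_date/")
    .where("event_date >= '2024-01-01'")
)
```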
We never would have arrived at that solution if people had been filtering for their “best” ideas before speaking.
Quantity unlocks quality through cognitive mechanisms you can actually use
The counterintuitive finding that quantity breeds quality is now well-established in cognitive science, and it has immediate practical implications for how data teams approach problem-solving.
The mechanism works like this: when you start generating ideas, your brain naturally accesses the most available associations first—which are typically the most obvious and conventional solutions. The really original ideas live in more remote semantic connections that only get accessed after you’ve exhausted the obvious options.
Research by Nijstad and Stroebe explains this through their Search for Ideas in Associative Memory (SIAM) model. Idea generation happens in two stages: first, a search cue activates a knowledge structure in long-term memory; then, features of that structure combine to produce specific ideas. The crucial insight is that initial ideas access what’s most readily available. Later ideas, after depleting these obvious options, require exploring more unusual mental paths.
This produces what researchers call the serial order effect: as people generate more ideas, later ideas become increasingly original.
I use this principle deliberately now when working with teams. Here’s a practical example:
When we were trying to improve our data backfill mechanism, I asked the team to come up with 15 different approaches for optimizing or improving how we backfill tables with data. The idea was to shorten the feedback loop and surface issues before they reached production. Not “a few good approaches”—specifically 15.

The first five were predictable: a cyclic job that runs over each date; loading a week of data at a time and then running tests.

Ideas six through ten got more interesting: an insert-only table so that historical data could be batch-loaded in one go; synthetic test data that exercises a longer span of history.

Ideas eleven through fifteen got genuinely creative: building on the insert-only approach, then compressing the timelines and virtually end-dating the records.

We ended up implementing a combination: rebuilding history as an insert-only spine of data, then compressing the timeline wherever a record’s row hash was identical to the previous record’s.

What was remarkable was that not only did the new approach take far less time to run (about an hour to execute), but running one query instead of thousands also cost dramatically less in compute.
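For illustration, here is a minimal pandas sketch of that compression step. The column names (`entity_id`, `snapshot_date`, `row_hash`) are assumptions, not our production schema; the point is the collapsing logic: consecutive snapshots whose row hashes match get folded into a single record with virtual effective dates.

```python
import pandas as pd

def compress_timeline(snapshots: pd.DataFrame) -> pd.DataFrame:
    """Fold runs of consecutive snapshots with identical row hashes into
    one record per run, with virtual effective_from/effective_to dates.
    Column names are hypothetical."""
    df = snapshots.sort_values(["entity_id", "snapshot_date"])
    # A new run starts whenever the hash differs from the previous
    # snapshot of the same entity (the first snapshot always starts one).
    new_run = df.groupby("entity_id")["row_hash"].shift().ne(df["row_hash"])
    df = df.assign(run_id=new_run.cumsum())
    # Collapse each run to a single virtually end-dated record.
    return (
        df.groupby(["entity_id", "run_id"], as_index=False)
        .agg(
            effective_from=("snapshot_date", "min"),
            effective_to=("snapshot_date", "max"),
            row_hash=("row_hash", "first"),
        )
        .drop(columns="run_id")
    )
```

In a warehouse, the same logic translates to a window function comparing each row’s hash to the previous row’s, followed by a group-by on the resulting run identifier.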
This works because high quantity targets provide psychological permission. When the goal is 15 ideas, it legitimizes including uncertain, incomplete, or seemingly foolish contributions. Nobody expects all 15 to be brilliant—which paradoxically enables some of them to be exactly that.
The IKEA effect explains why imposed solutions fail
Here’s something you already know intuitively: people are far more committed to solutions they helped create than solutions handed to them. What you might not know is just how powerful this effect is, or the psychological mechanisms that drive it.
Research calls this the IKEA effect—consumers place disproportionately high value on products they partially created, even when those products are objectively inferior to professionally made alternatives. In studies, people valued self-assembled furniture about 63% higher than identical pre-assembled items.
The mechanism is what psychologists call psychological ownership. When you invest effort in creating something, it becomes part of your extended self. The idea is no longer “the company’s data quality initiative”—it’s “our data quality initiative that we designed together.”
This explains patterns I see repeatedly in organizations:
A new senior leader comes in and mandates that all teams must adopt a particular framework. Teams comply minimally, work around it when possible, and resist at every opportunity. Two years later, the leader leaves, and the framework dies immediately.
Contrast that with a different organization where the data team spent three months collaboratively designing their own framework. They held working sessions. They debated principles. They argued about where to draw boundaries between centralized control and team autonomy. The resulting framework wasn’t objectively better than the mandated one—arguably it was less sophisticated. But adoption was near-universal because people felt ownership of it.
People don’t just accept ideas they helped create—they champion them.
What actually works: practical patterns for data teams
After reviewing all this research and reflecting on what I’ve observed across multiple data teams, I see certain patterns emerge consistently.
Create explicit psychological safety through leader behavior. This isn’t about being nice or avoiding difficult conversations. It’s about modeling specific behaviors: openly acknowledging your own mistakes, asking for team input in group settings and responding positively even to challenging questions, treating failures as learning opportunities rather than blame opportunities.
In practice, this means: when a data pipeline fails, your first question is “what can we learn?” not “who’s responsible?” When someone junior proposes an idea that won’t work, you explore why they thought it might before explaining the problems. When you make a decision that turns out wrong, you say so explicitly rather than quietly changing direction.
Structure ideation sessions to separate generation from evaluation. Set high quantity targets, ask for “all ideas” not “best ideas,” use anonymous contribution methods when appropriate, actively suppress evaluation during the generation phase—even subtle head shakes or skeptical looks kill psychological safety.
Manage hierarchy flexibly across phases. Here’s something that took me too long to understand: power dynamics exist in every meeting whether we acknowledge them or not. Priya Parker’s research on effective gatherings makes this point forcefully—ignoring power dynamics doesn’t make them disappear, it just means they operate invisibly and unmanaged.
In data teams, hierarchy comes from multiple sources. There’s the obvious organizational hierarchy—staff engineers outrank senior engineers who outrank mid-level engineers. But there’s also expertise hierarchy (the person who built the system has more authority than the person who joined yesterday), social capital (some people have more trust and relationships), and communication skill (articulate people get more airtime regardless of title).
When we pretend everyone’s voice carries equal weight, we’re not creating equality—we’re just refusing to acknowledge reality. The senior architect’s skeptical look still kills ideas. The staff engineer’s casual “I don’t think that’ll work” still shuts down exploration. The difference is whether we’re managing these dynamics intentionally or letting them operate invisibly.
Managing hierarchy flexibly means acknowledging power exists and deliberately adjusting it for different phases of work. During idea generation, we want to minimize hierarchy’s influence. During evaluation and decision-making, we can leverage senior judgment appropriately. But we have to be explicit about the shift.
Alison Wood Brooks’ research on conversation dynamics identifies specific behaviors that high-status members must practice to create genuine safety in group settings. The problem isn’t just that senior people dominate airtime—though they do—it’s that high status actually inhibits perspective-taking. When you’re senior in a group, you naturally become less attentive to others’ viewpoints and less careful with your language.
This means senior engineers and managers need to work actively against their default behaviors. Brooks recommends:
Acknowledge junior members by name before they speak and after they contribute. “Sarah, I’m curious what you think about this” and “That point Sarah made earlier about partitioning strategies is worth exploring” both signal that contributions from less senior people matter.
Be vulnerable about your own failures early in the discussion. When a principal engineer says “I tried this approach last year and completely misjudged the performance implications,” it reduces everyone else’s fear of proposing imperfect ideas. Vulnerability from the top of the hierarchy creates permission for uncertainty throughout the group.
Use genuinely inclusive eye contact. Don’t just look at other senior people or the most vocal contributors. Make eye contact with the quiet analyst, the new hire, the contractor who hasn’t spoken yet. Your attention signals whose contributions you value.
Sometimes just get out of the way. Stop talking. Give others space. If you’re the most senior person and you’ve spoken three times while others have spoken once or not at all, you’re probably dominating even if you don’t feel like you are.
I’ve started doing something deliberate in architecture discussions: I ask a question to frame the problem, then I explicitly say “I’m going to listen for the first 15 minutes and not share my thoughts yet.” It feels uncomfortable—I usually have opinions and I’m used to sharing them. But the quality of the solutions that emerge when I’m not steering the conversation is consistently better than what I would have proposed initially.
Build genuine ownership through participatory design. Don’t ask for input on your pre-decided solution—that’s performative participation. Create actual joint solution development where the outcome isn’t predetermined. Be willing to implement approaches that differ from what you would have chosen, as long as people genuinely helped design them and they will work in your context.
Develop champions, don’t just appoint them. Champions need belief in the innovation, organizational support, time to help others, experience implementing what they’re championing, and self-efficacy in leading change. You can’t just tap someone’s shoulder and expect them to magically become an effective change agent.
Run pathfinder projects before mandates. Let teams experiment with new approaches in low-risk contexts. Make results visible to others. Create opportunities for interested teams to try things without committing to wholesale adoption. NASA’s approach to Model-Based Systems Engineering provides the template: pathfinders demonstrate value, reduce perceived risk, and create internal proof points.
Respect the evidence-based culture of technical teams. Engineers want to see proof that something works before investing effort. Don’t appeal to authority or best practices—run experiments, collect data, show concrete outcomes. When adopting a new tool, the question “has anyone actually measured the improvement?” should have a data-driven answer.
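As a sketch of what that looks like in practice, here is a small timing harness. The function names in the usage comment are hypothetical stand-ins for whichever two approaches are being compared; the point is that measurement should be cheap enough that nobody skips it.

```python
import statistics
import time

def measure(fn, runs: int = 10) -> dict:
    """Time a candidate implementation over several runs so the
    adoption debate can be settled with numbers, not opinions."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()  # the workload under test
        timings.append(time.perf_counter() - start)
    return {
        "median_s": statistics.median(timings),
        "stdev_s": statistics.stdev(timings),
    }

# Hypothetical usage: compare the current pipeline step against the
# proposed replacement before asking anyone to adopt it.
# print(measure(run_current_pipeline))
# print(measure(run_proposed_pipeline))
```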
I’ve used these patterns to implement changes ranging from adopting dbt to building data contracts to migrating cloud platforms to establishing data quality frameworks. They work consistently—not because they’re revolutionary, but because they align with how humans actually engage with ideas and change.
The fundamental insight is this: process shapes ownership, ownership shapes commitment, and commitment shapes outcomes.
The compounding effects you actually see
Here’s what I’ve observed when teams get this right versus when they don’t.
High psychological safety teams surface problems early before they become critical, explore diverse solution approaches, help each other debug issues without fear of judgment, openly discuss failures and what was learned, share knowledge freely across team boundaries, try new tools and approaches without waiting for mandates, and actively participate in architectural decisions.
Low psychological safety teams hide problems until they explode, implement the first obvious solution without exploring alternatives, avoid asking for help, repeat the same mistakes, hoard knowledge as job security, wait for explicit direction before trying anything new, and defer all decisions to the most senior person.
The performance gap is enormous. Google’s Project Aristotle identified psychological safety as the strongest predictor of team effectiveness, and the DORA research on delivery performance links low-safety cultures to lower throughput, slower recovery from failures, and higher change failure rates.
In data teams specifically, I’ve seen the impacts manifest as: technical debt accumulation (engineers fear reporting that shortcuts were taken), narrow solution spaces (teams explore fewer alternatives because suggesting unconventional approaches feels risky), knowledge silos (learning isn’t shared because it provides competitive advantage within the team), and review avoidance (discomfort with feedback prevents thorough code reviews and pair programming).
The mechanisms reinforce each other. Psychological safety enables idea sharing. Quantity targets reduce remaining evaluation apprehension. Removing hierarchy during generation eliminates authority-triggered self-censorship. Participation creates ownership. Ownership drives championing. Champions spread adoption. Successful adoption reinforces psychological safety for the next change.
Each mechanism matters independently, but the compounding effects explain why some teams achieve dramatically better outcomes than others.
I watched this play out when comparing two data teams within the same company. Both had similar talent, similar tools, similar organizational challenges. But one team had a leader who’d internalized these principles, and one hadn’t.
The first team ran weekly “all ideas” sessions when facing problems. They explicitly managed hierarchy—senior people sometimes sat out of initial brainstorming. They built solutions collaboratively. They developed internal champions who helped spread new practices. When they adopted new tools, it was because teams had experimented and demonstrated value.
The second team operated top-down. The lead made architectural decisions and communicated them. People implemented what was asked. Adoption of new practices happened when mandated. Problems surfaced late because people were reluctant to admit issues.
After 18 months, the performance gap was stark. The first team delivered more features, had higher data quality metrics, lower on-call burden, better retention, and higher engagement scores. The second team struggled with all of these, and turnover was high.
The difference wasn’t the people—it was the process for how ideas were generated, evaluated, and implemented.
What this means for your team Monday morning
You don’t need to transform your entire organization overnight. You can start applying these principles immediately in small ways.
Next time you’re planning a significant change—adopting a new tool, migrating architecture, implementing new processes—try this:
Before announcing the approach, bring the team together and ask them to generate solutions. Set a specific quantity target high enough to make people uncomfortable. Ask for “all ideas” not “best ideas.” Explicitly suspend judgment during generation.
If there’s significant hierarchy in the room, manage it. Have senior people contribute last, or run initial brainstorming without them present. Try anonymous collection methods.
Take the ideas seriously. Not every idea will be implementable, but engage with them genuinely. Build the final solution collaboratively from the ideas generated, not from your pre-decided approach with some team input sprinkled on top.
Identify potential champions—people who believe in the solution (because they helped create it), have experience with the domain, are willing to help others, and are trusted by peers. Support them with time, resources, and organizational backing.
Run pathfinder projects before mandates. Let interested teams try the approach in low-risk contexts. Make results visible. Create proof points.
You’ll likely be surprised by both the quality of solutions generated and the enthusiasm for implementing them.
The research suggests not that technical teams need different approaches than other teams, but that they may be especially well-suited to participatory methods when those methods are implemented authentically rather than performatively. Evidence-based culture, distributed expertise, and autonomy expectations all align with genuine collaborative ideation.
But the key word is genuine. Teams can smell performative participation from a mile away. If you’re asking for input but have already decided the answer, people know. If you’re creating the appearance of collaboration while maintaining top-down control, it’s worse than just being explicitly directive.
The fundamental shift is from “how do I get people to adopt my solution?” to “how do I create conditions where we develop better solutions together?”
That shift is uncomfortable for many leaders. It requires genuine sharing of control, tolerance for messiness in the process, and comfort with outcomes that differ from what you would have chosen alone.
But if you want your data team to generate better ideas, adopt changes more willingly, and sustain improvements longer—the research is clear about what works.
The question isn’t whether these approaches are effective. The question is whether you’re willing to use them.
